DETAILED DESCRIPTION

Techniques and mechanisms described herein provide for an intermediate filtering model to reduce the workload of a large language model (LLM) so that a task can be performed within reasonable time and compute constraints. Given a question that must be answered from a large collection of documents or text portions, an intermediate machine learning model is used to first filter the documents or text portions down to those most likely to contain the relevant answer to the question. Then, the LLM may optionally be used to reduce the documents or text portions even further to a smaller set based on relevance. The LLM may then be used to answer the question based on the filtered documents or text portions.

To effectively answer a question based on a large set of documents using conventional techniques, the LLM must read and/or process the text contained in every document to synthesize a complete and accurate answer. However, LLMs have a small context window, meaning that they can only read a limited number of words before forgetting everything that came before. In many systems this context window may be up to 8,000 words long. In addition, LLMs are extremely compute heavy, with some state-of-the-art systems needing up to 50 ms to process a single word. This significant compute load makes any process that involves reading every word in very large corpora of documents, which may have millions or billions of words, prohibitively expensive.

The usage of LLMs for tasks such as question answering from documents is a recent phenomenon. Previous conventional techniques for question answering from documents generally required that the LLM itself be trained specifically for this purpose. For instance, retrieval augmented generation techniques were used. However, given the large size of LLMs and the significant expense (e.g., millions of dollars) involved in training an LLM, retraining an LLM for a specific purpose is impractical. Accordingly, a general purpose LLM trained on a large and general-purpose corpus of documents may be employed.

Other conventional techniques for answering questions from documents have involved breaking the documents of interest into chunks, running the question on each chunk, and then ‘chaining together’ the answers into one cohesive response. While this approach is feasible for small collections of documents, it again becomes intractable for document collections consisting of tens or hundreds of millions of words because the LLM must still read each word in the collection.

In contrast to conventional techniques, techniques and mechanisms described herein can reduce the workload of the LLM by a large factor. For instance, in some embodiments, the workload may be reduced by a factor of 10× to a factor of 1,000×.

In contrast to conventional techniques, techniques and mechanisms described herein can reduce or eliminate the hallucination problems to which LLMs are prone. LLMs tend to create misinformation or false knowledge when generating answers on the fly. Grounding the LLM's answers in a set of documents greatly reduces the hallucination rate.

In contrast to conventional techniques, techniques and mechanisms described herein allow for the use of LLMs to generate answers to questions based on novel information from a document collection. In the absence of such a collection, LLMs can only answer questions based on information they have been trained on (e.g., Wikipedia).
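The two-stage approach can be pictured with a short sketch. The example below is illustrative only: it assumes a pretrained cross-encoder relevance model available through the sentence-transformers package (the model name is an arbitrary choice) and a hypothetical llm_answer() helper that wraps whatever large language model API is in use; neither is prescribed by this description.

```python
# Sketch of the two-stage pipeline: prefilter a large passage collection with a
# smaller relevance model, then let the LLM answer from the filtered subset.
from sentence_transformers import CrossEncoder

def answer_from_corpus(question, passages, llm_answer, top_k=20):
    # Assumed model choice; any passage-relevance scorer could stand in here.
    scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = scorer.predict([(question, p) for p in passages])
    # Keep only the passages most likely to contain the answer.
    ranked = sorted(zip(scores, passages), key=lambda x: x[0], reverse=True)
    candidates = [p for _, p in ranked[:top_k]]
    # The LLM now reads only the filtered candidates rather than the full corpus.
    return llm_answer(question, candidates)
```

Because the cross-encoder is far cheaper to run per passage than the LLM, the expensive model only ever sees the top-k candidates, which is where the workload reduction described above comes from.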
In contrast to conventional techniques, techniques and mechanisms described herein allow for the use of an LLM with extremely large collections of documents, such as thousands, hundreds of thousands, or millions of documents, to answer a single question without retraining the LLM. When using conventional techniques, such a corpus of documents would be intractable to process.

According to various embodiments, techniques and mechanisms described herein provide for novel text generation in domain-specific contexts. A text generation interface system may take as input one or more arbitrary documents, process them via optical text recognition, segment them into portions, and process the segmented text via various tasks based on need. Different workflows are provided for different tasks, and this application describes a number of examples of such workflows. In many workflows, an input document is divided into chunks via a chunking technique. Then, chunks are inserted into prompt templates for processing by a large language model such as the GPT-3 or GPT-4 available from OpenAI. The large language model's response is then parsed and potentially used to trigger additional analysis, such as one or more database searches, one or more additional prompts sent back to the large language model, and/or a response returned to a client machine.

According to various embodiments, techniques and mechanisms described herein provide for retrieval augmented generation. A search is conducted based on a search query. Then, the search results are provided to an artificial intelligence system. The artificial intelligence system then further processes the search results to produce an answer based on those search results. In this context, a large language model may be used to determine the search query, apply one or more filters and/or tags, and/or synthesize potentially many different types of search.

According to various embodiments, techniques and mechanisms described herein provide for a sophisticated document processing pipeline. The pipeline receives one or more input documents, identifies text that should be kept together, identifies extraneous text such as headers, footers, and line numbers, and segments the text accordingly. In this way, the quality of the text provided to the rest of the system is improved.

According to various embodiments, techniques and mechanisms described herein provide for new approaches to text segmentation. Large language models often receive as input a portion of input text and generate in response a portion of output text. In many systems, the large language model imposes a limit on the input text size. Accordingly, in the event that the large language model is asked to summarize a lengthy document, the document may need to be segmented into portions in order to achieve the desired summarization. Conventional text segmentation techniques frequently create divisions in text that negatively affect the performance of the model, particularly in domain-specific contexts such as law. For example, consider a caption page of a legal brief, which includes text in a column on the left that encompasses the parties, text in a column on the right that includes the case number, a title that follows lower on the page, and line numbering on the left.
In such a configuration, the text in the different columns should not be mixed and should be treated separately from the line numbers, while both columns should precede the document title when converting the document to an input query for a large language model. However, conventional techniques would result in these semantically different elements of text being jumbled together, resulting in an uninformative query provided to the large language model and hence a low-quality response. In contrast to these conventional techniques, techniques and mechanisms described herein provide for a pipeline that cleans such raw text so that it can be provided to a large language model.

According to various embodiments, techniques and mechanisms described herein provide for the division of text into chunks, and the incorporation of those chunks into prompts that can be provided to a large language model. For instance, a large language model may impose a limit of 8,193 tokens on a task, including text input, text output, and task instructions. In order to process longer documents, the system may split them. However, splitting a document can easily destroy meaning depending on where and how the document is split. Techniques and mechanisms described herein provide for evenly splitting a document or documents into chunks, and incorporating those chunks into prompts, in ways that retain the semantic content associated with the raw input document or documents.

In some embodiments, techniques and mechanisms described herein may be applied to generate novel text in domain-specific contexts, such as legal analysis. Large language models, while powerful, have a number of drawbacks when used for technical, domain-specific tasks. When using conventional techniques, large language models often invent “facts” that are actually not true. For instance, if asked to summarize the law related to non-obviousness in the patent context, a large language model might easily invent a court case, complete with caption and ruling, that in fact did not occur. In contrast to conventional techniques, techniques and mechanisms described herein provide for the generation of novel text in domain-specific contexts while avoiding such drawbacks.

According to various embodiments, techniques and mechanisms described herein may be used to automate complex, domain-specific tasks that were previously the sole domain of well-trained humans. Moreover, such tasks may be executed in ways that are significantly faster, less expensive, and more auditable than the equivalent tasks performed by humans. For example, a large language model may be employed to produce accurate summaries of legal texts, to perform legal research tasks, to generate legal documents, to generate questions for legal depositions, and the like.

In some embodiments, techniques and mechanisms described herein may be used to divide text into portions while respecting semantic boundaries and simultaneously reducing calls to the large language model. The cost of using many large language models depends on the amount of input and/or output text. Accordingly, techniques and mechanisms described herein provide for reduced overhead associated with prompt instructions while at the same time providing for improved model context to yield an improved response.
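As a rough illustration of chunking under such a token limit, the sketch below packs paragraphs into chunks that respect a budget while keeping paragraphs intact where possible. It is a simplification: a whitespace word count stands in for the model's actual tokenizer, and the reserved headroom for instructions and model output is an assumed figure rather than anything specified by this description.

```python
# Pack paragraphs into chunks under a token budget, keeping paragraphs whole.
def chunk_paragraphs(paragraphs, model_limit=8193, reserved=2000):
    budget = model_limit - reserved  # headroom for the prompt template and the response
    chunks, current, current_len = [], [], 0
    for para in paragraphs:
        length = len(para.split())  # crude stand-in for a real token count
        if current and current_len + length > budget:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(para)
        current_len += length
        # A single paragraph longer than the budget would need the finer-grained
        # splitting described later in this document.
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```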
In some embodiments, techniques and mechanisms described herein may be used to process an arbitrary number of unique documents (e.g., legal documents) that cannot be accurately parsed and processed via existing optical character recognition and text segmentation solutions.

In some embodiments, techniques and mechanisms described herein may be used to link a large language model with a legal research database, allowing the large language model to automatically determine appropriate searches to perform and then ground its responses to a source of truth (e.g., in actual law) so that it does not “hallucinate” a response that is inaccurate.

In some embodiments, techniques and mechanisms described herein provide for specific improvements in the legal domain. For example, tasks that were previously too laborious for attorneys with smaller staffs may now be more easily accomplished. As another example, attorneys may automatically analyze large volumes of documents rather than needing to perform such tasks manually. As another example, text chunking may reduce token overhead and hence cost expended on large language model prompts. As yet another example, text chunking may reduce calls to a large language model, increasing response speed. As still another example, text chunking may increase and preserve context provided to a large language model by dividing text into chunks in semantically meaningful ways.

According to various embodiments, techniques and mechanisms described herein may provide for automated solutions for generating text in accordance with a number of specialized applications. Such applications may include, but are not limited to: simplifying language, generating correspondence, generating a timeline, reviewing documents, editing a contract clause, drafting a contract, performing legal research, preparing for a deposition, drafting legal interrogatories, drafting requests for admission, drafting requests for production, briefing a litigation case, responding to requests for admission, responding to interrogatories, responding to requests for production, analyzing cited authorities, and answering a complaint.

FIG. 1 illustrates a document reduction and analysis overview method 100, performed in accordance with one or more embodiments. In some implementations, the method 100 may be performed at a text generation interface system such as the system 200 shown in FIG. 2. For instance, the method 100 may be performed at the text generation interface system 210.

A request to analyze a plurality of text portions based on a query is received at 102. In some embodiments, the request may be received as part of a chat session in which a text generation system automatically generates responses to text received from a client machine. Alternatively, or additionally, the request may be received in the context of an API call. The request may identify a set of text portions to analyze. The text portions may be documents, document portions, or other passages of natural language text. The text portions may be identified by, for instance, one or more identifiers included with the request. The one or more identifiers may identify, for instance, individual documents or collections of documents available in a text repository. Alternatively, or additionally, text portions may be identified based on query results returned by a database system.

The query is optionally pre-processed at 104 to determine one or more subqueries and/or one or more pretrained classification models.
According to various embodiments, a single query may be divided into multiple subqueries to facilitate more granular analysis. Alternatively, or additionally, one or more text classification models may be pretrained for improved performance. Additional details regarding query pre-processing are discussed with respect to the method 1300 shown in FIG. 13.

A first subset of the text portions that are relevant to the request is determined at 106 based on a machine learning model. In some implementations, identifying the first subset of the text portions may involve applying a cross-encoder or other type of machine learning model to classify a given text portion based on its relevance to the query. Additional details regarding the application of a machine learning model to identify a subset of documents based on relevance are discussed with respect to the method 1400 shown in FIG. 14.

A second subset of the text portions that are relevant to the request is determined at 108 based on communication with a text generation modeling system. In some embodiments, the second subset of the text portions may be identified by providing some or all of the first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query.

A response to the query based on the first and/or second subsets of the text portions is determined at 110. In some embodiments, the response to the query may be determined based on the application of one or more workflows to the first and/or second subsets of documents. The one or more workflows may involve further communication with the text generation modeling system. Additional details regarding the application of the text generation modeling system to optionally further restrict the text portions and to determine a response to the query based on the first and/or second subsets of documents are discussed with respect to the method 1500 shown in FIG. 15.

FIG. 2 illustrates a text generation system 200, configured in accordance with one or more embodiments. The text generation system 200 includes client machines 202 through 204 in communication with a text generation interface system 210, which in turn is in communication with a text generation modeling system 270. The text generation modeling system 270 includes a communication interface 272, a text generation API 274, and a text generation model 276. The text generation interface system 210 includes a communication interface 212, a database system 214, a testing module 220, and an orchestrator 230. The testing module 220 includes a query cache 222, a test repository 224, and a prompt testing utility 226. The orchestrator 230 includes skills 232 through 234, and prompt templates 236 through 238. The orchestrator also includes a chunker 240 and a scheduler 242. The orchestrator also includes API interfaces 250, which include a model interface 252, an external search interface 254, an internal search interface 256, and a chat interface 258.

According to various embodiments, a client machine may be any suitable computing device or system. For instance, a client machine may be a laptop computer, desktop computer, mobile computing device, or the like. Alternatively, or additionally, a client machine may be an interface through which multiple remote devices communicate with the text generation interface system 210.
According to various embodiments, a client machine may interact with the text generation interface system in any of various ways. For example, a client machine may access the text generation interface system via a text editor plugin, a dedicated application, a web browser, other types of interaction techniques, or combinations thereof.

According to various embodiments, the text generation modeling system 270 may be configured to receive, process, and respond to requests via the communication interface 272, which may be configured to facilitate communications via a network such as the internet.

In some embodiments, some or all of the communication with the text generation modeling system 270 may be conducted in accordance with the text generation API 274, which may provide remote access to the text generation model 276. The text generation API 274 may provide functionality such as defining standardized message formatting, enforcing maximum input and/or output size for the text generation model, and/or tracking usage of the text generation model.

According to various embodiments, the text generation model 276 may be a large language model. The text generation model 276 may be trained to predict successive words in a sentence. It may be capable of performing functions such as generating correspondence, summarizing text, and/or evaluating search results. The text generation model 276 may be pre-trained using many gigabytes of input text and may include billions or trillions of parameters.

In some embodiments, large language models impose a tradeoff. A large language model increases in power with the number of parameters and the amount of training data used to train the model. However, as the model parameters and input data increase in magnitude, the model's training cost, storage requirements, and required computing resources increase as well. Accordingly, the large language model may be implemented as a general-purpose model configured to generate arbitrary text. The text generation interface system 210 may serve as an interface between the client machines and the text generation modeling system 270 to support the use of the text generation modeling system 270 for performing complex, domain-specific tasks in fields such as law. That is, the text generation interface system 210 may be configured to perform one or more methods described herein.

According to various embodiments, the orchestrator 230 facilitates the implementation of one or more skills, such as the skills 232 through 234. A skill may act as a collection of interfaces, prompts, actions, data, and/or metadata that collectively provide a type of functionality to the client machine. For instance, a skill may involve receiving information from a client machine, transmitting one or more requests to the text generation modeling system 270, processing one or more responses received from the text generation modeling system 270, performing one or more searches, and the like. Skills are also referred to herein as text generation flows. Additional details regarding specific skills are provided with reference to FIGS. 8-10.

In some embodiments, a skill may be associated with one or more prompts. For instance, the skill 234 is associated with the prompt templates 236 and 238. A prompt template may include information such as instructions that may be provided to the text generation modeling system 270. A prompt template may also include one or more fillable portions that may be filled based on information determined by the orchestrator 230.
For instance, a prompt template may be filled based on information received from a client machine, information returned by a search query, or another information source. Additional details regarding prompt templates are provided with reference to FIGS. 8-10.

In some implementations, the chunker 240 is configured to divide text into smaller portions. Dividing text into smaller portions may be needed at least in part to comply with one or more size limitations associated with the text. For instance, the text generation API 274 may impose a maximum size limit on prompts provided to the text generation model 276. The chunker may be used to subdivide text included in a request from a client, retrieved from a document, returned in a search result, or received from any other source.

According to various embodiments, the API interfaces 250 include one or more APIs for interacting with internal and/or external services. The model interface 252 may expose one or more functions for communicating with the text generation modeling system 270. For example, the model interface 252 may provide access to functions such as transmitting requests to the text generation modeling system 270, receiving responses from the text generation modeling system 270, and the like.

In some embodiments, the external search interface 254 may be used to search one or more external data sources such as information repositories that are generalizable to multiple parties. For instance, the external search interface 254 may expose an interface for searching legal case law and secondary sources.

In some implementations, the internal search interface 256 may facilitate the searching of private documents. For instance, a client may upload or provide access to a set of private documents, which may then be indexed by the text generation interface system 210.

According to various embodiments, the chat interface 258 may facilitate text-based communication with the client machines. For instance, the chat interface 258 may support operations such as parsing chat messages, formulating responses to chat messages, identifying skills based on chat messages, and the like. In some configurations, the chat interface 258 may orchestrate text-based chat communication between a user at a client machine and the text generation model 276, for instance via web sockets.

In some embodiments, the query cache 222 may store queries such as testing queries sent to the text generation modeling system 270. Then, the query cache 222 may be instructed to return a predetermined result to a query that has already been sent to the text generation modeling system 270 rather than sending the same query again. In some embodiments, the prompt testing utility 226 is configured to perform operations such as testing prompts created based on prompt templates against tests stored in the test repository 224.

In some embodiments, the communication interface 212 is configured to facilitate communications with the client machines and/or the text generation modeling system 270 via a network such as the internet. The scheduler 242 may be responsible for scheduling one or more tasks performed by the text generation interface system 210. For instance, the scheduler may schedule requests for transmission to the text generation modeling system 270. In some embodiments, the database system 214 is configured to store information determined based on natural language.
For example, the database system 214 may be configured to store one or more database tables that include fields corresponding with information extracted from natural language documents. As another example, the database system 214 may be configured to store metadata information about documents based on information extracted from those documents. As yet another example, the database system 214 may be configured to store linkages between documents and document portions.

According to various embodiments, the database system 214 may be configured using any of a variety of suitable database technologies. For instance, the database system 214 may be configured as a relational database system, a non-relational database system, or any other type of database system capable of supporting the storage and querying of information described herein.

FIG. 3 illustrates a document parsing method 300, performed in accordance with one or more embodiments. According to various embodiments, the method 300 may be performed on any suitable computing system. For instance, the method 300 may be performed on the text generation interface system 230 shown in FIG. 2. The method 300 may be performed in order to convert a document into usable text while at the same time retaining metadata information about the text, such as the page, section, and/or document at which the text was located.

A request to parse a document is received at 302. In some embodiments, the request to parse a document may be generated when a document is identified for analysis. For example, as discussed herein, a document may be uploaded or identified by a client machine as part of communication with the text generation interface system 230. As another example, a document may be returned as part of a search result.

The document is converted to portable document format (PDF) or another suitable document format at 304. In some embodiments, the document need only be converted to PDF if the document is not already in the PDF format. Alternatively, PDF conversion may be performed even on PDFs to ensure that PDFs are properly formatted. PDF conversion may be performed, for instance, by a suitable Python library or the like. For example, PDF conversion may be performed with the Hyland library.

Multipage pages are split into individual pages at 306. In some implementations, multipage pages may be split into individual pages via a machine learning model. The machine learning model may be trained to group together portions of text on a multipage page. For instance, a caption page in a legal decision may include text in a column on the left that encompasses the parties, text in a column on the right that includes the case number, a title that follows lower on the page, and line numbering on the left. In such a configuration, the machine learning model may be trained to treat separately the text in the different columns, and to separate the text from the line numbers. The document title may be identified as a first page, with the left column identified as the second page and the right column identified as the third page.

Optical character recognition is performed on individual pages or on the document as a whole at 308. In some implementations, optical character recognition may be performed locally via a library. Alternatively, optical character recognition may be performed by an external service. For instance, documents or pages may be sent to a service such as Google Vision.
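The per-page OCR step can be sketched as follows. This is illustrative only: it assumes the pypdf package for splitting a PDF into single pages and a hypothetical ocr_page() helper that could wrap an external service such as Google Vision; neither choice is prescribed by this description.

```python
# Split a PDF into single pages and OCR each page, preserving page order.
import io
from concurrent.futures import ThreadPoolExecutor
from pypdf import PdfReader, PdfWriter

def ocr_document(pdf_bytes, ocr_page):
    reader = PdfReader(io.BytesIO(pdf_bytes))
    pages = []
    for page in reader.pages:
        writer = PdfWriter()
        writer.add_page(page)
        buffer = io.BytesIO()
        writer.write(buffer)
        pages.append(buffer.getvalue())
    # OCR each single-page document; pages can be processed in parallel, and
    # map() returns the results in the original page order.
    with ThreadPoolExecutor() as pool:
        texts = list(pool.map(ocr_page, pages))
    return texts  # one text string per page
```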
Performing optical character recognition on individual pages may provide for increased throughput via parallelization.

Individual pages are combined in order at 310. In some implementations, combining pages in order may be needed if optical character recognition was applied to individual pages rather than to the document as a whole.

Inappropriate text splits are identified and corrected at 312. In some embodiments, inappropriate text splits include instances where a paragraph, sentence, word, or other textual unit was split across different pages. Such instances may be identified by, for example, determining whether the first textual unit in a page represents a new paragraph, sentence, word, or other unit, or if instead it represents the continuation of a textual unit from the previous page. When such a split is identified, the continuation of the textual unit may be excised from the page on which it is located and moved to the end of the previous page. Such an operation may be performed by, for instance, the Poppler library available in Python.

Segmented JSON text is determined at 314. In some embodiments, the segmented JSON text may include the text returned by the optical character recognition performed at operation 308. In addition, the segmented JSON text may include additional information, such as one or more identifiers for the page, section, and/or document on which the text resides. The output of the segmented JSON may be further processed, for instance via the text sharding method 500 shown in FIG. 5 and/or the text chunking method 600 shown in FIG. 6.

FIG. 4 illustrates a text generation method 400, performed in accordance with one or more embodiments. According to various embodiments, the method 400 may be performed on any suitable computing system. For instance, the method 400 may be performed on the text generation interface system 230 shown in FIG. 2. The method 400 may be performed in order to identify and implement a text generation flow based on input text.

A request from a client machine to generate a novel text portion is received at 402. In some embodiments, the request may include a query portion. The query portion may include natural language text, one or more instructions in a query language, user input in some other format, or some combination thereof. For instance, the query portion may include an instruction to “write an email”, “summarize documents”, or “research case law”. In some embodiments, the request may include an input text portion. For example, the request may link to, upload, or otherwise identify documents. As another example, the request may characterize the task to be completed. For instance, the request may discuss the content of the desired email or other correspondence. The particular types of input text included in the request may depend in significant part on the type of request. Accordingly, many variations are possible.

A text generation flow is determined at 404. In some embodiments, the text generation flow may be explicitly indicated as part of the request received from the client machine. For instance, the client machine may select a particular text generation flow from a list. Alternatively, the text generation flow may be determined at least in part by analyzing the request received from the client machine. For example, the request may be analyzed to search for keywords or other indications that a particular text generation flow is desired.
As another example, all or a portion of the request may be provided to a machine learning model to predict the requested text generation flow. In some configurations, a predicted text generation flow may be provided to the client machine for confirmation before proceeding.

Input text is determined at 406. In some embodiments, the input text may be determined by applying one or more text processing, search, or other operations based on the request received from the client machine. For example, the input text may be determined at least in part by retrieving one or more documents identified in or included with the request received from the client machine. As another example, the input text may be determined at least in part by applying one or more natural language processing techniques such as cleaning or tokenizing raw text.

In some embodiments, determining input text may involve executing a search query. For example, a search of a database, set of documents, or other data source may be executed based at least in part on one or more search parameters determined based on a request received from a client machine. For instance, the request may identify one or more search terms and a set of documents to be searched using the one or more search terms.

In some embodiments, determining input text may involve processing responses received from a text generation modeling system. For instance, all or a portion of the results from an initial request to summarize a set of text portions may then be used to create a new set of more compressed input text, which may then be provided to the text generation modeling system for further summarization or other processing.

One or more prompt templates are determined at 408 based on the input text and the text generation flow. As discussed with respect to FIG. 2, different text generation flows may be associated with different prompt templates. Prompt templates may be selected from the prompt library based on the particular text generation flow. Additional details regarding the content of particular prompt templates are discussed with respect to the text generation flows illustrated in FIGS. 8-10.

At 410, one or more prompts based on the prompt templates are determined. In some embodiments, a prompt may be determined by supplementing and/or modifying a prompt template based on the input text. For instance, a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document. The prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. The added input text may identify information such as the correspondence recipient, source, topic, and discussion points.

The one or more prompts are transmitted to a text generation modeling system at 412. In some embodiments, the text generation modeling system may be implemented at a remote computing system. The text generation modeling system may be configured to implement a text generation model. The text generation modeling system may expose an application programming interface via a communication interface accessible via a network such as the internet.

One or more text response messages are received from the remote computing system at 414.
According to various embodiments, the one or more text response messages include one or more novel text portions generated by a text generation model implemented at the remote computing system. The novel text portions may be generated based at least in part on the prompt received at the text generation modeling system, including the instructions and the input text.

The one or more responses are parsed at 416 to produce a parsed response. In some embodiments, parsing the one or more responses may involve performing various types of processing operations. For example, in some systems a large language model may be configured to complete a prompt. Hence, a response message received from the large language model may include the instructions and/or the input text. Accordingly, the response message may be parsed to remove the instructions and/or the input text.

In some implementations, parsing the one or more responses may involve combining text from different responses. For instance, a document may be divided into a number of portions, each of which is summarized by the large language model. The resulting summaries may then be combined to produce an overall summary of the document.

A determination is made at 418 as to whether to provide a response to the client machine. In some embodiments, the determination made at 418 may depend on the process flow. For example, in some process flows, additional user input may be solicited by providing a response message determined based at least in part on one or more responses received from the text generation modeling system. As another example, in some process flows, a parsed response message may be used to produce an output message provided to the client machine.

If a response is to be provided to the client machine, then a client response message including a novel text passage is transmitted to the client machine at 420. In some embodiments, the client response message may be determined based in part on the text generation flow determined at 404 and in part based on the one or more text response messages received at 414 and parsed at 416. Additional details regarding the generation of a novel text passage are discussed with respect to the text generation flows illustrated in FIGS. 8-10.

A determination is made at 422 as to whether to generate an additional prompt. According to various embodiments, the determination as to whether to generate an additional prompt may be made based in part on the text generation flow determined at 404 and in part based on the one or more text response messages received at 414 and parsed at 416. As a simple example, a text generation flow may involve an initial set of prompts to summarize a set of portions, and then another round of interaction with the text generation modeling system to produce a more compressed summary. Additional details regarding the generation of a novel text passage are discussed with respect to the text generation flows illustrated in FIGS. 8-10.

According to various embodiments, the operations shown in FIG. 4 may be performed in an order different from that shown. Alternatively, or additionally, one or more operations may be omitted, and/or other operations may be performed. For example, a text generation flow may involve one or more search queries executed outside the context of the text generation modeling system. As another example, a text generation flow may involve one or more processes for editing, cleaning, or otherwise altering text in a manner not discussed with respect to FIG. 4. Various operations are possible.
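As an illustration of operations 408 through 412, the sketch below fills a prompt template with a chunk of input text and sends the resulting prompt to the text generation modeling system. Jinja2, which this description later mentions as a templating tool, is assumed for template filling; call_text_generation_api() is a hypothetical client function, and the template wording itself is invented for the example.

```python
from jinja2 import Template

# Invented example template with a fillable {{ passage }} portion.
SUMMARY_TEMPLATE = Template(
    "You are assisting with a legal summarization task.\n"
    "Summarize the following passage in three sentences:\n\n{{ passage }}"
)

def summarize_chunks(chunks, call_text_generation_api):
    responses = []
    for chunk in chunks:
        prompt = SUMMARY_TEMPLATE.render(passage=chunk)       # operation 410
        responses.append(call_text_generation_api(prompt))    # operation 412
    return responses  # parsed and possibly combined at 416
```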
FIG. 5 illustrates a method 500 of sharding text, performed in accordance with one or more embodiments. According to various embodiments, the method 500 may be performed on any suitable computing system. For instance, the method 500 may be performed on the text generation interface system 230 shown in FIG. 2. The method 500 may be performed in order to divide a body of text into potentially smaller units that fall beneath a designated size threshold, such as a size threshold imposed by an interface providing access to a large language model. For instance, a text generation modeling system implementing a large language model may specify a size threshold in terms of a number of tokens (e.g., words). As one example of such a threshold, a text generation modeling system may impose a limit of 8,193 tokens per query.

In particular embodiments, a size threshold may be adjusted based on considerations apart from a threshold imposed by an external text generation modeling system. For instance, a text generation interface system may formulate a prompt that includes input text as well as metadata such as one or more instructions for a large language model. In addition, the output of the large language model may be included in the threshold. If the external text generation modeling system imposes a threshold (e.g., 8,193 tokens), the text generation interface system 230 may need to impose a somewhat lower threshold when dividing input text in order to account for the metadata included in the prompt and/or the response provided by the large language model.

A request to divide text into one or more portions is received at 502. According to various embodiments, the request may be received as part of the implementation of one or more of the workflows shown herein, for instance in the methods shown in FIGS. 8-10. The request may identify a body of text. The body of text may include one or more documents, search queries, instruction sets, search results, and/or any other suitable text. In some configurations, a collection of text elements may be received. For instance, a search query and a set of documents returned by the search query may be included in the text.

In some implementations, text may be pre-divided into a number of different portions. Examples of divisions of text into portions may include, but are not limited to: lists of documents, documents, document sections, document pages, document paragraphs, and document sentences. Alternatively, or additionally, text may be divided into portions upon receipt at the text generation interface system 230. For instance, text may be divided into a set of portions via a text chunker, document parser, or other natural language processing tool.

A maximum text chunk size is identified at 504. In some embodiments, the maximum text chunk size may be identified based on one or more configuration parameters. In some configurations, the maximum text size may be imposed by the text generation interface system 230. Alternatively, or additionally, a size threshold may be imposed by an interface providing access to a large language model. As one example, a maximum text chunk size may be 100 kilobytes of text, 1 megabyte of text, 10 megabytes of text, or any other suitable chunk size.

A portion of the text is selected at 506. In some embodiments, as discussed herein, text may be pre-divided into text portions. Alternatively, or additionally, text may be divided into text portions as part of, or prior to, the operation of the method 500.
As still another possibility, text may not be divided into portions. In such a configuration, the initial portion of text that is selected may be the entirety of the text. Then, the identification of one or more updated text portions at 512 may result in the division of the text into one or more portions as part of the operation of the method 500.

A determination is made at 508 as to whether the length of the selected text portion exceeds the maximum text chunk size. In some embodiments, the determination may be made by computing a length associated with the selected text portion and then comparing it with the maximum text chunk size. The calculation of the length associated with the selected text portion may be performed in different ways, depending on how the maximum text chunk size is specified. For instance, the maximum text chunk size may be specified as a memory size (e.g., in kilobytes or megabytes), as a number of words, or in some other fashion.

If it is determined that the length of the selected text portion exceeds the maximum text chunk size, then at 510 one or more domain-specific text chunking constraints are identified. In some embodiments, domain-specific text chunking constraints may be identified based on one or more pre-determined configuration parameters. For example, one domain-specific text chunking constraint may discourage division of a question and answer in a deposition transcript or other question/answer context. As another example, a domain-specific text chunking constraint may discourage splitting of a contract clause. As yet another example, a domain-specific text chunking constraint may discourage splitting of a minority and majority opinion in a legal opinion.

An updated text portion that does not exceed the maximum text chunk size is identified at 512. In some embodiments, the updated text portion may be determined by applying a more granular division of the text portion into smaller portions. For example, a document may be divided into sections, pages, or paragraphs. As another example, a document page or section may be divided into paragraphs. As another example, a paragraph may be divided into sentences. As still another example, a sentence may be divided into words. In particular embodiments, the updated text portion may be the sequentially first portion of the selected text portion that falls below the maximum text chunk size threshold identified at operation 504.

The text portion is assigned to a text chunk at 514. In some embodiments, the text may be associated with a sequence of text chunks. The text portions selected at 506 and identified at 512 may be assigned to these text chunks, for instance in a sequential order. That is, text portions near to one another in the text itself may be assigned to the same text chunk where possible to reduce the number of divisions between semantically similar elements of the text. In particular embodiments, some attention may be paid to text divisions such as document, document section, paragraph, and/or sentence borders when assigning text portions to chunks. For instance, text portions belonging to the same document, document section, paragraph, and/or sentence may be grouped together when possible to ensure semantic continuity.

In particular embodiments, the method 500 may be performed in conjunction with the method 600 shown in FIG. 6. In such a configuration, operation 514 may be omitted.
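One way to picture the splitting at operation 512 is the recursive sketch below: a portion that exceeds the size limit is divided at progressively finer levels (paragraphs, then sentences, then words) until every piece fits. This is a simplification, with a word count standing in for the real size measure; the splitting rules are assumptions, and the domain-specific constraints of operation 510 are omitted for brevity.

```python
import re

# Progressively finer splitters: paragraphs, sentences, then words as a last resort.
SPLITTERS = [
    lambda t: t.split("\n\n"),
    lambda t: re.split(r"(?<=[.!?])\s+", t),
    lambda t: t.split(" "),
]

def shard(text, max_words, level=0):
    """Return a list of pieces, each at most max_words long where possible."""
    if len(text.split()) <= max_words or level >= len(SPLITTERS):
        return [text]
    pieces = []
    for part in SPLITTERS[level](text):
        pieces.extend(shard(part, max_words, level + 1))
    return pieces
```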
Alternatively, the assignment of text portions into text chunks in operation 514 may be treated as provisional, subject to subsequent adjustment via the method 600 shown in FIG. 6.

In some implementations, the identification of an updated text portion may result in the creation of two or more new text portions as a consequence of the division. In this case, the updated text portion may be assigned to a text chunk at 514, while the remainder portion or portions may be reserved for later selection at 506. Alternatively, or additionally, if two or more of the text portions resulting from the division at 512 each fall below the maximum text chunk size, then each of these may be assigned to a text chunk or chunks at operation 514.

A determination is made at 516 as to whether to select an additional portion of the text. According to various embodiments, additional portions of the text may continue to be selected as long as additional portions are available, or until some other triggering condition is met. For example, the system may impose a maximum amount of text for a particular interaction. As another example, the amount of text may exceed a designated threshold, such as a cost threshold.

FIG. 6 illustrates a text chunk determination method 600, performed in accordance with one or more embodiments. According to various embodiments, the method 600 may be performed on any suitable computing system. For instance, the method 600 may be performed on the text generation interface system 230 shown in FIG. 2. The method 600 may be performed in order to assign a set of text portions into text chunks.

In some embodiments, the method 600 may be used to compress text portions into text chunks of smaller size. For instance, the method 600 may receive as an input a set of text portions divided into text chunks of highly variable sizes, and then produce as an output a division of the same text portions into the same number of text chunks, but with the maximum text chunk size being lower due to more even distribution of text portions across text chunks.

A request is received at 602 to divide a set of text portions into one or more chunks. In some embodiments, the request may be automatically generated, for instance upon completion of the method 500 shown in FIG. 5. The request may identify, for instance, a set of text portions to divide into text chunks.

An initial maximum text chunk size is identified at 604. In some embodiments, the initial maximum text chunk size may be identified in a manner similar to that for operation 504 shown in FIG. 5.

A text portion is selected for processing at 606. In some embodiments, text portions may be selected sequentially. Sequential or nearly sequential ordering may ensure that semantically contiguous or similar text portions are often included within the same text chunk.

A determination is made at 608 as to whether the text portion fits into the latest text chunk. In some embodiments, text portions may be processed via the method 500 shown in FIG. 5 to ensure that each text portion is smaller than the maximum chunk size. However, a text chunk may already include one or more text portions added to the text chunk in a previous iteration. In the event that the text portion fits into the last text chunk, the text portion is inserted into the last text chunk at 610. If instead the text portion is the first to be processed, or the text portion does not fit into the last text chunk, then the text portion is inserted into a new text chunk at 612.
The new chunk may be created with a maximum size in accordance with the maximum text chunk size, which may be the initial maximum text chunk size upon the first iteration or the reduced maximum text chunk size upon subsequent iterations.

A determination is made at 614 as to whether to select an additional text portion for processing. In some embodiments, additional text portions may be selected until all text portions have been added to a respective text chunk.

A determination is made at 616 as to whether the number of text chunks has increased relative to the previous maximum text chunk size. If the number of text chunks increases, then a reduced maximum text chunk size is determined at 618, and the text portions are again assigned into chunks in operations 606 through 614.

According to various embodiments, for the first iteration, the number of chunks will not have increased because there was no previous assignment of text portions into text chunks. However, for the second and subsequent iterations, reducing the maximum text chunk size at 618 may cause the number of text chunks needed to hold the text portions to increase because the reduced maximum text chunk size may cause a text portion to no longer fit in a chunk and instead to spill over to the next chunk.

In some embodiments, the first increase of the number of text chunks may cause the termination of the method at operation 620. Alternatively, a different terminating criterion may be used. For instance, an increase in the number of text chunks may be compared with the reduction in text chunk size to produce a ratio, and additional reductions in text chunk size may continue to be imposed so long as the ratio falls below a designated threshold.

In some embodiments, the reduced text chunk size may be determined at 618 in any of various ways. For example, the text chunk size may be reduced by a designated amount (e.g., 10 words, 5 kilobytes, etc.). As another example, the text chunk size may be reduced by a designated percentage (e.g., 1%, 5%, etc.).

When it is determined that the number of text chunks has unacceptably increased, then at 620 the previous maximum text chunk size and assignment of text portions into chunks is returned. In this way, the number of text chunks may be limited while at the same time dividing text portions more equally into text chunks. The number of text chunks may be strictly capped at the input value, or may be allowed to increase to some degree if a sufficiently improved division of text portions into text chunks is achieved.

FIG. 7 illustrates one example of a computing device 700, configured in accordance with one or more embodiments. According to various embodiments, a system 700 suitable for implementing embodiments described herein includes a processor 701, a memory module 703, a storage device 705, an interface 711, and a bus 715 (e.g., a PCI bus or other interconnection fabric). System 700 may operate as a variety of devices such as an application server, a database server, or any other device or service described herein. Although a particular configuration is described, a variety of alternative configurations are possible. The processor 701 may perform operations such as those described herein. Instructions for performing such operations may be embodied in the memory 703, on one or more non-transitory computer readable media, or on some other storage device. Various specially configured devices can also be used in place of or in addition to the processor 701. The interface 711 may be configured to send and receive data packets over a network.
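Stepping back to the chunk determination method 600 of FIG. 6, its iterative packing can be pictured with the sketch below: portions are packed greedily into chunks, the maximum chunk size is repeatedly reduced, and the last assignment that did not increase the number of chunks is kept. Word counts and the five percent reduction step are assumed stand-ins for the actual size measure and configuration, and the portions are assumed to have already been sharded below the initial limit as described with respect to FIG. 5.

```python
def pack(portions, max_words):
    """Greedy assignment of portions into chunks under a word budget (606-614)."""
    chunks, current, length = [], [], 0
    for portion in portions:
        size = len(portion.split())
        if current and length + size > max_words:
            chunks.append(current)
            current, length = [], 0
        current.append(portion)
        length += size
    if current:
        chunks.append(current)
    return chunks

def balance_chunks(portions, initial_max_words, step=0.05):
    """Shrink the chunk size until the chunk count would grow (616-620)."""
    best = pack(portions, initial_max_words)
    max_words = initial_max_words
    while True:
        max_words = int(max_words * (1 - step))  # reduce by a designated percentage (618)
        candidate = pack(portions, max_words)
        if max_words <= 0 or len(candidate) > len(best):
            return best  # return the previous assignment (operation 620)
        best = candidate
```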
Examples of supported interfaces include, but are not limited to: Ethernet, fast Ethernet, Gigabit Ethernet, frame relay, cable, digital subscriber line (DSL), token ring, Asynchronous Transfer Mode (ATM), High-Speed Serial Interface (HSSI), and Fiber Distributed Data Interface (FDDI). These interfaces may include ports appropriate for communication with the appropriate media. They may also include an independent processor and/or volatile RAM. A computer system or computing device may include or communicate with a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.

FIG. 8 illustrates an example of a method 800 for conducting a chat session, performed in accordance with one or more embodiments. The method 800 may be performed at the text generation system 200 in order to provide one or more responses to one or more chat messages provided by a client machine. For instance, the method 800 may be performed at the text generation interface system 210 to provide novel text to the client machine 202 based on interactions with the text generation modeling system 270.

User input is received at 802. In some embodiments, the user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input is used to generate a chat input message 804, which is sent to the text generation interface system 210. In some implementations, the chat input message 804 may be received by the text generation interface system 210 via a web socket.

At 806, the text generation interface system 210 determines a chat prompt 808 based on the chat input message 804. The chat prompt 808 may include one or more instructions for implementation by the text generation modeling system 270. Additionally, the chat prompt 808 includes a chat message 810 determined based on the chat input message 804. In some implementations, determining the chat prompt 808 may involve processing the chat input message 804. In some embodiments, as discussed with respect to the methods 500 and 600 shown in FIG. 5 and FIG. 6, the chat input message 804 may be processed via text sharding and/or chunking to divide the text into manageable portions. Portions may then be included in the same or separate chat prompts depending on chunk size. For instance, text may be inserted into a template via a tool such as Jinja2.

The chat prompt 808 is then sent to the text generation modeling system 270 via a chat prompt message 812. The text generation modeling system 270 generates a raw chat response at 814, which is then sent back to the text generation interface system 210 via a chat response message at 816.

The chat response message is parsed at 818 to produce a parsed chat response at 820. In some embodiments, the chat response message received at 816 may include ancillary information such as all or a portion of the chat prompt message sent at 812. Accordingly, parsing the chat response message may involve performing operations such as separating the newly generated chat response from the ancillary information included in the chat response message. For example, the response generated by the model may include information such as the name of a chat bot, which may be removed during parsing by techniques such as pattern matching. The parsed chat response 820 is provided to the client machine via the chat output message at 822. The parsed chat response message is then presented via user output at 824.
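A single chat round trip (operations 806 through 820) might look like the sketch below. It assumes a Jinja2 chat prompt template and a hypothetical send_chat_prompt() client for the text generation modeling system; the template wording and the <Assistant> marker are invented placeholders, and real parsing would be tailored to the actual response format.

```python
from jinja2 import Template

# Invented placeholder template; a real system would use its own prompt wording.
CHAT_TEMPLATE = Template(
    "{{ instructions }}\n\n"
    "{% for m in messages %}<User>: {{ m }}\n{% endfor %}"
    "<Assistant>:"
)

def chat_turn(instructions, messages, send_chat_prompt):
    prompt = CHAT_TEMPLATE.render(instructions=instructions, messages=messages)  # 806-810
    raw = send_chat_prompt(prompt)          # chat prompt message 812 / chat response 816
    # Parsing (818-820): strip any echoed prompt text and any bot-name prefix.
    parsed = raw.replace(prompt, "").strip()
    if parsed.startswith("<Assistant>:"):
        parsed = parsed[len("<Assistant>:"):].lstrip()
    return parsed                           # parsed chat response 820
```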
According to various embodiments, the user output may be presented via a chat interface, via a file, or in some other suitable format.

In some implementations, the chat interaction may continue with successive iterations of the operations and elements shown at 802-824 in FIG. 8. In order to maintain semantic and logical continuity, all or a portion of previous interactions may be included in successive chat prompts sent to the text generation modeling system 270. For instance, at the next iteration, the chat prompt message sent to the text generation modeling system may include all or a portion of the initial user input, the parsed chat message determined based on the response generated by the text generation modeling system 270, and/or all or a portion of subsequent user input generated by the client machine in response to receiving the parsed chat message.

In some embodiments, the text generation modeling system 270 may be configured such that the entire state of the text generation model needs to fit in a prompt smaller than a designated threshold. In such a configuration, when the chat history grows too long to include the entire history in a single prompt, then the most recent history may be included in subsequent chat prompts.

According to various embodiments, the method 800 may be performed in such a way as to facilitate more complex text analysis tasks. Examples of such complex text analysis tasks may include, but are not limited to, identifying recommended skills, generating correspondence, and revising correspondence. These tasks are discussed in more detail below.

In some embodiments, determining the chat prompt at 806 may involve selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills. The text generation modeling system 270 may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at 818 may involve searching the chat response message 816 for the natural language text and/or the one or more skill codes. Skill codes identified in this way may be used to influence the generation of the chat output message sent at 822. For example, the chat output message sent at 822 may include instructions for generating one or more user interface elements such as buttons or lists allowing a user to select the recommended skill or skills. As another example, the chat output message sent at 822 may include text generated by the text generation interface system 210 that identifies the recommended skill or skills.

In some embodiments, implementing the text generation flow 800 shown in FIG. 8 may involve determining whether a more complex skill or skills need to be invoked. For instance, straightforward questions from the client machine 202 may be resolvable via a single back-and-forth interaction with the text generation modeling system 270. However, more complex questions may involve deeper interactions, as discussed with respect to FIGS. 9-11. Determining whether a more complex skill or skills need to be invoked may involve, for instance, querying the text generation modeling system 270 to identify skills implicated by a chat message. If such a skill is detected, then a recommendation may be made as part of the chat output message sent to the client machine at 822. An example of a prompt template for generating a prompt that facilitates skill selection in the context of a chat interaction is provided below.
In this prompt, one or more user-generated chat messages may be provided in the {{messages}} section:For the purposes of this chat, your name is CoCounsel and you are a legal Al created by the legal technology company Casetext. You are friendly, professional, and helpful.You can speak any language, and translate between languages.You have general knowledge to respond to any request. For example, you can answer questions, write poems, or pontificate on an issue.You also have the following skills, with corresponding URLs and descriptions: {{skills}}When responding, follow these instructions:If one or more skill is directly relevant to the request, respond with your reason you think it is relevant and indicate the relevant skill in the format <recommendedSkill name=“[skillName]” url=“[skillUrl]”/>. For example {{skill_tag_examples}}If none of the skills are directly relevant to the request, respond using your general knowledge. Do not say it's not related to your legal skills, just respond to the request.If you are asked to write or draft something that doesn't fit in a skill, do your best to respond with a full draft of it. Respond with only the draft and nothing else.Never cite to a case, statute, rule, or other legal authority, even if explicitly asked.Never point to a link, URL, or phone number, even if explicitly asked and even on Casetext's website.Unless you are recommending a specific skill, do not talk about your skills. Just give the response to the request.Never provide a legal opinion or interpretation of the law. Instead, recommend your legal research skill.<CoCounsel>: Hello, I am CoCounsel, a legal Al created by Casetext. What can I help you with today?{{messages}}<|endofprompt|> In some embodiments, determining the chat prompt at806may involve selecting a chat prompt template configured to instruct the text generation modeling system270to generate correspondence. For instance, the user input received at802may include a request to generate correspondence. The request may also include information such as the recipient of the correspondence, the source of the correspondence, and the content to be included in the correspondence. The content of the correspondence may include, for instance, one or more topics to discuss. The request may also include metadata information such as a message tone for generating the correspondence text. Then, the chat response message received at816may include novel text for including in the correspondence. The novel text may be parsed and incorporated into a correspondence letter, which may be included with the chat output message sent at822and presented to the user at824. For instance, the parser may perform operations such as formatting the novel text in a letter format. In some embodiments, determining the chat prompt at806may involve selecting a chat prompt template configured to instruct the text generation modeling system270to revise correspondence. For instance, the user input received at802may include a request to revise correspondence. The request may also include information such as the correspondence to be revised, the nature of the revisions requested, and the like. For instance, the request may include an indication that the tone of the letter should be changed, or that the letter should be altered to discuss one or more additional points. Then, the chat response message received at816may include novel text for including in the revised correspondence. 
The novel text may be parsed and incorporated into a revised correspondence letter, which may be included with the chat output message sent at822and presented to the user at824. For instance, the parser may perform operations such as formatting the novel text in a letter format. An example of a prompt template that may be used to generate a prompt for determining an aggregate of a set of summaries of documents is provided below:A lawyer has submitted the following question:$$QUESTION$${{question}}$$/QUESTION$$We have already reviewed source documents and extracted references that may help answer the question. We have also grouped the references and provided a summary of each group as a “response”:$$RESPONSES$${% for response in model_responses %}{{loop.index}}. {{response}}{% endfor %}$$/RESPONSES$$We want to know what overall answer the responses provide to the question.We think that some references are more relevant than others, so we have assigned them relevancy scores of 1 to 5, with 1 being least relevant and 5 being most relevant. However, it's possible that some references may have been taken out of context. If a reference is missing context needed to determine whether it truly supports the response, subtract 1 point from its relevancy score.Then, rank each response from most-reliable to least-reliable, based on the adjusted relevancy scores and how well the references support the response.Draft a concise answer to the question based only on the references and responses provided, prioritizing responses that you determined to be more reliable.If the most-reliable response completely answers the question, use its verbatim text as your answer and don't mention any other responses.Answer only the question asked and do not include any extraneous information.Don't let the lawyer know that we are using responses, references, or relevancy scores; instead, phrase the answer as if it is based on your own personal knowledge.Assume that all the information provided is true, even if you know otherwise.If none of the responses seem relevant to the question, just say “The documents provided do not fully answer this question; however, the following results may be relevant.” and nothing else.<|endofprompt|>Here's the answer and nothing else: FIG.9illustrates an example of a method900for summarizing one or more documents, performed in accordance with one or more embodiments. The method900may be performed at the text generation system200in order to summarize one or more documents provided or identified by a client machine. In some configurations, the method900may be performed to summarize one or more documents returned by a search query. One or more documents are received at902. In some embodiments, a document may be uploaded by the client machine. Alternatively, a document may be identified by the client machine, for instance via a link. As still another possibility, a document may be returned in a search result responsive to a query provided by a client machine. A single summary request may include documents identified and provided in various ways. In some embodiments, the one or more documents may be received along with user input. The user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input may be used to generate a summary input message904, which is sent to the text generation interface system210.
In some implementations, the summary input message904may be received by the text generation interface system210via a web socket. Alternatively, a different form of communication may be used, for instance an asynchronous mode of communication. At906, the text generation interface system210determines one or more summarize prompt908based on the summary request message904. In some embodiments, the determination of the summarize prompt may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods500and600shown inFIG.5andFIG.6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text. Then, each chunk may be used to create a respective summarize prompt for summarizing the text in the chunk. For instance, text may be inserted into a template via a tool such as Jinja2. The one or more summarize prompts908may include one or more instructions for implementation by the text generation modeling system270. Additionally, the one or more summarize prompts each includes a respective text chunk910determined based on the summary request message904. The one or more summarize prompts908are then sent to the text generation modeling system270via one or more summarize prompt messages912. The text generation modeling system270generates one or more raw summaries at914, which are then sent back to the text generation interface system210via one or more summarize response messages at916. The one or more summarize response messages are parsed at918to produce one or more parsed summary responses at920. In some embodiments, the one or more summary response messages received at916may include ancillary information such as all or a portion of the summarize prompt messages sent at912. Accordingly, parsing the summarize response messages may involve performing operations such as separating the newly generated summaries from the ancillary information included in the one or more summarize response messages. An example of a prompt template used to instruct a text generation system to summarize a text is shown below:You are a highly sophisticated legal Al. A lawyer has submitted questions that need answers.Below is a portion of a longer document that may be responsive to the questions:$$DOCUMENT$${%-for page in page_list-%}$$PAGE {{pager [page] }}$${{page[“text”]}}$5/PAGE$$1%-endfor-%1$5/DOCUMENT$$We would like you to perform two tasks that will help the lawyer answer the questions. Each task should be performed completely independently, so that the lawyer can compare the results.Extractive taskThe purpose of this task is not to answer the questions, but to find any passages in the document that will help the lawyer answer them. For each question, perform the following steps:1. Extract verbatim as many passages from the document (sentences, sentence fragments, or phrases) as possible that could be useful in answering the question. There is no limit on the number of passages you can extract, so more is better. Don't worry if the passages are repetitive; we need every single one you can find.If the question asks for a list of things or the number of times something occurred, include a passage for every instance that appears in the document2. 
If you extracted any passages, assign each one a score from 1 to 5, representing how the passage relates to the question:(complete answer)4 (one piece of a multipart answer)3 (relevant definition or fact)2 (useful context)1 (marginally related)Abstractive taskThe purpose of this task is to compose an answer to each question. Follow these instructions:Base the answer only on the information contained in the document, and no extraneous information. If a direct answer cannot be derived explicitly from the document, do not answer.Answer completely, fully, and precisely.Interpret each question as asking to provide a comprehensive list of every item instead of only a few examples or notable instances. Never summarize or omit information from the document unless the question explicitly asks for that.Answer based on the full text, not just a portion of it.For each and every question, include verbatim quotes from the text (in quotation marks) in the answer. If the quote is altered in any way from the original text, use ellipsis, brackets, or [sic] for minor typos.Be exact in your answer. Check every letter.There is no limit on the length of your answer, and more is betterCompose a full answer to each question; even if the answer is also contained in a response to another question, still include it in each answerHere are the questions:$$QUESTIONS$${{question_str}}$$/QUESTIONS$$Return your responses as a well-formed JSON array of objects, with each object having keys of:‘id’ (string) The three-digit ID associated with the Question‘passages’ (array) a JSON array of the verbatim passages you extracted, or else anempty array. Format each item as a JSON object with keys of:‘passage’ (string)‘score’ (int) the relevancy score you assigned the passage‘page’ (int) the number assigned to the page in which the snippet appears‘answer’ (string) the answer you drafted, or else “N/A”Escape any internal quotation marks or newlines using \” or \n[{“id”: <id>, “passages”: [{“passage”: <passage>, “score”: <score>, “page”: <page>}, . . . ] |[ ], “answer”: <text>|“N/A”), . . . ]Only valid JSON; check to make sure it parses, and that quotes within quotes are escaped or turned to single quotes, and don't forget the ‘,’ delimiters.<|endofprompt|>Here is the JSON array and nothing else: According to various embodiments, the one or more parsed summary responses920may be processed in any of various ways. In some embodiments, the one or more parsed summary response messages920may be concatenated into a summary and provided to the client machine via a summary message922. The summary may then be presented as output on the client machine at924. Presenting the summary as output may involve, for instance, presenting the summary in a user interface, outputting the summary via a chat interface, and/or storing the summary in a file. In some embodiments, the one or more parsed summary responses920may be used as input to generate a consolidated summary. For example, a consolidated summary may be generated if the aggregate size of the parsed summary responses920exceeds or falls below a designated threshold. As another example, a consolidated summary may be generated if the client machine provides an instruction to generated a consolidated summary, for instance after receiving the summary message at922. In some embodiments, generating a consolidated summary may involve determining a consolidation prompt at926. 
The consolidation prompt may be determined by concatenating the parsed summary responses at920and including the concatenation result in a consolidation prompt template. In the event that the concatenated parsed summary responses are too long for a single chunk, then more than one consolidation prompt may be generated, for instance by dividing the parsed summary responses920across different consolidation prompts. In some implementations, one or more consolidation prompt messages including the one or more consolidation prompts are sent to the text generation modeling system270at928. The text generation modeling system270then generates a raw consolidation of the parsed summary responses920and provides the novel text generated as a result via one or more consolidation response messages sent at932. According to various embodiments, the one or more consolidation response messages are parsed at934. For instance, if the one or more consolidation response messages include two or more consolidation response messages, each of the different messages may be separately parsed, and the parsed results concatenated to produce a consolidated summary. The consolidated summary is provided to the client machine at936via a consolidation message. The client machine may then present the consolidated summary as consolidation output at938. In the event that further consolidation is required, operations926-934may be repeated. FIG.10illustrates an example of a method1000for generating a timeline, performed in accordance with one or more embodiments. The method1000may be performed at the text generation system200in order to generate an event timeline based on one or more documents provided or identified by a client machine. In some configurations, the method1000may be performed to generate a timeline based on one or more documents returned by a search query. One or more documents are received at1002. In some embodiments, a document may be uploaded by the client machine. Alternatively, a document may be identified by the client machine, for instance via a link. As still another possibility, a document may be returned in a search result responsive to a query provided by a client machine. A single timeline generation request may include documents identified and provided in various ways. In some embodiments, the one or more documents may be received along with user input. The user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input may be used to generate a timeline generation request message1004, which is sent to the text generation interface system210. In some implementations, the timeline generation request message1004may be received by the text generation interface system210via a web socket. Alternatively, a different form of communication may be used, for instance an asynchronous mode of communication. At1006, the text generation interface system210determines one or more timeline generation prompts1008based on the timeline generation request message1004. In some embodiments, the determination of the one or more timeline prompts may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods500and600shown inFIG.5andFIG.6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text.
Then, each chunk may be used to create a respective summarize prompt for summarizing the text in the chunk. For instance, text may be inserted into a template via a tool such as Jinja2. The one or more timeline generation prompts1008may include one or more instructions for implementation by the text generation modeling system270. Additionally, the one or more timeline generation prompts each includes a respective text chunk1010determined based on the timeline generation request message received at1004. The one or more timeline generation prompts1008are then sent to the text generation modeling system270via one or more timeline generation prompt messages1012. The text generation modeling system270generates one or more input timelines at1014, which are then sent back to the text generation interface system210via one or more timeline generation response messages at1016. An example of a prompt template for generating a prompt for generating a timeline is provided below:You are a world-class robot associate reviewing the following text. It may be an excerpt from a larger document, an entire document, or encompass multiple documents.$$TEXT$${% for page in page_list %}$$PAGE {{page[“page”] }}$${{page[“text”] }}$$/PAGE$${% endfor %}$$/TEXT$$Create a list of all events for your managing partner based on what is described in the text.Draw only from events mentioned in the text; nothing extraneous.Events include occurrences that are seemingly insignificant to the matter at hand in the document, as well as mundane/pedestrian occurrences. Make sure to include ALL events, leaving nothing out (with a few exceptions listed below).If the text is a transcript, do not include events that took place during the creation of the transcript itself (like the witness being asked a question or actions by a court reporter); rather, include all the events described therein. Also include a single event for the occurrence during which the transcript is being taken.Do not include events associated with legal authorities if they are part of a legal citation.Legal arguments or contentions, e.g. interpretations of case law, are not events, although they may make reference to real events that you should include.Make sure to include events of legal significance even if they did not necessarily come to pass, such as when something is in effect, potential expirations, statutes of limitations, etc.Assume that when there is a date associated with a document, that document's creation/execution/delivery/etc. should be considered an event in and of itself.For each event you identify, determine how notable it is on a scale from 0 to 9, with 0 being utterly mundane to the extent that it is almost unworthy of mention and 9 being an essential fact without which the text is meaningless.In case it is relevant to your analysis, today's date is {{requested_date}}. Do not consider this one of the events to list.Answer in a JSONL list, with each event as its own JSONL object possessing the following keys:‘description’ (string): a fulsome description of the event using language from the text where possible. Use past tense.‘page’ (int): page in which the fact is described. If it is described in multiple pages, simply use the first occurrence‘notability’ (int): 0 to 9 assessment of the facts' notability‘year’ (int): year of the event‘month’ (int or null): If discernible‘day’ (int or null): If discernible‘hour’ Optional(int): If discernible, otherwise do not include. 
Use military (24 hour) time‘minute’ Optional(int): If discernible, otherwise do not include‘second’ Optional(int): If discernible, otherwise do not includeIn creating this JSONL list, make sure to do the following:If there are no events in the text, respond with a single JSONL object with a key of ‘empty’ and value of True.Note that some events may be expressed relatively to each other (e.g., “one day later” or “15 years after the accident”); in those circumstances, estimate the date based on the information provide and make a brief note in the description field.Keys that are marked as optional (hour, minute, second) should not be included in the event objects if that detail is not present in the text.Keys that are marked as ($type$ or null) should ALWAYS be present in the list, even when the value is null.If there is an event that took place over a period of time, include one event in the list for the start and one event for the end, noting as much in the descriptionIf there is no datetime information associated with an event, do not include it in your list.Your answer must be thorough and complete, capturing every item of the types described above that appears in the text.Return a JSON Lines (newline-delimited JSON) list of the events.<|endofprompt|>Here's the JSONLines list of events: In some implementations, an input timeline may be specified in a structured format included in the text generation generated by the text generation modeling system270. For instance, the input timeline may be provided in a JSON format. The one or more timeline generation response messages are parsed at1018to produce one or more parsed timelines events at1020. In some embodiments, the one or more timeline response messages received at1016may include ancillary information such as all or a portion of the timeline generation prompt messages sent at1012. Accordingly, parsing the timeline generation response messages may involve performing operations such as separating the newly generated timelines from the ancillary information included in the one or more timeline response messages. One or more deduplication prompts are created at1022. In some embodiments, a deduplication prompt may be created by inserting events from the parsed timelines at1020into the deduplication prompt, for instance via a tool such as Jinja2. Each timeline event may be specified as, for instance, a JSON portion. The deduplication prompt may include an instruction to the text generation modeling system to deduplicate the events. In some embodiments, in the event that the number of events is sufficiently large that the size of the deduplication prompt would exceed a maximum threshold, then the events may be divided across more than one deduplication prompt. In such a situation, the events may be ordered and/or group temporally to facilitate improved deduplication. In some embodiments, the one or more deduplication prompts are sent to the text generation modeling system270via one or more deduplication prompt messages1024. The text generation modeling system270generates a set of consolidated events at1026and provides a response message that includes the consolidated events at1028. 
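As one possible illustration of the deduplication prompt creation at1022, the parsed timeline events may be ordered temporally and divided across prompts that respect a maximum prompt size, as in the following Python sketch. The event fields, the size limit, and the batching strategy shown here are illustrative assumptions rather than requirements of the embodiments.

import json

MAX_PROMPT_CHARS = 12000  # illustrative limit on the size of a single deduplication prompt

def order_events_temporally(events: list[dict]) -> list[dict]:
    # Events parsed at 1020 carry year/month/day fields; missing fields sort first.
    return sorted(
        events,
        key=lambda e: (e.get("year") or 0, e.get("month") or 0, e.get("day") or 0),
    )

def batch_events_for_deduplication(events: list[dict]) -> list[list[dict]]:
    # When a single prompt would exceed the maximum threshold, the events are
    # divided across more than one deduplication prompt, keeping temporally
    # adjacent events together to facilitate improved deduplication.
    batches: list[list[dict]] = []
    current: list[dict] = []
    current_size = 0
    for event in order_events_temporally(events):
        event_json = json.dumps(event)
        if current and current_size + len(event_json) > MAX_PROMPT_CHARS:
            batches.append(current)
            current, current_size = [], 0
        current.append(event)
        current_size += len(event_json)
    if current:
        batches.append(current)
    return batches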
An example of a deduplication prompt template that may be used to generate a deduplication prompt is provided below:Below are one or more lists of timeline events, with each event formatted as a JSON object:$$EVENT_LISTS$${% for list in event_lists %}$$LIST$${% for item in list %}{{item}}{% endfor %}$$LIST$${% endfor %}$$EVENT_LISTS$$We think that each list may contain some duplicate events, but we may be wrong. Your task is to identify and consolidate any duplicate events. To do this, please perform the following steps for each list:1. Identify any events in the list that are duplicative.For our purposes, events are duplicative if their ‘description’ keys appear to describe the same factual occurrence, even if they have different ‘datetime’ keys. For example, one event may say “Bob died” while another may say “the death of Bob.” Those should be considered duplicate events.Events are not duplicative just because they occurred on the same day. They must also describe the same occurrence to be considered duplicative.2. If there are duplicates, keep the event with the most complete description and discard the other duplicates3. If you discarded any events in step 2, append the items in their ‘references’ arrays to the ‘references’ array of the event you chose to keep. Retain the notability score from the event you chose to keep.4. Re-evaluate the entire list and discard any items from the list that are not valid events, which includes the following:Legal arguments and contentions, such as allegations that a statute was violated are not valid events.Actions that took place during a hearing or deposition such as a witness being asked a question or shown a document are not valid events.The fact that someone testified is not a valid event.The fact that someone or something was mentioned in the text is not a valid event. For example, “the document mentioned the defense for the first time” is not a valid event.The occurrence of a date or time reference in the text by itself, or where the event that occurred on that date is unknown is not a valid event. For example, “the mention of October as a month in which something occurred” is not a valid event. “The occurrence of the year 1986” is also not a valid event. “An event occurred at 7:00” is also not a valid event.Mentions of exhibits are not valid events.Respond with a well-formed JSON Lines (newline-delimited JSON) list with one object for each event from the lists provided that is not a duplicate, along with any events that you chose to keep in step 2.Aside from any changes you made in step 3, keep all the original keys and values for each event you return. For reference, each event should be in the following format:{‘id’ (string): <id>, ‘description’ (string): <description>, ‘datetime’ (string): <datetime>, ‘references’ (array): [{‘document_id’ (string): <document_id>, ‘page’ (int): <page> . . . ]}<|endofprompt|>Here's the JSON Lines list and nothing else: The one or more consolidation response messages are parsed at1030to generate a consolidated timeline. Parsing the one or more consolidation response messages may involve, for instance, separating JSON from ancillary elements of the one or more consolidation response messages, joining events from two or more consolidation response messages into a single consolidated timeline, and the like. The consolidated timeline is transmitted to the client machine via a consolidation message at1032, and presented at the client machine at1034. 
Presenting the consolidated timeline may involve, for instance, displaying the timeline in a user interface, including the timeline in a chat message, and/or storing the timeline in a file. FIG.11illustrates a flow diagram1100for generating correspondence, configured in accordance with one or more embodiments. The flow diagram1100provides an example of how techniques and mechanisms described herein may be combined to generate novel text in a manner far more sophisticated than simple back-and-forth interactions with text generation modeling systems. The operations shown in the flow diagram1100may be performed at a text generation interface system, such a the system210shown inFIG.2. A request is received at1102. In some embodiments, the request may be received as part of a chat flow. Alternatively, the request may be received as part of a correspondence generation flow. The request may, for instance, include a natural language instruction to generate a correspondence letter pertaining to a particular topic on behalf of a particular party. At1104, the text generation interface system identifies a skill associated with the request by transmitting a prompt to the text generation modeling system. The text generation modeling system returns a response identifying correspondence generation as the appropriate skill. Additional details regarding skill identification are discussed with respect toFIG.8. At1106, the text generation interface system identifies one or more search terms associated with the correspondence by transmitting a prompt to the text generation modeling system. The text generation modeling system may complete the prompt by identifying, for example, relevant keywords from within the request received at1102. At1108, one or more search queries are executed to determine search results. In some embodiments, one or more search queries may be executed against an external database such as a repository of case law, secondary sources, statutes, and the like. Alternatively, or additionally, one or more search queries may be executed against an internal database such as a repository of documents associated with the party generating the request at1102. At1110-1114, the text generation interface system summarizes the search results and then summarizes the resulting search summaries. According to various embodiments, such operations may be performed by retrieving one or more documents, dividing the one or more documents into chunks, and then transmitting the chunks in one or more requests to the text generation modeling system. Additional details regarding document summarization are discussed throughout the application, for instance with respect toFIG.9. At1116, based at least in part on the search summary, the text generation interface system determines a number of separate correspondence portions to generate. The correspondence portions are then generated at1118and1120and combined into a single correspondence at1122. According to various embodiments, such operations may be performed by transmitting appropriate prompts to the text generation modeling system, and then parsing the corresponding responses. Additional details regarding determining correspondence and combining results are discussed throughout the application, for instance with respect toFIGS.8and9. At1124, one or more factual claims in the generated correspondence are identified. 
According to various embodiments, factual claims may include, for instance, citations to legal case law, statutes, or other domain-specific source documents. Factual claims may also include claims based on other accessible information sources such as privately held documents, information publicly available on the internet, and the like. In some embodiments, the identification of a factual claim may be associated with a respective set of search terms. The search terms may be used to search for evidence for or against the factual claims at1126-1128. The results of these searches may then be provided in prompts to evaluate the factual claims sent to the text generation modeling system at1130-1132. The text generation modeling system may complete the prompts by indicating whether the factual claims are accurate given the available search results. At1134, the text generation interface system revises the correspondence by transmitting one or more prompts to the text generation modeling system. The requests may include the correspondence generated at1122as well as one or more results of the analysis of the factual claims. In this way, the text generation modeling system may revise the correspondence for accuracy, for instance by removing factual claims deemed to be inaccurate. It is important to note that the particular flow shown inFIG.11is only one example of ways in which text generation flows discussed herein may be combined to generate novel text. Many combinations are possible and in keeping with techniques and mechanisms described herein. For example, the flow1100may be supplemented with one or more user interactions. FIG.12illustrates a hallucination detection method1200, performed in accordance with one or more embodiments. The method1200may be performed by the text generation interface system210shown inFIG.2. In some embodiments, the method1200may be performed in order to determine whether novel text generated by a text generation modeling system includes one or more hallucinations. Generative text systems sometimes generate text that includes inaccurate claims. For example, in the legal sphere, a request to summarize a set of judicial opinions about a point of law may result in a summary text that includes a citation to a non-existent opinion. A request is received at1202to identify one or more hallucinations in novel text generated by a text generation model. In some embodiments, the request may be received as part of one or more methods shown herein. For example, the method1200may be performed as part of one or more of the methods shown inFIG.4,FIG.8,FIG.9,FIG.10, and/orFIG.11to evaluate a response returned by the text generation modeling system. When employed in this way, the method1200may be used to prompt the system to revise the response, for instance as discussed with respect toFIG.11. Alternatively, or additionally, the method1200may be used to prompt the system to generate a new response, to flag the error to a systems administrator, and/or to inform a response recipient of a potentially inaccurate response. In some implementations, the request may be received as part of a training and/or testing procedure. For instance, one or more prompts may be tested by the prompt testing utility226against one or more tests stored in the test repository224. A test result may be evaluated using the method1200to determine whether a prompt constructed from a prompt template being tested resulted in the generation of a hallucination, which may be treated as a test failure. 
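Before turning to the individual operations, the overall loop of the method1200may be illustrated with the following minimal Python sketch. The helper functions named here (identify_assertions, determine_search_terms, run_search, summarize_results, and evaluate_assertion) are hypothetical placeholders for the prompts and search queries described below; the sketch shows only how the operations might be orchestrated.

def identify_assertions(novel_text: str) -> list[str]:
    # Hypothetical: prompt the text generation modeling system to list factual claims.
    raise NotImplementedError

def determine_search_terms(assertion: str) -> list[str]:
    # Hypothetical: obtain search terms associated with the assertion.
    raise NotImplementedError

def run_search(terms: list[str]) -> list[str]:
    # Hypothetical: execute a search query against a suitable database.
    raise NotImplementedError

def summarize_results(results: list[str]) -> str:
    # Hypothetical: summarize the search results via summarization prompts.
    raise NotImplementedError

def evaluate_assertion(assertion: str, summary: str) -> str:
    # Hypothetical: returns "true", "false", or "uncertain" based on the summary.
    raise NotImplementedError

def detect_hallucinations(novel_text: str) -> list[dict]:
    # Orchestrates the loop over each factual assertion identified in the novel text.
    findings = []
    for assertion in identify_assertions(novel_text):
        terms = determine_search_terms(assertion)
        summary = summarize_results(run_search(terms))
        if evaluate_assertion(assertion, summary) == "false":
            findings.append({"assertion": assertion, "status": "hallucination"})
    return findings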
One or more factual assertions in the novel text are identified at1204. In some embodiments, the one or more factual assertions may be identified by transmitting a prompt to the text generation modeling system. For instance, the novel text may be included in a prompt requesting that the text generation modeling system identify factual claims in the novel text. The resulting completed prompt may be parsed to identify the one or more factual assertions. A factual assertion is selected for analysis. Factual assertions identified at1204may be analyzed in sequence, in parallel, or in any suitable order. One or more search terms associated with the factual assertion are determined at1208. In some embodiments, one or more search terms may be returned by the text generation modeling system at1204. Alternatively, or additionally, one or more search terms may be determined based on a separate request sent to the text generation modeling system for the factual assertion being analyzed. A search query to identify one or more search results based on the one or more search terms is executed at1210. According to various embodiments, one or more searches may be executed against any suitable database. Such databases may include, but are not limited to: public sources such as the internet, internal document databases, and external document databases. The one or more search results are summarized at1212. In some embodiments, summarizing the one or more search results may involve, for instance, dividing documents into chunks and transmitting the one or more chunks to the text generation modeling system within summarization prompts. At1214, the factual assertion is evaluated against the one or more search results. In some embodiments, evaluating the factual assertion may involve transmitting to the text generation modeling system a prompt that includes a request to evaluate the factual assertion, information characterizing the factual assertion, and a summary of the one or more search results determined as discussed at1212. A determination is made at1216as to whether the factual assertion is accurate. In some embodiments, the determination may be made by parsing the response returned by the text generation modeling system at1214. For instance, the text generation modeling system may complete the prompt by indicating whether the factual assertion is true, false, or uncertain based on the provided summary of search results. If it is determined that the factual assertion is inaccurate, then at1218the factual assertion is identified as a hallucination. In some embodiments, identifying the factual assertion as a hallucination may cause one or more consequences in an encompassing process flow. For example, in a testing phase, the detection of a hallucination may cause the test to fail. As another example, in a production phase, the detection of a hallucination may cause the system to initiate a flow to revise the novel text to remove the hallucination. FIG.13illustrates a document reduction pre-processing method1300, performed in accordance with one or more embodiments. The method1300may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1300may be performed at the text generation interface system210. A request to analyze a set of text portions using a query is received at1302. In some embodiments, the request may be received via a chat interface. 
For instance, the text generation interface system may receive text-based messages from a client machine and then provide to the client machine text-based responses generated by a machine learning model. Alternatively, the request may be received in some other way, such as via an API request. The request may be generated automatically or based on user input. According to various embodiments, a text portion may correspond to a document, a set of documents, a portion of a document, or text outside the context of a document. Text portions may be identified in any of various ways. For example, the request received at1302may include one or more identifiers that uniquely identify individual text portions and/or groups of text portions stored in a document repository or other location accessible to the text generation interface system. As another example, the request received at1302may include a query for searching for text portions within one or more document repositories or other sources of text, and the text portions identified at1302may include results determined by executing such a search. In some implementations, the query included in the request received at1302may include a natural language question, instruction, filter, or other such actionable text implemented in natural language. For example, the query may ask the text generation interface system to answer a set of questions based on information stored in the text portions. As another example, the query may ask the text generation interface system to generate a set of questions for resolving uncertainty related to a topic based on the text portions. As yet another example, the query may ask the text generation interface system to generate an argument or a response to an argument based on the text portions. In this case, the query may include reference information such as an argument to which the text generation interface system is being asked to respond. Thus, the query may include additional information beyond an instruction, a question, or the like, such as contextual information needed to execute the request. A determination is made at1304as to whether to subdivide the query. In some embodiments, the determination may be made based on one or more indicators that the query is complex. For example, a determination may be made to subdivide a query based on its length and/or complexity. As another example, a determination may be made to subdivide the query based on the presence, absence, or number of characteristics such as question marks, sentences, conjunctives, and other such features. The determination may be made based at least in part on a machine learning model applied to the query to classify it in terms of complexity. If it is determined to subdivide the query, then at1306a query division prompt is determined for dividing the query into subqueries. In some embodiments, the prompt may be determined by combining a prompt template with the text of the query. The prompt template may include an instruction to divide the query into a set of subqueries. The prompt template may also include a fillable portion into which the query text may be inserted. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. 
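As a purely illustrative example, the determination at1304of whether to subdivide the query might be implemented with simple surface heuristics such as those in the following Python sketch; the specific features and thresholds are assumptions for illustration, and a machine learning classifier could be applied instead, as noted above.

import re

def should_subdivide_query(query: str, max_words: int = 40) -> bool:
    # Heuristic indicators of query complexity: overall length, multiple
    # question marks or sentences, and coordinating conjunctions.
    word_count = len(query.split())
    question_marks = query.count("?")
    sentences = len([s for s in re.split(r"[.?!]+", query) if s.strip()])
    conjunctions = len(re.findall(r"\b(and|or|as well as|also)\b", query, flags=re.IGNORECASE))
    return word_count > max_words or question_marks > 1 or sentences > 1 or conjunctions >= 2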
An example of a query division prompt template is as follows:You are part of a retrieval system that attempts to understand user queries, split the following query into simpler queries that can be run individually:{{query text}}<|endofprompt|> At1308, two or more subqueries are identified based on communication with a text generation modeling system. In some embodiments, the two or more subqueries may be identified by sending the query division prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the query division prompt, after which it may be sent back to the text generation interface system. The text generation interface system may then extract the subqueries from the completed query division prompt, for instance by parsing JSON included in the completed request. A query is selected for analysis at1310. According to various embodiments, queries may be analyzed in sequence, in parallel, or in any suitable order. A training data generation prompt for generating training data based on the selected query is determined at1312. In some embodiments, the training data generation prompt may include an instruction for instructing a text generation modeling system to generate text that matches the query. The training data generation prompt may include a fillable portion for including the text of the query. Training data for the selected query is determined at1314based on communication with the text generation modeling system. In some embodiments, the training data may be identified by sending the training data generation prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the training data generation prompt, after which it may be sent back to the text generation interface system. The text generation interface system may then extract the training data from the completed query division prompt, for instance by parsing JSON included in the completed request. An example of a prompt template for generating training data in the context of legal contracts is as follows:The task is to generate queries and <N> variations thereof that would retrieve a specific contract clause in a retrieval system comprised of a large collection of contracts.Given the following clause for clause type<clause_type>: <clause> queries:<|endofprompt|> In some embodiments, the training data may include one or more training data text portions. Each training data text portion may include text constructed by the text generation modeling system based on the text of the query. For example, a training data text portion may substitute one or more of the words in the query for synonyms. As another example, a training data text portion may restate a query using a different sentence structure. A trained classification model is determined at1316based on the training data. According to various embodiments, any of a variety of classification models may be used. For instance, the classification model may include a text embedding model that positions text in a vector space. A determination is made at1318as to whether to select an additional query for analysis. In some implementations, additional queries may continue to be selected until all available queries are processed. 
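For illustration, the training data generation at1312-1314and the classifier training at1316might be sketched as follows. The generate_training_variants() and embed() helpers are hypothetical placeholders for the training data generation prompt and for a pre-trained text embedding model, respectively, and the centroid-based scoring shown here is only one of many possible classifier designs.

import numpy as np

def generate_training_variants(query: str) -> list[str]:
    # Hypothetical: send a training data generation prompt to the text
    # generation modeling system and parse the returned query variants.
    raise NotImplementedError

def embed(text: str) -> np.ndarray:
    # Hypothetical text embedding model that positions text in a vector space.
    raise NotImplementedError

class QueryRelevanceClassifier:
    # Embedding-based classifier: the centroid of the embedded training
    # variants represents the query, and text portions are later scored by
    # cosine similarity to that centroid.
    def __init__(self, query: str):
        variants = [query] + generate_training_variants(query)
        vectors = np.stack([embed(v) for v in variants])
        self.centroid = vectors.mean(axis=0)

    def score(self, text_portion: str) -> float:
        v = embed(text_portion)
        return float(
            np.dot(v, self.centroid)
            / (np.linalg.norm(v) * np.linalg.norm(self.centroid))
        )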
If it is determined not to select an additional query for analysis, then a subset of the text portions is selected based on the one or more queries and the associated classification models. Additional details regarding the selection of text portions for analysis are discussed with respect to the method1400shown inFIG.14. FIG.14illustrates a document reduction first stage method1400, performed in accordance with one or more embodiments. The method1400may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1400may be performed at the text generation interface system210. A request is received at1402to reduce a set of text portions based on a query. In some embodiments, the request may be generated as discussed with respect to operation106. The request may identify a query to answer and a set of text portions that may be used to answer the query. Optionally, the request may be generated after performing one or more of the preprocessing operations discussed with respect to the method1300shown inFIG.13. A text portion is selected for relevance analysis at1404. According to various embodiments, text portions may be analyzed in parallel or in sequence, and in any suitable order. A text portion type associated with the text portion is determined at1406. A machine learning model is determined at1408based on the text portion type. In some embodiments, the text portion type may be determined based on the application of a classification model. For instance, a machine learning model may be configured to classify text portions or documents into one or more of a set of types of text. Then, a machine learning model may be selected that is specific to the text portion type. In some embodiments, different types of text may be associated with different types of models. Alternatively, or additionally, a type of text may be associated with a machine learning model that is specifically trained for that type of text. A relevance score is determined at1410by comparing the text portion to the query using a machine learning model. According to various embodiments, any of a variety of machine learning models may be used. In some embodiments, a machine learning model may be implemented as a pre-trained text embedding model trained as discussed with respect toFIG.13. For instance, a machine learning model may be implemented as a bi-encoder in which text portions are separately encoded and then mapped to a common embedding space. Then, at1410, the relevance score may depend on the distance between the query and the text portion in the embedding space. As another example, a machine learning model may be implemented as a cross-encoder model. In a cross-encoder, all or a portion of the query and all or a subportion of the text portion may be compared in a pair model, which may be built on a transformer-based language model such as BERT (Bidirectional Encoder Representations from Transformers) or RoBERTa (Robustly Optimized BERT Pretraining Approach). FIG.15illustrates a cross-encoder modeling system, configured in accordance with one or more embodiments. The cross-encoder modeling system accepts as input both a query portion1502and a text portion1504. The query and text portions are separated in the input by a separator1506. The cross-encoder modeling system employs a number of layers of cross-linked neurons1508to produce a relevance score1510.
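A minimal sketch of the bi-encoder variant of the relevance scoring at1410, together with the threshold-based inclusion decision discussed below, is provided here for illustration. The embed() helper is again a hypothetical stand-in for a pre-trained text embedding model, cosine similarity is one possible distance measure in the common embedding space, and the threshold value is arbitrary; a cross-encoder would instead accept the query and text portion as a single separated input and produce the relevance score directly.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical pre-trained text embedding model.
    raise NotImplementedError

def relevance_score(query: str, text_portion: str) -> float:
    # Bi-encoder comparison: the query and the text portion are encoded
    # separately and compared in the common embedding space.
    q, t = embed(query), embed(text_portion)
    return float(np.dot(q, t) / (np.linalg.norm(q) * np.linalg.norm(t)))

def filter_text_portions(query: str, text_portions: list[str], threshold: float = 0.5) -> list[str]:
    # A text portion is included for query analysis only if its relevance
    # score exceeds the designated (model-dependent) threshold; otherwise it
    # is excluded.
    return [tp for tp in text_portions if relevance_score(query, tp) > threshold]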
According to various embodiments, the number of layers of neurons and the number of neurons in each layer may be strategically determined for accuracy and efficiency. For instance, one or more text embedding models may be created using a training data set. The text embedding models may then be used to produce relevance scores for a number of different queries and text portions. The relevance scores may then be used to create a loss function for hyperparameter tuning of the number of layers of neurons and number of neurons per layer in a cross-encoder model. Then, the cross-encoder model may be used for future iterations without pre-training. In some embodiments, a combination of approaches may be used. For instance, in a trans-encoder, one or more bi-encoder representations may be used to fine-tune a cross-encoder. Then, the cross-encoder may be used to perform more accurate knowledge extraction using inter-sentence modeling. The resulting information may be used to improve the accuracy of the bi-encoder model. The process may be repeated to iteratively bootstrap from both the bi-encoder and the cross-encoder. A determination is made at1408as to whether the relevance score exceeds a designated threshold. According to various embodiments, the designated threshold may be strategically determined based on various factors. For example, different machine learning models may produce relevance scores having different distributions, leading to a designated threshold that is model-dependent. As another example, the designated threshold may be determined based at least in part on the number of text portions included in the request and a desired reduction of the text portions. For instance, the designated threshold may be determined so as to select a particular number or proportion of the text portions as relevant. As another example, the designated threshold may be determined so as to select more or fewer text portions as relevant, which may involve various tradeoffs. For instance, setting a lower designated threshold may result in selecting more documents as relevant, potentially leading to improved accuracy in answering the query at the expense of relatively greater cost and compute time. If it is determined that the relevance score does not exceed the designated threshold, then at1414the selected text portion is excluded for query analysis. If instead it is determined that the relevance score does exceed the designated threshold, then at1416the selected text portion is included for query analysis. A determination is made at1418as to whether to select an additional text portion for analysis. According to various embodiments, text portions may continue to be selected until all available text portions have been analyzed for relevance. If it is determined not to select an additional text portion for analysis, then at1420an answer to the query is determined based on the included text portions. According to various embodiments, determining an answer to the query may involve communicating with a text generation modeling system using the selected text portion. In some implementations, determining an answer to the query may involve implementing one or more elements from workflows discussed herein. Optionally, the text portions may be reduced further, for instance as described with respect to the method1600shown inFIG.16. FIG.16illustrates a document reduction second stage method1600, performed in accordance with one or more embodiments. 
The method1600may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1600may be performed at the text generation interface system210. A request is received at1602to reduce a set of text portions based on a query. In some embodiments, the request may be generated as discussed with respect to operation108. The request may identify a query to answer and a set of text portions that may be used to answer the query. Optionally, the request may be generated after performing one or more of the preprocessing operations discussed with respect to the method1300shown inFIG.13and/or one or more of the document reduction operations discussed with respect to the method1400shown inFIG.14. One or more text portions are selected for analysis at1604. In some embodiments, text portions may be selected so as to fit within a designated chunk size. Additional details regarding the division of text into chunks are discussed with respect to the method600shown inFIG.6. A relevance prompt is determined at1606based on the selected one or more text portions. In some embodiments, the relevance prompt template may also include an instruction to the text generation modeling system to evaluate and/or rank the included text portions for relevance against the query. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a relevance prompt template is as follows:Evaluate whether these documents are relevant to this research request or query: “{{text}}”$$DOCUMENTS$$DOCUMENTS$$/DOCUMENTS$$Only respond with relevant documents. In order to be deemed relevant, a document must directly answer the request or query. A document should also be considered relevant if it reaches a conclusion in opposition to the research request.If there are no relevant documents, do not include any in your response.Assign a relevance score to each document, judging its relevance to the research request or query: “{{text}}”. The score should correlate to these values:5—the document is directly on-point (i.e., it precisely responds to every aspect of the query or request, even if it is in opposition to the request, and not a similar but different issue; it fully and conclusively settles the question raised in the request either in favor or against the intention of the request, if any)4—the document may provide a useful analogy to help answer the request, but is not directly responsive3—the document is roughly in the same topical area as the request, but otherwise not responsive2—the document might have something to do with the request, but there is no indication that it does in the text provided1—the document is in no way responsive to the requestReturn a JSON array of objects, each object representing a relevant case, ordered with the most relevant case first. Each object in the array will have the keys:\‘result_id\’—string, the result ID\‘reason_relevant\’—string, a description of how the document addresses the research request or query: “{user_request}”. In drafting this response, only draw from the excerpted language of the document; do not include extraneous information.\‘relevance_score\’—number, between 1-5, of how relevant the document is to the research request or query: “user_request”\‘quotes\’—array of strings. For each document, quote the language from the document that addresses the request. 
In finding these quotes, only draw from the excerpted language; do not include extraneous information. Do not put additional quotation marks around each quote beyond the quotation marks required to make valid JSON.Only valid JSON. Quotation marks within strings must be escaped with a backslash (\‘\\\’). Examples for reason_relevant: \‘“The concept of \\“equitable tolling\\” applies in this case.“\‘, \’” The case overturns a lower court decision that found a state abortion restriction unconstitutional based on Roe v. Wade and Casey, and argues that the viability rule from those cases is not the \\“central holding.\\” This case calls into question the continued validity of Roe v. Wade.“\’If there are no relevant documents, respond with an empty array.<|endofprompt|>Here's the JSON: Relevance scores for the selected one or more text portions are determined at1608based on communication with a text generation modeling system. In some embodiments, the relevance scores may be identified by sending the relevance prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the relevance prompt, after which it may be sent back to the text generation interface system. The text generation interface system may then extract the relevance from the completed query division prompt, for instance by parsing JSON included in the completed request. In particular embodiments, the relevance prompts may be implemented as high-read, low-write. In such a configuration, the text generation modeling system may be instructed to provide a small amount of feedback for a text portion rather than to generate a description in natural language. For instance, the text generation modeling system may be asked to provide a sequence of numbers corresponding to relevance scores for the sequence of text portions. In this way, the cost associated with interacting with the text generation modeling system may be reduced. A subset of the selected one or more text portions are selected as relevant at1610based on the relevance scores. According to various embodiments, the subset of the text portions may be selected as relevant based on a comparison of the relevance score against a designated threshold. As discussed with respect to the operation1408shown inFIG.14, a relevance threshold may be determined based on various factors. A determination is made at1612as to whether to select an additional text portion for analysis. According to various embodiments, additional text portions may continue to be selected until all available text portions have been analyzed for relevance. If it is determined not to select an additional text portion for analysis, then at1614an answer to the query is determined based on the text portions selected as relevant. According to various embodiments, determining an answer to the query may involve communicating with a text generation modeling system using the selected text portion. Determining an answer to the query may involve implementing one or more elements from workflows discussed herein. According to various embodiments, an answer to a query may be determined in various ways after performing one or two stages for reducing the number of input documents. In some embodiments, a particular workflow may be performed, depending on the type of query. Various types of suitable workflows are discussed throughout the application as filed. 
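As a concrete illustration of this second-stage reduction, the following Python sketch batches text portions into a relevance prompt, obtains relevance scores from the text generation modeling system, and retains only the portions that meet a designated score threshold. The render_relevance_prompt() and complete_prompt() helpers are hypothetical, the response is assumed to be only the JSON array described in the template above, and the threshold of 4 is arbitrary.

import json

def render_relevance_prompt(query: str, text_portions: list[dict]) -> str:
    # Hypothetical: insert the query and the selected text portions into a
    # relevance prompt template, for instance via Jinja2.
    raise NotImplementedError

def complete_prompt(prompt: str) -> str:
    # Hypothetical: send the relevance prompt to the remote text generation
    # modeling system via an API request and return the generated completion.
    raise NotImplementedError

def select_relevant_portions(query: str, text_portions: list[dict], threshold: int = 4) -> list[dict]:
    # Determine relevance scores for the selected text portions and retain
    # those whose score meets the designated threshold.
    prompt = render_relevance_prompt(query, text_portions)
    completed = complete_prompt(prompt)
    scored = json.loads(completed)  # JSON array of objects with result_id and relevance_score
    relevant_ids = {
        item["result_id"] for item in scored if item.get("relevance_score", 0) >= threshold
    }
    return [tp for tp in text_portions if tp.get("result_id") in relevant_ids]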
In some implementations, such as in the context of document review and/or retrieval, a chaining approach may be employed. In a chaining approach, the documents remaining after document reduction may be divided into chunks, for instance as discussed with respect toFIG.6. Then, individual chunks may be provided to the text generation system with the same prompt, for instance a request to answer a query based on the text included in the chunk. The completed prompt templates may then be included in a single prompt along with a request for the large language model implemented at the text generation modeling system to synthesize a single answer based on the chunk-level answers. In some embodiments, an answer to the query may be determined at least in part by employing a prompt that includes an instruction to determine an answer to a query based on one or more text portions identified as being potentially relevant. An example of such a prompt in the legal context is provided below:Answering a Question about a {{context}}The following is a Question being asked by a User and an excerpt from a Contract that we believe contains the answer to the question.Question: {{query}}Contract Clauses:<contract_clauses>{% for contract_section in paragraphs %}<section><text>{{contract_section.text}}</text></section>{% endfor %}</contract_clauses>Please answer the Question using only information in the Clauses provided. If the Clauses do not contain the answer to the Question, never try to guess—just state that the question cannot be answered from the information provided.Provide your answer in the following XML format:<question_comprehension>[restate what the Question is trying to ask in clear terms to show that you understood the question]</question_comprehension><quote_text>[quote the text from the Clauses above that answer the question. Provide exact quote with nothing added, though you may use ellipses ( . . . ) to skip less relevant portions in a quote. If the Question is asking about the presence or absence of a certain term, within a type of clause, the clause should be listed here even if the term is absent. If the Question cannot be answered from the information in the Clauses, write NO ANSWER here. If all of the text from the Clauses is relevant to the question, just write ALL RELEVANT here.]</quote_text><full_answer>[your full answer to the question here, including any explanation you think appropriate to give (or write NO ANSWER if you said NO ANSWER above). Where numerical amounts are involved or where the question or clause language are complex, write out the step-by-step reasoning required to come to the answer.]</full_answer><short_answer>[{{answer_type_instruction}} (or write NO ANSWER if you said NO ANSWER above)]</short_answer><|endofprompt|><question_comprehension> In some embodiments, a query may be answered via multiple prompts corresponding to, for instance, different portions of a contract and/or excerpts from different contracts. In such a situation, one or more consolidation prompts may be used to consolidate the answers from these different prompts into a single answer.
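Before turning to an example consolidation prompt, the chaining approach described above may be sketched roughly as follows. This Python outline is illustrative only; divide_into_chunks and complete_prompt are assumed helpers standing in for the chunking techniques discussed with respect toFIG.6and for the interaction with the text generation modeling system, and the prompt wording is abbreviated relative to the templates shown in this document.

def answer_by_chaining(query, documents, divide_into_chunks, complete_prompt):
    # Divide the reduced document set into chunks sized for the model's context window.
    chunks = divide_into_chunks(documents)

    # Ask the same question of each chunk individually.
    partial_answers = []
    for chunk in chunks:
        chunk_prompt = (
            "Answer the question using only the text below.\n"
            f"Question: {query}\nText: {chunk}"
        )
        partial_answers.append(complete_prompt(chunk_prompt))

    # Consolidate the chunk-level answers into a single overall answer.
    partials = "\n".join(
        f"<partial_answer><id>{index}</id>{answer}</partial_answer>"
        for index, answer in enumerate(partial_answers)
    )
    consolidation_prompt = (
        "Consolidate the following partial answers into the best overall answer.\n"
        f"Question: {query}\nPartial Answers:\n{partials}"
    )
    return complete_prompt(consolidation_prompt)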
An example of a consolidation prompt is as follows:#InstructionsYour job is to consolidate a number of different answers about parts of a document to come up with the best answer to a question that responds for the document overall.The partial answers presented to you will include a summary answer, an explanation of the answer, and the language the partial answer relied upon to come to that conclusion.Make sure your overall answer is only based on the information provided, and no extraneous information.Contracts often contain rules and exceptions or carveouts for certain situations. If you see this, note both the rule and the exception.In some cases, one of the partial answers will be correct and the others incorrect. In those situations, you can simply copy the correct answer.In other cases, multiple partial answers will be correct and provide parts of the overall answer. In those situations, you should synthesize your answer by combining information from the partial answers.If the partial answers do not provide enough information to fully answer the question, give as good of an answer as you can but fold in the uncertainty of your answer.#Output formatYour output should be in this format:<overall_answer>[string; an overall answer for the document. Always keep it on one line, even if it is a list—do not make bullet points.]</overall_answer><short_answer>[summarize your answer in 1-5 words]</short_answer><partials_relied_upon>[space separated list of integers representing the IDs of the partial answers that were used to create the overall answer. If a partial answer is wrong or didn't contribute any information to the overall answer, do not list it here</partials_relied_upon><explanation>[explain why you chose the overall answer you chose]</explanation>#ExamplesExample 1:Question: What is the vesting schedule of the stock option?Partial Answers:<partial_answers><partial_answer><id>1</id><language> The remainder of the shares shall vest in increments of 1/48th of the TotalShares each month for the following 3 years</language><summary> 1/48th monthly over 3 years</summary><explanation> The contract says that 1/48th of the shares will vest every month and that this will continue for 3 years, so the vesting schedule is 1/48th of the shares on a monthly basis for 3 years.</explanation></partial_answer><partial_answer><id>2</id><language>¼th of the shares will vest 1 year after the date of this agreement.</language><summary>¼th after 1 year</summary><explanation> The contract says that one quarter of the shares will vest on the 1-year anniversary of the date of this agreement, thus the vesting schedule is ¼th after 1 year.</explanation></partial_answer>In the above example, your response would be:<overall_answer> The vesting schedule is ¼ quarter of the shares vest at 1 year from the date of this agreement, and 1/48th of the shares shall vest monthly thereafter for the following 3 years.</overall_answer><partials_relied_upon>1 2</partials_relied_upon>#TaskOK, let's beginQuestion: {{question}}Partial Answers:<partial_answers>{% for partial_answer in partial_answers %}<partial_answer><id>{{loop.index0}}</id><language>{{partial_answer.paragraph.text}}</language><summary>{{partial_answer.short_form_answer}}</summary><explanation>{{partial_answer.answer_to_question}}</explanation></partial_answer>{% endfor %}</partial_answers><|endofprompt|><overall_answer> Any of the disclosed implementations may be embodied in various types of hardware, software, firmware, computer readable media, and combinations 
thereof. For example, some techniques disclosed herein may be implemented, at least in part, by computer-readable media that include program instructions, state information, etc., for configuring a computing system to perform various services and operations described herein. Examples of program instructions include both machine code, such as produced by a compiler, and higher-level code that may be executed via an interpreter. Instructions may be embodied in any suitable language such as, for example, Java, Python, C++, C, HTML, any other markup language, JavaScript, ActiveX, VBScript, or Perl. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks and magnetic tape; optical media such as compact disks (CD) or digital versatile disks (DVD); magneto-optical media; and hardware devices such as flash memory, read-only memory (“ROM”) devices, and random-access memory (“RAM”) devices. A computer-readable medium may be any combination of such storage devices. In the foregoing specification, various techniques and mechanisms may have been described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. For example, a system described as using a processor in a variety of contexts can instead use multiple processors while remaining within the scope of the present disclosure unless otherwise noted. Similarly, various techniques and mechanisms may have been described as including a connection between two entities. However, a connection does not necessarily mean a direct, unimpeded connection, as a variety of other entities (e.g., bridges, controllers, gateways, etc.) may reside between the two entities. In the foregoing specification, reference was made in detail to specific embodiments including one or more of the best modes contemplated by the inventors. While various implementations have been described herein, it should be understood that they have been presented by way of example only, and not limitation. For example, some techniques and mechanisms are described herein in the context of large language models. However, the techniques disclosed herein apply to a wide variety of language models. Particular embodiments may be implemented without some or all of the specific details described herein. In other instances, well-known process operations have not been described in detail in order to avoid unnecessarily obscuring the disclosed techniques. Accordingly, the breadth and scope of the present application should not be limited by any of the implementations described herein, but should be defined only in accordance with the claims and their equivalents.
126,851
11861321
DETAILED DESCRIPTION Techniques and mechanisms described herein provide for the generation of novel text based on structured input documents. According to various embodiments, a document may first be analyzed by a large language model to identify a set of structural components. The structural components may then be used to subdivide the document into individual portions of text. These text portions may then be analyzed to determine structural information associated with each portion. The structural information and the text portions may then be used to determine a structured document in which the text portions are arranged and organized in association with structural information. Finally, the structured document may be analyzed by a large language model to generate novel text. Consider the challenge of a transactional attorney who wishes to understand the common formulation of a given deal term in the market for contracts having particular characteristics. Using conventional techniques, the transactional attorney would need to rely on inaccurate and/or incomplete information, such as personal knowledge, simple text searches, surveys, practice guides, manual review of large volumes of documents, and the like. Such processes are slow, expensive, and/or error prone. The same is true for a variety of such complex, text-based inquiries. The following example queries that may be addressed in accordance with some embodiments of techniques and mechanisms described herein are drawn from the analysis of legal contracts. For example, “Show me material adverse effect definitions from public company merger agreements in the last 2 years.” As another example, “Identify all double trigger vesting acceleration clauses.” As yet another example, “What is the typical liquidation preference multiple in Series B rounds in the last 3 years?” As still another example, “Was it typical for force majeure clauses to mention pandemics prior to 2020?” Making matters worse, many documents include useful information embedded in the document structure itself. As a simple example, consider a document that includes a list of factual assertions under a heading that states: “These statements have been admitted as false.” Considered in isolation, each of the factual assertions would lead a conventional natural language processing system to an inaccurate conclusion. As another simple example, consider a document that includes one subheading that identifies statements of facts agreed upon by the parties, and another subheading identifying statements of fact that are in dispute. Again, the contextual information embedded in the document's structure is helpful for a natural language processing system to more fully understand the text of the document within each subheading. In contrast, embodiments of techniques and mechanisms described herein may be used to generate answers to complex queries of natural language documents. For instance, keeping to the above example, a set of reference contracts may be parsed to generate or update a database table characterizing the reference contracts along one or more numerical and/or classification dimensions. The database system may then be queried using terms identified based on a search query to identify a set of contracts that exhibit particular characteristics. The identified documents may then be further analyzed using a large language model to determine and quantify the various formulations of the given deal term for those documents, based in part on the structure of such documents.
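As a rough illustration of this flow, the following Python sketch filters a database table of contract characteristics and then provides the matching clause text to a large language model for analysis. The table and column names, the filter values, and the complete_prompt helper are hypothetical placeholders introduced only for illustration; they are not the actual schema or interfaces described elsewhere in this document.

import sqlite3

def find_typical_formulation(db_path, deal_term, min_year, complete_prompt):
    # Query a table of per-contract characteristics produced when the reference contracts were parsed.
    connection = sqlite3.connect(db_path)
    rows = connection.execute(
        "SELECT contract_id, clause_text FROM contract_clauses "
        "WHERE clause_type = ? AND contract_year >= ?",
        (deal_term, min_year),
    ).fetchall()
    connection.close()

    # Provide the structured clause text for the filtered contracts to the large language model.
    clauses = "\n".join(
        f"<clause contract_id='{contract_id}'>{clause_text}</clause>"
        for contract_id, clause_text in rows
    )
    prompt = (
        f"Identify and quantify the common formulations of the '{deal_term}' term "
        f"in the following clauses:\n{clauses}"
    )
    return complete_prompt(prompt)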
According to various embodiments, techniques and mechanisms described herein may be able to review large numbers of documents and to understand them sufficiently well so as to classify them along one or more numerical and/or discrete dimensions. The documents may then be filtered to identify a subset of documents relevant to a particular search query. The text of the filtered documents may then be analyzed against the search query to produce document-level answers to the search query. These document-level answers may then be combined into a single response to the search query. For instance, the system may answer a search query that asks about which features are common in a subset of a corpus of documents that exhibit one or more characteristics. According to various embodiments, techniques and mechanisms described herein provide for novel text generation in domain-specific contexts. A text generation interface system may take as input one or more arbitrary documents, process them via optical text recognition, segment them into portions, and process the segmented text via various tasks based on need. Different workflows are provided for different tasks, and this application describes a number of examples of such workflows. In many workflows, an input document is divided into chunks via a chunking technique. Then, chunks are inserted into prompt templates for processing by a large language model such as the GPT-3 or GPT-4 available from OpenAI. The large language model's response is then parsed and potentially used to trigger additional analysis, such as one or more database searches, one or more additional prompts sent back to the large language model, and/or a response returned to a client machine. According to various embodiments, techniques and mechanisms described herein provide for retrieval augmented generation. A search is conducted based on a search query. Then, the search results are provided to an artificial intelligence system. The artificial intelligence system then further processes the search results to produce an answer based on those search results. In this context, a large language model may be used to determine the search query, apply one or more filters and/or tags, and/or synthesize potentially many different types of search. Such techniques may be aided by employing structured rather than unstructured document text. According to various embodiments, techniques and mechanisms described herein provide for a sophisticated document processing pipeline. The pipeline receives one or more input documents, identifies text that should be kept together, identifies extraneous text such as headers, footers, and line numbers, and segments the text accordingly. In this way, the quality of the text provided to the rest of the system is improved. Similarly, document text may be subdivided into portions which may then be arranged in accordance with structural information. In this way, the contextual information embedded in document structure may be employed during document analysis. According to various embodiments, techniques and mechanisms described herein provide for new approaches to text segmentation. Large language models often receive as input a portion of input text and generate in response a portion of output text. In many systems, the large language model imposes a limit on the input text size. 
Accordingly, in the event that the large language model is asked to summarize a lengthy document, the document may need to be segmented into portions in order to achieve the desired summarization. Conventional text segmentation techniques frequently create divisions in text that negatively affect the performance of the model, particularly in domain-specific contexts such as law. For example, consider a caption page of a legal brief, which includes text in a column on the left that encompasses the parties, text in a column on the right that includes the case number, a title that follows lower on the page, and line numbering on the left. In such a configuration, the text in the different columns should not be mixed and should be treated separately from the line numbers, while both columns should precede the document title, when converting the document to an input query for a large language model. However, conventional techniques would result in these semantically different elements of text being jumbled together, resulting in an uninformative query provided to the large language model and hence a low-quality response. In contrast to these conventional techniques, techniques and mechanisms described herein provide for a pipeline that cleans such raw text so that it can be provided to a large language model. According to various embodiments, techniques and mechanisms described herein provide for the division of text into chunks, and the incorporation of those chunks into prompts that can be provided to a large language model. For instance, a large language model may impose a limit of 8,193 tokens on a task, including text input, text output, and task instructions. In order to process longer documents, the system may split them. However, splitting a document can easily destroy meaning depending on where and how the document is split. Techniques and mechanisms described herein provide for evenly splitting a document or documents into chunks, and incorporating those chunks into prompts, in ways that retain the semantic content associated with the raw input document or documents. In some embodiments, techniques and mechanisms described herein may be applied to generate novel text in domain-specific contexts, such as legal analysis. Large language models, while powerful, have a number of drawbacks when used for technical, domain-specific tasks. When using conventional techniques, large language models often invent “facts” that are actually not true. For instance, if asked to summarize the law related to non-obviousness in the patent context, a large language model might easily invent a court case, complete with caption and ruling, that in fact did not occur. In contrast to conventional techniques, techniques and mechanisms described herein provide for the generation of novel text in domain-specific contexts while avoiding such drawbacks. According to various embodiments, techniques and mechanisms described herein may be used to automate complex, domain-specific tasks that were previously the sole domain of well-trained humans. Moreover, such tasks may be executed in ways that are significantly faster, less expensive, and more auditable than the equivalent tasks performed by humans. For example, a large language model may be employed to produce accurate summaries of legal texts, to perform legal research tasks, to generate legal documents, to generate questions for legal depositions, and the like.
In some embodiments, techniques and mechanisms described herein may be used to divide text into portions while respecting semantic boundaries and simultaneously reducing calls to the large language model. The cost of using many large language models depends on the amount of input and/or output text. Accordingly, techniques and mechanisms described herein provide for reduced overhead associated with prompt instructions while at the same time providing for improved model context to yield an improved response. In some embodiments, techniques and mechanisms described herein may be used to process an arbitrary number of unique documents (e.g., legal documents) that cannot be accurately parsed and processed via existing optical character recognition and text segmentation solutions. In some embodiments, techniques and mechanisms described herein may be used to link a large language model with a legal research database, allowing the large language model to automatically determine appropriate searches to perform and then ground its responses to a source of truth (e.g., in actual law) so that it does not “hallucinate” a response that is inaccurate. In some embodiments, techniques and mechanisms described herein provide for specific improvements in the legal domain. For example, tasks that were previously too laborious for attorneys with smaller staffs may now be more easily accomplished. As another example, attorneys may automatically analyze large volumes of documents rather than needing to perform such tasks manually. As another example, text chunking may reduce token overhead and hence cost expended on large language model prompts. As yet another example, text chunking may reduce calls to a large language model, increasing response speed. As still another example, text chunking may increase and preserve context provided to a large language model by dividing text into chunks in semantically meaningful ways. According to various embodiments, techniques and mechanisms described herein may provide for automated solutions for generating text in accordance with a number of specialized applications. Such applications may include, but are not limited to: simplifying language, generating correspondence, generating a timeline, reviewing documents, editing a contract clause, drafting a contract, performing legal research, preparing for a deposition, drafting legal interrogatories, drafting requests for admission, drafting requests for production, briefing a litigation case, responding to requests for admission, responding to interrogatories, responding to requests for production, analyzing cited authorities, and answering a complaint. FIG.1illustrates an overview method100for generating novel text, performed in accordance with one or more embodiments. In some implementations, the method100may be performed at a text generation interface system such as the system200shown inFIG.2. For instance, the method100may be performed at the text generation interface system210. An input document is preprocessed at102to determine one or more input text portions. According to various embodiments, preprocessing an input document may involve one or more operations related to cleaning, parsing, tokenizing, sharding, analyzing, structuring, or dividing the text of the input document. Additional details regarding some examples of the types of operations that may be performed during document preprocessing are discussed with respect toFIG.3,FIG.4, andFIG.5.
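A minimal illustrative sketch of the kind of preprocessing contemplated at operation 102 follows; the cleaning rules and the portion size below are assumptions made for illustration rather than the specific operations discussed with respect toFIG.3throughFIG.5.

import re

def preprocess_document(raw_text, max_portion_chars=2000):
    # Normalize whitespace and strip simple artifacts such as page numbers appearing on their own lines.
    cleaned = re.sub(r"^\s*\d+\s*$", "", raw_text, flags=re.MULTILINE)
    cleaned = re.sub(r"[ \t]+", " ", cleaned)

    # Divide on blank lines (paragraph boundaries) and pack paragraphs into size-limited portions.
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", cleaned) if p.strip()]
    portions, current = [], ""
    for paragraph in paragraphs:
        if current and len(current) + len(paragraph) > max_portion_chars:
            portions.append(current)
            current = ""
        current = (current + "\n" + paragraph).strip()
    if current:
        portions.append(current)
    return [{"portion_index": index, "text": text} for index, text in enumerate(portions)]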
One or more regular expressions for determining disaggregated document portions are identified at104. In some implementations, regular expressions may be determined by providing some or all of the input text portions determined at102to a large language model for analysis. The text portions may be included in one or more prompts that in turn include natural language instructions to the large language model. The instructions may instruct the large language model to determine one or more regular expressions for subdividing the text into portions that correspond with structural elements of the input document. For instance, a structural element may include a heading, a subheading, a paragraph, a bulleted list, or some other type of text included within the document. Additional details regarding the determination of the regular expressions are discussed with respect to the method1800shown inFIG.18. The regular expressions are applied to the text of the document at106to determine a set of disaggregated text portions. In some embodiments, applying the regular expressions may involve executing them against the input text portions to determine a match. When a match is determined, an input text portion may be divided into two or more disaggregated text portions. The disaggregated text portions may in turn be evaluated against other regular expressions until the input text portions have been fully subdivided. Additional details regarding the disaggregation of the input text portions into the disaggregated text portions are discussed with respect to the method1900shown inFIG.19. Structural information is determined for the disaggregated text portions at108. In some embodiments, the structural information may be determined at least in part by providing to a large language model one or more prompts that include the disaggregated text portions. The one or more prompts may include natural language instructions to determine structural information for the disaggregated text portions. Additional details regarding the determination of structural information are discussed with respect to the method2000shown inFIG.20. A structured document is determined at110based on the disaggregated text portions and the structural information. In some embodiments, determining a structured document may involve creating a data structure, structured document (e.g., XML, JSON, etc.), or other type of output that reflects both the input text portions and the structural information. Additional details regarding the determination of the structured document are discussed with respect to the method2100shown inFIG.21. The structured document is analyzed at112to determine novel text. According to various embodiments, the operations performed when determining novel text based on the structured document may vary based on the type of application. Examples of such applications may include, but are not limited to: search, querying, policy evaluation, correspondence generation, filtering, and more. Additional details regarding the determination of novel text based on the structured document are discussed throughout the application, for instance with respect toFIGS.8-18. FIG.2illustrates a text generation system200, configured in accordance with one or more embodiments. The text generation system200includes client machines202through204in communication with a text generation interface system210, which in turn is in communication with a text generation modeling system270.
The text generation modeling system270includes a communication interface272, a text generation API274, and a text generation model276. The text generation interface system210includes a communication interface212, a database system214, a testing module220, and an orchestrator230. The testing module220includes a query cache222, a test repository224, and a prompt testing utility226. The orchestrator230includes skills232through234, and prompt templates236through238. The orchestrator also includes a chunker240and a scheduler242. The orchestrator also includes API interfaces250, which include a model interface252, an external search interface254, an internal search interface256, and a chat interface258. According to various embodiments, a client machine may be any suitable computing device or system. For instance, a client machine may be a laptop computer, desktop computer, mobile computing device, or the like. Alternatively, or additionally, a client machine may be an interface through which multiple remote devices communicate with the text generation interface system210. According to various embodiments, a client machine may interact with the text generation interface system in any of various ways. For example, a client machine may access the text generation interface system via a text editor plugin, a dedicated application, a web browser, other types of interactions techniques, or combinations thereof. According to various embodiments, the text generation modeling system270may be configured to receive, process, and respond to requests via the communication interface272, which may be configured to facilitate communications via a network such as the internet. In some embodiments, some or all of the communication with the text generation modeling system270may be conducted in accordance with the text generation API274, which may provide remote access to the text generation model276. The text generation API274may provide functionality such as defining standardized message formatting, enforcing maximum input and/or output size for the text generation model, and/or tracking usage of the text generation model. According to various embodiments, the text generation model276may be a large language model. The text generation model276may be trained to predict successive words in a sentence. It may be capable of performing functions such as generating correspondence, summarizing text, and/or evaluating search results. The text generation model276may be pre-trained using many gigabytes of input text and may include billions or trillions of parameters. In some embodiments, large language models impose a tradeoff. A large language model increases in power with the number of parameters and the amount of training data used to train the model. However, as the model parameters and input data increase in magnitude, the model's training cost, storage requirements, and required computing resources increase as well. Accordingly, the large language model may be implemented as a general-purpose model configured to generate arbitrary text. The text generation interface system210may serve as an interface between the client machines and the text generation modeling system270to support the use of the text generation modeling system270for performing complex, domain-specific tasks in fields such as law. That is, the text generation interface system210may be configured to perform one or more methods described herein. 
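The enforcement of message formatting and size limits by the text generation API274 can be pictured roughly as in the Python sketch below. The endpoint URL, the payload fields, the token limit, and the word-based token estimate are illustrative assumptions only and do not describe the actual API.

import json
import urllib.request

MAX_PROMPT_TOKENS = 8193  # assumed limit enforced by the text generation API

def send_to_text_generation_api(prompt, endpoint="https://example.invalid/generate"):
    # Approximate the token count by word count and reject prompts exceeding the assumed limit.
    if len(prompt.split()) > MAX_PROMPT_TOKENS:
        raise ValueError("Prompt exceeds the maximum input size enforced by the text generation API")

    # Package the prompt in a standardized message format and transmit it to the modeling system.
    request = urllib.request.Request(
        endpoint,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["completion"]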
According to various embodiments, the orchestrator230facilitates the implementation of one or more skills, such as the skills232through234. A skill may act as a collection of interfaces, prompts, actions, data, and/or metadata that collectively provide a type of functionality to the client machine. For instance, a skill may involve receiving information from a client machine, transmitting one or more requests to the text generation modeling system270, processing one or more response received form the text generation modeling system270, performing one or more searches, and the like. Skills are also referred to herein as text generation flows. In some embodiments, a skill may be associated with one or more prompts. For instance, the skill234is associated with the prompt templates236and238. A prompt template may include information such as instructions that may be provided to the text generation modeling system270. A prompt template may also include one or more fillable portions that may be filled based on information determined by the orchestrator230. For instance, a prompt template may be filled based on information received from a client machine, information returned by a search query, or another information source. In some implementations, the chunker240is configured to divide text into smaller portions. Dividing text into smaller portions may be needed at least in part to comply with one or more size limitations associated with the text. For instance, the text generation API274may impose a maximum size limit on prompts provided to the text generation model276. The chunker may be used to subdivide text included in a request from a client, retrieved from a document, returned in a search result, or received from any other source. According to various embodiments, the API interfaces250include one or more APIs for interacting with internal and/or external services. The model interface252may expose one or more functions for communicating with the text generation modeling system270. For example, the model interface252may provide access to functions such as transmitting requests to the text generation modeling system270, receiving responses from the text generation modeling system270, and the like. In some embodiments, the external search interface254may be used to search one or more external data sources such as information repositories that are generalizable to multiple parties. For instance, the external search interface254may expose an interface for searching legal case law and secondary sources. In some implementations, the internal search interface256may facilitate the searching of private documents. For instance, a client may upload or provide access to a set of private documents, which may then be indexed by the text generation interface system210. According to various embodiments, the chat interface258may facilitate text-based communication with the client machines. For instance, the chat interface258may support operations such as parsing chat messages, formulating responses to chat messages, identifying skills based on chat messages, and the like. In some configurations, the chat interface258may orchestrate text-based chat communication between a user at a client machine and the text generation model276, for instance via web sockets. In some embodiments, the query cache222may store queries such as testing queries sent to the text generation modeling system270. 
Then, the query cache222may be instructed to return a predetermined result to a query that has already been sent to the text generation modeling system270rather than sending the same query again. In some embodiments, the prompt testing utility226is configured to perform operations such as testing prompts created based on prompt templates against tests stored in the test repository224. In some embodiments, the communication interface212is configured to facilitate communications with the client machines and/or the text generation modeling system270via a network such as the internet. The scheduler242may be responsible for scheduling one or more tasks performed by the text generation interface system210. For instance, the scheduler may schedule requests for transmission to the text generation modeling system270. In some embodiments, the database system214is configured to store information determined based on natural language. For example, the database system214may be configured to store one or more database tables that include fields corresponding with information extracted from natural language documents. As another example, the database system214may be configured to store metadata information about documents based on information extracted from those documents. As yet another example, the database system214may be configured to store linkages between documents and document portions. According to various embodiments, the database system214may be configured using any of a variety of suitable database technologies. For instance, the database system214may be configured as a relational database system, a non-relational database system, or any other type of database system capable of supporting the storage and querying of information described herein. FIG.3illustrates a document parsing method300, performed in accordance with one or more embodiments. According to various embodiments, the method300may be performed on any suitable computing system. For instance, the method300may be performed on the text generation interface system230shown inFIG.2. The method300may be performed in order to convert a document into usable text while at the same time retaining metadata information about the text, such as the page, section, and/or document at which the text was located. A request to parse a document is received at302. In some embodiments, the request to parse a document may be generated when a document is identified for analysis. For example, as discussed herein, a document may be uploaded or identified by a client machine as part of communication with the text generation interface system230. As another example, a document may be returned as part of a search result. The document is converted to portable document format (PDF) or another suitable document format at304. In some embodiments, the document need only be converted to PDF if the document is not already in the PDF format. Alternatively, PDF conversion may be performed even on PDFs to ensure that PDFs are properly formatted. PDF conversion may be performed, for instance, by a suitable Python library or the like. For instance, PDF conversion may be performed with the Hyland library. Multipage pages are split into individual pages at306. In some implementations, multipage pages may be split into individual pages via a machine learning model. The machine learning model may be trained to group together portions of text on a multipage page. 
For instance, a caption page in a legal decision may include text in a column on the left that encompasses the parties, text in a column on the right that includes the case number, a title that follows lower on the page, and line numbering on the left. In such a configuration, the machine learning model may be trained to treat separately the text in the different columns, and to separate the text from the line numbers. The document title may be identified as a first page, with the left column identified as the second page and the right column identified as the third page. Optical character recognition is performed on individual pages or on the document as a whole at308. In some implementations, optical character recognition may be performed locally via a library. Alternatively, optical character recognition may be performed by an external service. For instance, documents or pages may be sent to a service such as Google Vision. Performing optical character recognition on individual pages may provide for increased throughout via parallelization. Individual pages are combined in order at310. In some implementations, combining pages in order may be needed if optical character recognition were applied to individual pages rather than to the document as a whole. Inappropriate text splits are identified and corrected at312. In some embodiments, inappropriate text splits include instances where a paragraph, sentence, word, or other textual unit was split across different pages. Such instances may be identified by, for example, determining whether the first textual unit in a page represents a new paragraph, sentence, word, or other unit, or if instead it represents the continuation of a textual unit from the previous page. When such a split is identified, the continuation of the textual unit may be excised from the page on which it is located and moved to the end of the previous page. Such an operation may be performed by, for instance, the Poppler library available in Python. Segmented JSON text is determined at314. In some embodiments, the segmented JSON text may include the text returned by the optical character recognition performed at operation308. In addition, the segmented JSON text may include additional information, such as one or more identifiers for the page, section, and/or document on which the text resides. The output of the segmented JSON may be further processed, for instance via the text sharding method500shown inFIG.5and/or the text chunking method600shown inFIG.6. FIG.4illustrates a text generation method400, performed in accordance with one or more embodiments. According to various embodiments, the method400may be performed on any suitable computing system. For instance, the method400may be performed on the text generation interface system230shown inFIG.2. The method400may be performed in order to identify and implement a text generation flow based on input text. A request from a client machine to generate a novel text portion is received at402. In some embodiments, the request may include a query portion. The query portion may include natural language text, one or more instructions in a query language, user input in some other format, or some combination thereof. For instance, the query portion may include an instruction to “write an email”, “summarize documents”, or “research case law”. In some embodiments, the request may include an input text portion. For example, the request may link to, upload, or otherwise identify documents. 
As another example, the request may characterize the task to be completed. For instance, the request may discuss the content of the desired email or other correspondence. The particular types of input text included in the request may depend in significant part on the type of request. Accordingly, many variations are possible. A text generation flow is determined at404. In some embodiments, the text generation flow may be explicitly indicated as part of the request received from the client machine. For instance, the client machine may select a particular text generation flow from a list. Alternatively, the text generation flow may be determined at least in part by analyzing the request received from the client machine. For example, the request may be analyzed to search for keywords or other indications that a particular text generation flow is desired. As another example, all or a portion of the request may be provided to a machine learning model to predict the requested text generation flow. In some configurations, a predicted text generation flow may be provided to the client machine for confirmation before proceeding. Input text is determined at406. In some embodiments, the input text may be determined by applying one or more text processing, search, or other operations based on the request received from the client machine. For example, the input text may be determined at least in part by retrieving one or more documents identified in or included with the request received from the client machine. As another example, the input text may be determined at least in part by applying one or more natural language processing techniques such as cleaning or tokenizing raw text. In some embodiments, determining input text may involve executing a search query. For example, a search of a database, set of documents, or other data source may be executed base at least in part on one or more search parameters determined based on a request received from a client machine. For instance, the request may identify one or more search terms and a set of documents to be searched using the one or more search terms. In some embodiments, determining input text may involve processing responses received from a text generation modeling system. For instance, all or a portion of the results from an initial request to summarizing a set of text portions may then be used to create a new set of more compressed input text, which may then be provided to the text generation modeling system for further summarization or other processing. One or more prompt templates are determined at408based on the input text and the text generation flow. As discussed with respect toFIG.2, different text generation flows may be associated with different prompt templates. Prompt templates may be selected from the prompt library based on the particular text generation flow. At410, one or more prompts based on the prompt templates are determined. In some embodiments, a prompt may be determined by supplementing and/or modifying a prompt template based on the input text. For instance, a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document. The prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. 
The added input text may identify information such as the correspondence recipient, source, topic, and discussion points. The one or more prompts are transmitted to a text generation modeling system at412. In some embodiments, the text generation modeling system may be implemented at a remote computing system. The text generation modeling system may be configured to implement a text generation model. The text generation modeling system may expose an application programming interface via a communication interface accessible via a network such as the internet. One or more text response messages are received from the remote computing system at414. According to various embodiments, the one or more text response messages include one or more novel text portions generated by a text generation model implemented at the remote computing system. The novel text portions may be generated based at least in part on the prompt received at the text generation modeling system, including the instructions and the input text. The one or more responses are parsed at416to produce a parsed response. In some embodiments, parsing the one or more responses may involve performing various types of processing operations. For example, in some systems a large language model may be configured to complete a prompt. Hence, a response message received from the large language model may include the instructions and/or the input text. Accordingly, the response message may be parsed to remove the instructions and/or the input text. In some implementations, parsing the one or more responses may involve combining text from different responses. For instance, a document may be divided into a number of portions, each of which is summarized by the large language model. The resulting summaries may then be combined to produce an overall summary of the document. A determination is made at418as to whether to provide a response to the client machine. In some embodiments, the determination made at418may depend on the process flow. For example, in some process flows, additional user input may be solicited by providing a response message determined based at least in part on one or more responses received from the text generation modeling system. As another example, in some process flows, a parsed response message may be used to produce an output message provided to the client machine. If a response is to be provided to the client machine, then a client response message including a novel text passage is transmitted to the client machine at420. In some embodiments, the client response message may be determined based in part on the text generation flow determined at404and in part based on the one or more text response messages received at414and parsed at416. A determination is made at422as to whether to generate an additional prompt. According to various embodiments, the determination as to whether to generate an additional prompt may be made based in part on the text generation flow determined at404and in part based on the one or more text response messages received at414and parsed at416. As a simple example, a text generation flow may involve an initial set of prompts to summarize a set of portions, and then another round of interaction with the text generation modeling system to produce a more compressed summary. According to various embodiments, the operations shown inFIG.4may be performed in an order different from that shown. Alternatively, or additionally, one or more operations may be omitted, and/or other operations may be performed.
For example, a text generation flow may involve one or more search queries executed outside the context of the text generation modeling system. As another example, a text generation flow may involve one or more processes for editing, cleaning, or otherwise altering text in a manner not discussed with respect toFIG.4. Various operations are possible. FIG.5illustrates a method500of sharding text, performed in accordance with one or more embodiments. According to various embodiments, the method500may be performed on any suitable computing system. For instance, the method500may be performed on the text generation interface system230shown inFIG.2. The method500may be performed in order to divide a body of text into potentially smaller units that fall beneath a designated size threshold, such as a size threshold imposed by an interface providing access to a large language model. For instance, a text generation modeling system implementing a large language model may specify a size threshold in terms of a number of tokens (e.g., words). As one example of such a threshold, a text generation modeling system may impose a limit of 8,193 tokens per query. In particular embodiments, a size threshold may be adjusted based on considerations apart from a threshold imposed by an external text generation modeling system. For instance, a text generation interface system may formulate a prompt that includes input text as well as metadata such as one or more instructions for a large language model. In addition, the output of the large language model may be included in the threshold. If the external text generation modeling system imposes a threshold (e.g., 8,193 tokens), the text generation interface system230may need to impose a somewhat lower threshold when dividing input text in order to account for the metadata included in the prompt and/or the response provided by the large language model. A request to divide text into one or more portions is received at502. According to various embodiments, the request may be received as part of the implementation of one or more of the workflows shown herein. The request may identify a body of text. The body of text may include one or more documents, search queries, instruction sets, search results, and/or any other suitable text. In some configurations, a collection of text elements may be received. For instance, a search query and a set of documents returned by the search query may be included in the text. In some implementations, text may be pre-divided into a number of different portions. Examples of divisions of text into portions may include, but are not limited to: lists of documents, documents, document sections, document pages, document paragraphs, and document sentences. Alternatively, or additionally, text may be divided into portions upon receipt at the text generation interface system230. For instance, text may be divided into a set of portions via a text chunker, document parser, or other natural language processing tool. A maximum text chunk size is identified at504. In some embodiments, the maximum text chunk size may be identified based on one or more configuration parameters. In some configurations, the maximum text size may be imposed by the text generation interface system230. Alternatively, or additionally, a size threshold may be imposed by an interface providing access to a large language model. As one example of a maximum text chunk size may be 100 kilobytes of text, 1 megabyte of text, 10 megabytes of text, or any other suitable chunk size. 
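The somewhat lower threshold described above may be computed along these lines; the reserved budgets for prompt instructions and model output are assumptions chosen only for illustration.

def effective_max_chunk_tokens(model_limit=8193, instruction_tokens=500, expected_output_tokens=1000):
    # Reserve room for the prompt metadata and the expected response before sizing input text chunks.
    budget = model_limit - instruction_tokens - expected_output_tokens
    if budget <= 0:
        raise ValueError("Prompt metadata and expected output leave no room for input text")
    return budget

# With the assumed values above, roughly 6,693 tokens remain for input text in each chunk.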
A portion of the text is selected at506. In some embodiments, as discussed herein, text may be pre-divided into text portion. Alternatively, or additionally, text may be divided into text portions as part of, or prior to, the operation of the method500. As still another possibility, text may not be divided into portions. In such a configuration, the initial portion of text that is selected may be the entirety of the text. Then, the identification of one or more updated text portions at512may result in the division of the text into one or more portions as part of the operation of the method500. A determination is made at508as to whether the length of the selected text portion exceeds the maximum text chunk size. In some embodiments, the determination may be made by computing a length associated with the selected text portion and then comparing it with the maximum text chunk size. The calculation of the length associated with the selected text portion may be performed in different ways, depending on how the maximum text chunk size is specified. For instance, the maximum text chunk size may be specified as a memory size (e.g., in kilobytes or megabytes), as a number of words, or in some other fashion. If it is determined that the length of the selected text portion exceeds the maximum text chunk size, then at510one or more domain-specific text chunking constraints are identified. In some embodiments, domain-specific text chunking constraints may be identified based on one or more pre-determined configuration parameters. For example, one domain-specific text chunking constraint may discourage division of a question and answer in a deposition transcript or other question/answer context. As another example, a domain-specific text chunking constraint may discourage splitting of a contract clause. As yet another example, a domain-specific text chunking constraint may discourage splitting of a minority and majority opinion in a legal opinion. An updated text portion that does not exceed the maximum text chunk size is identified at512. In some embodiments, the updated text portion may be determined by applying a more granular division of the text portion into small portions. For example, a document may be divided into sections, pages, or paragraphs. As another example, a document page or section may be divided into paragraphs. As another example, a paragraph may be divided into sentences. As still another example, a sentence may be divided into words. In particular embodiments, the updated text portion may be the sequentially first portion of the selected text portion that falls below the maximum text chunk size threshold identified at operation504. The text portion is assigned to a text chunk at514. In some embodiments, the text may be associated with a sequence of text chunks. The text portions selected at506and identified at512may be assigned to these text chunks, for instance in a sequential order. That is, text portions near to one another in the text itself may be assigned to the same text chunk where possible to reduce the number of divisions between semantically similar elements of the text. In particular embodiments, some attention may be paid to text divisions such as document, document section, paragraph, and/or sentence borders when assigning text portions to chunks. For instance, text portions belonging to the same document, document section, paragraph, and/or sentence may be grouped together when possible to ensure semantic continuity. 
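A simplified Python sketch of the sharding loop ofFIG.5follows. It measures portion length in characters and splits along paragraph, sentence, and word boundaries in turn; the actual method may measure size differently and may apply the domain-specific text chunking constraints discussed above, which are omitted here for brevity.

import re

def shard_text(text, max_chunk_chars):
    # A portion that already satisfies the size threshold is returned unchanged (operation 508).
    if len(text) <= max_chunk_chars:
        return [text]

    # Otherwise, try progressively more granular divisions: paragraphs, then sentences, then words.
    for pattern in (r"\n\s*\n", r"(?<=[.!?])\s+", r"\s+"):
        pieces = [piece for piece in re.split(pattern, text) if piece]
        if len(pieces) > 1:
            portions = []
            for piece in pieces:
                portions.extend(shard_text(piece, max_chunk_chars))
            return portions

    # A single indivisible run longer than the threshold is truncated as a last resort.
    return [text[:max_chunk_chars]]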
In particular embodiments, the method500may be performed in conjunction with the method600shown inFIG.6. In such a configuration, operation514may be omitted. Alternatively, the assignment of text portions into text chunks in operation514may be treated as provisional, subject to subsequent adjustment via the method600shown inFIG.6. In some implementations, the identification of an updated text portion may result in the creation of two or more new text portions as a consequence of the division. In this case, the updated text portion may be assigned to a text chunk at514, while the remainder portion or portions may be reserved for later selection at506. Alternatively, or additionally, if two or more of the text portions resulting from the division at512each fall below the maximum text chunk size, then each of these may be assigned to a text chunk or chunks at operation514. A determination is made at516as to whether to select an additional portion of the text. According to various embodiments, additional portions of the text may continue to be selected as long as additional portions are available, or until some other triggering condition is met. For example, the system may impose a maximum amount of text for a particular interaction. As another example, the amount of text may exceed a designated threshold, such as a cost threshold. FIG.6illustrates a text chunk determination method600, performed in accordance with one or more embodiments. According to various embodiments, the method600may be performed on any suitable computing system. For instance, the method600may be performed on the text generation interface system230shown inFIG.2. The method600may be performed in order to assign a set of text portions into text chunks. In some embodiments, the method600may be used to compress text portions into text chunks of smaller size. For instance, the method600may receive as an input a set of text portions divided into text chunks of highly variable sizes, and then produce as an output a division of the same text portions into the same number of text chunks, but with the maximum text chunk size being lower due to more even distribution of text portions across text chunks. A request is received at602to divide a set of text portions into one or more chunks. In some embodiments, the request may be automatically generated, for instance upon completion of the method500shown inFIG.5. The request may identify, for instance, a set of text portions to divide into text chunks. An initial maximum text chunk size is identified at604. In some embodiments, the initial maximum text chunk size may be identified in a manner similar to that for operation504shown inFIG.5. A text portion is selected for processing at606. In some embodiments, text portions may be selected sequentially. Sequential or nearly sequential ordering may ensure that semantically contiguous or similar text portions are often included within the same text chunk. A determination is made at608as to whether the text portion fits into the latest text chunk. In some embodiments, text portions may be processed via the method500shown inFIG.5to ensure that each text portion is smaller than the maximum chunk size. However, a text chunk may already include one or more text portions added to the text chunk in a previous iteration. In the event that the text portion fits into the last text chunk size, the text portion is inserted into the last text chunk at610. 
If instead the text portion is the first to be processed, or the text portion does not fit into the last text chunk, then the text portion is inserted into a new text chunk at612. The new chunk may be created with a maximum size in accordance with the maximum text chunk size, which may be the initial maximum text chunk size upon the first iteration or the reduced maximum text chunk size upon subsequent iterations. A determination is made at614as to whether to select an additional text portion for processing. In some embodiments, additional text portions may be selected until all text portions have been added to a respective text chunk. A determination is made at616as to whether the number of text chunks has increased relative to the previous maximum text chunk size. If the number of text chunks increases, then a reduced maximum text chunk size is determined at618, and the text portions are again assigned into chunks in operations606through614. According to various embodiments, for the first iteration, the number of chunks will not have increased because there was no previous assignment of text portions into text chunks. However, for the second and subsequent iterations, reducing the maximum text chunk size at618may cause the number of text chunks needed to hold the text portions to increase because the reduced maximum text chunk size may cause a text portion to no longer fit in a chunk and instead to spill over to the next chunk. In some embodiments, the first increase of the number of text chunks may cause the termination of the method at operation620. Alternatively, a different terminating criterion may be met. For instance, an increase in the number of text chunks may be compared with the reduction in text chunk size to produce a ratio, and additional reductions in text chunk size may continue to be imposed so long as the ratio falls below a designated threshold. In some embodiments, the reduced text chunk size may be determined at618in any of various ways. For example, the text chunk size may be reduced by a designated amount (e.g., 10 words, 5 kilobytes, etc.). As another example, the text chunk size may be reduced by a designated percentage (e.g., 1%, 5%, etc.). When it is determined that the number of text chunks has unacceptably increased, then at620the previous maximum text chunk size and assignment of text portions into chunks are returned. In this way, the number of text chunks may be limited while at the same time dividing text portions more equally into text chunks. The number of text chunks may be strictly capped at the input value, or may be allowed to increase to some degree if a sufficiently improved division of text portions into text chunks is achieved. FIG.7illustrates one example of a computing device700, configured in accordance with one or more embodiments. According to various embodiments, a system700suitable for implementing embodiments described herein includes a processor701, a memory module703, a storage device705, an interface711, and a bus715(e.g., a PCI bus or other interconnection fabric.) System700may operate as a variety of devices such as an application server, a database server, or any other device or service described herein. Although a particular configuration is described, a variety of alternative configurations are possible. The processor701may perform operations such as those described herein. Instructions for performing such operations may be embodied in the memory703, on one or more non-transitory computer readable media, or on some other storage device.
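Returning to the text chunk determination method 600 of FIG.6, the following is a minimal sketch of the rebalancing loop of operations 604 through 620, again assuming word-count-based sizes. The pack and rebalance names, and the five percent reduction per iteration, are hypothetical choices rather than requirements of the method.

    def pack(portions, max_words):
        # Operations 606-614: place each portion in the latest chunk if it
        # fits, otherwise start a new chunk.
        chunks, lengths = [], []
        for portion in portions:
            words = len(portion.split())
            if chunks and lengths[-1] + words <= max_words:
                chunks[-1].append(portion)
                lengths[-1] += words
            else:
                chunks.append([portion])
                lengths.append(words)
        return chunks

    def rebalance(portions, initial_max_words, reduction=0.05):
        # Operations 616-620: keep reducing the maximum chunk size until the
        # number of chunks increases, then return the previous assignment
        # and the previous maximum chunk size.
        best_chunks = pack(portions, initial_max_words)
        best_max = initial_max_words
        max_words = initial_max_words
        while True:
            max_words = int(max_words * (1 - reduction))  # operation 618
            if max_words <= 0:
                return best_chunks, best_max
            candidate = pack(portions, max_words)
            if len(candidate) > len(best_chunks):
                return best_chunks, best_max  # operation 620
            best_chunks, best_max = candidate, max_words

As noted above, a more permissive variant could compare the increase in the number of chunks against the reduction in chunk size and continue reducing while that ratio remains below a designated threshold.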
Various specially configured devices can also be used in place of or in addition to the processor701. The interface711may be configured to send and receive data packets over a network. Examples of supported interfaces include, but are not limited to: Ethernet, fast Ethernet, Gigabit Ethernet, frame relay, cable, digital subscriber line (DSL), token ring, Asynchronous Transfer Mode (ATM), High-Speed Serial Interface (HSSI), and Fiber Distributed Data Interface (FDDI). These interfaces may include ports appropriate for communication with the appropriate media. They may also include an independent processor and/or volatile RAM. A computer system or computing device may include or communicate with a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user. FIG.8illustrates a hallucination detection method800, performed in accordance with one or more embodiments. The method800may be performed by the text generation interface system210shown inFIG.2. In some embodiments, the method800may be performed in order to determine whether novel text generated by a text generation modeling system includes one or more hallucinations. Generative text systems sometimes generate text that includes inaccurate claims. For example, in the legal sphere, a request to summarize a set of judicial opinions about a point of law may result in a summary text that includes a citation to a non-existent opinion. A request is received at802to identify one or more hallucinations in novel text generated by a text generation model. In some embodiments, the request may be received as part of one or more methods shown herein. For example, the method800may be performed to evaluate a response returned by the text generation modeling system. When employed in this way, the method800may be used to prompt the system to revise the response. Alternatively, or additionally, the method800may be used to prompt the system to generate a new response, to flag the error to a systems administrator, and/or to inform a response recipient of a potentially inaccurate response. In some implementations, the request may be received as part of a training and/or testing procedure. For instance, one or more prompts may be tested by the prompt testing utility226against one or more tests stored in the test repository224. A test result may be evaluated using the method800to determine whether a prompt constructed from a prompt template being tested resulted in the generation of a hallucination, which may be treated as a test failure. One or more factual assertions in the novel text are identified at804. In some embodiments, the one or more factual assertions may be identified by transmitting a prompt to the text generation modeling system. For instance, the novel text may be included in a prompt requesting that the text generation modeling system identify factual claims in the novel text. The resulting completed prompt may be parsed to identify the one or more factual assertions. A factual assertion is selected for analysis. Factual assertions identified at804may be analyzed in sequence, in parallel, or in any suitable order. One or more search terms associated with the factual assertion are determined at808. In some embodiments, one or more search terms may be returned by the text generation modeling system at804. Alternatively, or additionally, one or more search terms may be determined based on a separate request sent to the text generation modeling system for the factual assertion being analyzed. 
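A minimal sketch of the assertion identification and search term steps described above is shown below, assuming the text generation modeling system is reachable through a complete function that accepts a prompt string and returns the completed text. The prompt wording, the complete helper, and the JSON schema are hypothetical rather than part of the method 800.

    import json

    def identify_assertions(novel_text, complete):
        # Operations 804-808: ask the model to list factual assertions and,
        # for each assertion, a set of search terms, then parse the JSON
        # included in the completion.
        prompt = (
            "List every factual assertion made in the text below. Respond "
            "with a JSON array of objects, each with keys 'assertion' and "
            "'search_terms' (an array of short search phrases that could "
            "be used to verify the assertion).\n\n" + novel_text
        )
        completion = complete(prompt)
        return json.loads(completion)

Each returned assertion and its search terms can then be checked against search results as described below.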
A search query to identify one or more search results based on the one or more search terms is executed at810. According to various embodiments, one or more searches may be executed against any suitable database. Such databases may include, but are not limited to: public sources such as the internet, internal document databases, and external document databases. The one or more search results are summarized at812. In some embodiments, summarizing the one or more search results may involve, for instance, dividing documents into chunks and transmitting the one or more chunks to the text generation modeling system within summarization prompts. At814, the factual assertion is evaluated against the one or more search results. In some embodiments, evaluating the factual assertion may involve transmitting to the text generation modeling system a prompt that includes a request to evaluate the factual assertion, information characterizing the factual assertion, and a summary of the one or more search results determined as discussed at812. A determination is made at816as to whether the factual assertion is accurate. In some embodiments, the determination may be made by parsing the response returned by the text generation modeling system at814. For instance, the text generation modeling system may complete the prompt by indicating whether the factual assertion is true, false, or uncertain based on the provided summary of search results. If it is determined that the factual assertion is inaccurate, then at818the factual assertion is identified as a hallucination. In some embodiments, identifying the factual assertion as a hallucination may cause one or more consequences in an encompassing process flow. For example, in a testing phase, the detection of a hallucination may cause the test to fail. As another example, in a production phase, the detection of a hallucination may cause the system to initiate a flow to revise the novel text to remove the hallucination. FIG.9illustrates an example of a method900for generating a document summary, performed in accordance with one or more embodiments. The method900may be performed at the text generation system200in order to summarize one or more documents provided or identified by a client machine. In some configurations, the method900may be performed to summarize one or more documents returned by a search query. One or more documents are received at902. In some embodiments, a document may be uploaded by the client machine. Alternatively, a document may be identified by the client machine, for instance via a link. As still another possibility, a document may be returned in a search result responsive to a query provided by a client machine. A single summary request may include documents identified and provided in various ways. In some embodiments, the one or more documents may be received along with user input. The user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input may be used to generate a summary input message904, which is sent to the text generation interface system210. In some implementations, the summary input message904may be received by the text generation interface system210via a web socket. Alternatively, a different form of communication may be used, for instance an asynchronous mode of communication. 
At906, the text generation interface system210determines one or more summarize prompt908based on the summary request message904. In some embodiments, the determination of the summarize prompt may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods500and600shown inFIG.5andFIG.6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text. Then, each chunk may be used to create a respective summarize prompt for summarizing the text in the chunk. For instance, text may be inserted into a template via a tool such as Jinja2. The one or more summarize prompts908may include one or more instructions for implementation by the text generation modeling system270. Additionally, the one or more summarize prompts each includes a respective text chunk910determined based on the summary request message904. The one or more summarize prompts908are then sent to the text generation modeling system270via one or more summarize prompt messages912. The text generation modeling system270generates one or more raw summaries at914, which are then sent back to the text generation interface system210via one or more summarize response messages at916. The one or more summarize response messages are parsed at918to produce one or more parsed summary responses at920. In some embodiments, the one or more summary response messages received at916may include ancillary information such as all or a portion of the summarize prompt messages sent at912. Accordingly, parsing the summarize response messages may involve performing operations such as separating the newly generated summaries from the ancillary information included in the one or more summarize response messages. An example of a prompt template used to instruct a text generation system to summarize a text is shown below:You are a highly sophisticated legal AI. A lawyer has submitted questions that need answers.Below is a portion of a longer document that may be responsive to the questions:$$DOCUMENT$${%—for page in page_list—%}$$PAGE {{page[“page”]}}$${{page[“text”]}}$$/PAGE$${%—endfor—%}$$/DOCUMENT$$ We would like you to perform two tasks that will help the lawyer answer the questions. Each task should be performed completely independently, so that the lawyer can compare the results. Extractive Task The purpose of this task is not to answer the questions, but to find any passages in the document that will help the lawyer answer them. For each question, perform the following steps:1. Extract verbatim as many passages from the document (sentences, sentence fragments, or phrases) as possible that could be useful in answering the question. There is no limit on the number of passages you can extract, so more is better. Don't worry if the passages are repetitive; we need every single one you can find.If the question asks for a list of things or the number of times something occurred, include a passage for every instance that appears in the document2. If you extracted any passages, assign each one a score from 1 to 5, representing how the passage relates to the question:5 (complete answer)4 (one piece of a multipart answer)3 (relevant definition or fact)2 (useful context)1 (marginally related) Abstractive Task The purpose of this task is to compose an answer to each question. Follow these instructions:Base the answer only on the information contained in the document, and no extraneous information. 
If a direct answer cannot be derived explicitly from the document, do not answer.Answer completely, fully, and precisely.Interpret each question as asking to provide a comprehensive list of every item instead of only a few examples or notable instances. Never summarize or omit information from the document unless the question explicitly asks for that.Answer based on the full text, not just a portion of it.For each and every question, include verbatim quotes from the text (in quotation marks) in the answer. If the quote is altered in any way from the original text, use ellipsis, brackets, or [sic] for minor typos.Be exact in your answer. Check every letter.There is no limit on the length of your answer, and more is betterCompose a full answer to each question; even if the answer is also contained in a response to another question, still include it in each answerHere are the questions:$$QUESTIONS$${{question_str}}$$/QUESTIONS$$Return your responses as a well-formed JSON array of objects, with each object having keys of:‘id’ (string) The three-digit ID associated with the Question‘passages’ (array) a JSON array of the verbatim passages you extracted, or else an empty array. Format each item as a JSON object with keys of:‘passage’ (string)‘score’ (int) the relevancy score you assigned the passage‘page’ (int) the number assigned to the page in which the snippet appears‘answer’ (string) the answer you drafted, or else “N/A”Escape any internal quotation marks or newlines using \” or \n[{“id”: <id>, “passages”: [{“passage”: <passage>, “score”: <score>, “page”: <page>}, . . . ]|[ ], “answer”: <text>|“N/A”}, . . . ]Only valid JSON; check to make sure it parses, and that quotes within quotes are escaped or turned to single quotes, and don't forget the ‘,’ delimiters.<|endofprompt|>Here is the JSON array and nothing else: According to various embodiments, the one or more parsed summary responses920may be processed in any of various ways. In some embodiments, the one or more parsed summary response messages920may be concatenated into a summary and provided to the client machine via a summary message922. The summary may then be presented as output on the client machine at924. Presenting the summary as output may involve, for instance, presenting the summary in a user interface, outputting the summary via a chat interface, and/or storing the summary in a file. In some embodiments, the one or more parsed summary responses920may be used as input to generate a consolidated summary. For example, a consolidated summary may be generated if the aggregate size of the parsed summary responses920exceeds or falls below a designated threshold. As another example, a consolidated summary may be generated if the client machine provides an instruction to generate a consolidated summary, for instance after receiving the summary message at922. In some embodiments, generating a consolidated summary may involve determining a consolidation prompt at926. The consolidation prompt may be determined by concatenating the parsed summary responses at920and including the concatenation result in a consolidation prompt template. In the event that the concatenated parsed summary responses are too long for a single chunk, then more than one consolidation prompt may be generated, for instance by dividing the parsed summary response920across different consolidation prompts. 
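The following is a minimal sketch of operation 926, assuming the parsed summary responses are plain strings, that chunk size is measured in words, and that the consolidation prompt template is a simple Python format string. The template wording and the CHUNK_WORD_LIMIT value are hypothetical.

    CONSOLIDATION_TEMPLATE = (
        "Combine the following partial summaries into a single, "
        "non-redundant summary:\n\n{summaries}"
    )
    CHUNK_WORD_LIMIT = 3000  # assumed maximum chunk size in words

    def consolidation_prompts(parsed_summaries):
        # Concatenate parsed summary responses into consolidation prompts,
        # starting a new prompt whenever the next summary would exceed the
        # chunk limit.
        prompts, batch, batch_words = [], [], 0
        for summary in parsed_summaries:
            words = len(summary.split())
            if batch and batch_words + words > CHUNK_WORD_LIMIT:
                prompts.append(
                    CONSOLIDATION_TEMPLATE.format(summaries="\n\n".join(batch))
                )
                batch, batch_words = [], 0
            batch.append(summary)
            batch_words += words
        if batch:
            prompts.append(
                CONSOLIDATION_TEMPLATE.format(summaries="\n\n".join(batch))
            )
        return prompts

When more than one consolidation prompt is produced, the parsed consolidation responses may themselves be consolidated in a further pass, as described below.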
In some implementations, one or more consolidation prompt messages including the one or more consolidation prompts are sent to the text generation modeling system270at928. The text generation modeling system270then generates a raw consolidation of the parsed summary responses920and provides the novel text generated as a result via one or more consolidation response messages sent at932. According to various embodiments, the one or more consolidation response messages are parsed at934. For instance, if the one or more consolidation response messages include two or more consolidation response messages, each of the different messages may be separately parsed, and the parsed results concatenated to produce a consolidated summary. The consolidated summary is provided to the client machine at936via a consolidation message. The client machine may then present the consolidated summary as consolidation output at938. In the event that further consolidation is required, operations920-934may be repeated. FIG.10illustrates a database system updating method1000, performed in accordance with one or more embodiments. The method1000may be performed at a text generation system such as the system200shown inFIG.2. A request is received at1002to update a database system based on one or more natural language documents. In some embodiments, the request may be received via a chat interface. Alternatively, the request may be received in some other way, such as via an API request. The request may be generated automatically or based on user input, and may be received from a client machine. According to various embodiments, the natural language documents may be identified in various ways. For example, documents may be uploaded from a client machine, identified based on a search query, retrieved from a repository based on one or more document identifiers, or identified in any other suitable way. Clauses included in the natural language documents are identified at1004. In some embodiments, each clause may include some portion of a natural language document. For instance, a clause may include a single phrase, a collection of phrases, a single sentence, a collection of sentences, a section, a page, one or more pages, or any other unit of analysis. According to various embodiments, clauses may be identified based on one or more natural language processing techniques. For instance, a document may be tokenized into words. Words may then be grouped into phrases and/or sentences based on indicators such as punctuation and semantic content. Sentences may be grouped into sections such as paragraphs or other units. Clauses may then be identified based on the structure. In particular embodiments, the identification of clauses may involve domain-specific logic. For instance, the identification of clauses in a general-purpose non-fiction text may be different from the identification of clauses in a legal contract. Accordingly, the text generation interface system may store domain-specific instructions for identifying clauses in one or more contexts. One or more data fields associated with the one or more natural language documents are identified at1006. In some embodiments, one or more data fields may be identified based on a query. Additional details regarding query parsing are discussed with respect to the method1100shown inFIG.11.
In some implementations, one or more data fields may be identified based on the structure of a table in a database system or other such configuration parameters. For instance, if metadata for a set of documents is intended to be combined with metadata for other documents already reflected in one or more database tables, then fields associated with those database tables may be identified so as to identify values corresponding to the existing table structure. One or more clauses are selected for analysis at1008. A text chunk is determined at1004based on the natural language documents. In some embodiments, the one or more text chunks may be determined by dividing the clauses identified at1004into chunks based on a chunk size. Examples of techniques for determining text chunks are discussed with respect to the method600shown inFIG.6. In some contexts, a text chunk may be limited to text from a single document. Alternatively, a single text chunk may include text from more than one document. An input metadata extraction prompt is determined at1010based on the text chunk and a clause splitting prompt template. In some embodiments, the input metadata extraction prompt may be determined by supplementing and/or modifying the prompt template based on the one or more clauses and the one or more data fields. For instance, the one or more clauses and a description of the one or more data fields may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to identify values for the one or more data fields based on the one or more clauses. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a prompt template for identifying information and clauses relevant for answering a query is as follows:Purpose: Find information in a contract that is highly relevant to a question.The following Clauses are from a {{context}}For each of the Contract Clauses below, decide whether the Contract Clause contains language that is necessary or highly relevant to answer the question. If it does, provide the IDs of the clauses that contain the information necessary or highly relevant to answer the question.A few guidelines regarding what constitutes relevance:It will often be the case that nothing in the Contract Clauses answers the question. This is not a problem. When this happens, simply respond by saying “none” (all lower case)Sometimes, multiple clauses will contain information highly relevant or necessary to answer the question.
If that happens, please list all such relevant clauses in your answer.If there is/are Clause(s) that only partially answer the question, include them in your answer.If the answer to a question can be inferred from a Clause, include that Clause in your answer list, even if the Clause does not directly answer the question.If a Clause contains information that could potentially help answer the question if it were combined with other information not seen here, include this Clause in your answer list.If a question is asking whether something is present or missing, a Clause closely related to the subject of the question that is missing the element is still helpful in answering the question.If a header Clause is relevant, then list all the Clauses under that header as relevant as well.Question: {{query.text}}Contract Clauses XML:<contract_clauses>{% for contract_section in paragraphs %}<section><id>CC{{loop.index0}}</id><text>{{contract_section.text}}</text></section>{% endfor %}</contract_clauses> Give your answer in the following format:<question_comprehension>[restate what the Question is trying to ask in clear terms to show that you understood the question]</question_comprehension><what_to_look_for>[briefly summarize what sorts of clauses you should be looking for to answer the question, but never refer to a specific clause ID here. It is very important that you not include the clause IDs in this section]</what_to_look_for><clauses>[if there are Clauses containing information highly relevant or necessary to answer the question, provide your answer as a pipe-character-separated list of the clause ID's here, for example: CC1|CC2|CC5|CC9</clauses>Then give a very brief explanation of your answer.<|endofprompt|>{% if question_comprehension %}<question_comprehension>{{question_comprehension}}</question_comprehension><what_to_look_for>{{what_to_look_for}}</what_to_look_for><clauses>{% else %}<question_comprehension>{%-endif %} A completed metadata extraction prompt is determined at1012based on a request sent to a remote text generation modeling system. In some embodiments, the completed metadata extraction prompt may be determined by sending the input metadata extraction prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the prompt, after which it may be sent back to the text generation interface system. Clause-level field values corresponding to the identified data fields are determined at1014. In some embodiments, the clause-level field values may be determined by parsing the completed metadata extraction prompt. For instance, structured text such as JSON included in the completed metadata extraction prompt may be parsed to identify data values corresponding with data fields for clauses included in the metadata extraction prompt. A determination is made at1016as to whether to determine an additional one or more clauses for analysis. In some implementations, additional clauses may continue to be selected for analysis until all of the natural language documents have been processed. Document-level field values are determined at1018based on the clause-level field values. In some embodiments, the document-level field values may be determined by first identifying and then aggregating clause-level field values for a given document. For example, in the legal context, a data field may indicate whether a contract includes an indemnification clause. 
One or more metadata extraction prompts may be used to identify, for each clause in the document, whether that clause is an indemnification clause. Although most clauses in the document will not be an indemnification clause, the data field value for the document as a whole will be true if even one of the clauses for the document is identified as an indemnification clause. As another example, in the legal context, a data field may indicate whether a contract involves an exchange valued at more than a threshold value. In this context, one or more metadata extraction prompts may be used to identify the exchange value, if any, associated with each clause in the document. The data field value for the document may then be determined by identifying the maximum exchange value determined for any of the clauses. In particular embodiments, determining the document-level field values may involve domain-specific logic. This domain-specific logic may be reflected in one or more configuration parameters and/or subroutines included in the text generation system. A database system is updated at1020to include one or more entries identifying the field values. In some embodiments, the database system may maintain one or more tables at the document level, as well as one or more tables at the clause level. The database system may link documents with clauses. The text of the clauses may be included within the database system itself and/or may be identified by location within the text of the associated document. The one or more tables may include the field values to facilitate searching the documents and/or clauses on the basis of the field values. Additional details regarding the searching of natural language documents based on data field values are discussed with respect to the method1200shown inFIG.12. According to various embodiments, the operations discussed inFIG.10may be performed in various orders, and in sequence or in parallel. For instance, a set of prompts may be created in one phase and then sent to the text generation modeling system in a subsequent phase. FIG.11illustrates a database system query and filter determination method1100, performed in accordance with one or more embodiments. The method1100may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1100may be performed at the text generation interface system210. A request to query a database system is received at1102. In some embodiments, the request may be received as part of a chat flow. Alternatively, the request may be received via an API call. In either case, the request may be received from a client machine in communication with the text generation interface system210via the internet. The request may, for instance, include a natural language query to identify, count, summarize, or other interact with documents that meet one or more criteria. For instance, the request may include a natural language query to determine the proportion of contracts for the purchase of goods or services valued over $100,000 signed by parties within California in the last 10 years where the contract includes a mandatory arbitration clause. A query and filter comprehension prompt is determined at1104based on the request. In some embodiments, the query and filter comprehension prompt may be determined by combining some or all of the query received with the request at1102with a query and filter comprehension prompt template. 
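Returning to the determination of document-level field values at operation 1018, the following is a minimal sketch of aggregating clause-level values for the two example fields discussed above: a boolean indemnification field and a numeric exchange value. The field names and aggregation rules shown here are hypothetical stand-ins for the domain-specific logic described above.

    def document_level_values(clause_values):
        # clause_values: one dict per clause, parsed at operation 1014 from
        # the completed metadata extraction prompts.
        return {
            "has_indemnification_clause": any(
                clause.get("has_indemnification_clause", False)
                for clause in clause_values
            ),
            "exchange_value": max(
                (clause.get("exchange_value") or 0 for clause in clause_values),
                default=0,
            ),
        }

The resulting document-level values, together with the clause-level values, may then be written to the database system as described with respect to operation 1020.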
The query and filter comprehension prompt template may include one or more fillable elements that may be filled with text, such as “{{query.text}}”. The query and filter comprehension prompt template may also include an instruction to the text generation modeling system to restate the query and filter request included in the query and filter comprehension prompt template. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a template for generating a summary of a query is as follows:Purpose: Find information in a contract that is highly relevant to a question.Question: {{query.text}}Give your answer in the following format:<question_comprehension>[restate what the Question is trying to ask in clear terms to show that you understood the question]</question_comprehension>Then give a very brief explanation of your answer.<|endofprompt|><question_comprehension> A query and filter description is determined at1106based on the prompt. In some embodiments, the query and filter description may be determined by transmitting the query and filter comprehension prompt to a remote text generation modeling system, for instance via an API call. The remote text generation modeling system may then complete the prompt and return it to the text generation interface system. The text generation interface system may extract from the completed prompt a description of the query and filter request included in the prompt. The query and filter description is transmitted for feedback at1108. In some embodiments, the query and filter description may be transmitted to a client machine, such as the client responsible for generating the request received at1102. For instance, the query and filter description may be transmitted for feedback via a chat session or response to an API call. A determination is made at1110as to whether to receive an updated request to query the database system. In some embodiments, the determination may be made based at least in part on user input. For instance, a user may review the description and provide feedback as to whether the description produced by the text generation modeling system accurately characterizes the user's initial intent when formulating the query. The user may then provide feedback either accepting or updating the query requested. If it is determined to receive an updated request to query the database system, then an updated request to query the database system is received at1102. The updated request may then be re-evaluated. In this way, the text generation system may ensure that the text generation modeling system more accurately interprets the user's intent when formulating the query. If instead it is determined not to receive an updated request to query the database system, then a query generation prompt is determined at1112. In some embodiments, the query generation prompt may be determined by combining some or all of the query received with the request at1102and/or the query and filter description determined at1106with a query generation prompt template. The query generation prompt template may include one or more fillable elements that may be filled with text, such as “{{query text}}”. 
The query generation prompt template may also include an instruction to the text generation modeling system to determine one or more query and/or filter parameters based on the query generation prompt. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. In particular embodiments, a query generation prompt may be used to generate multiple queries, each of which may be executed against a suitable database. An example of a prompt template for generating a query is as follows:We are generating queries for a search engine given a user's original query. The search engine output must follow a specific output format which we will explain to you soon. The search engine, called AllSearch, can search with two different modes, “parallel” (aka Parallel Search) and “kw” (aka Keyword Searches).Parallel Searches are vector-based searches. This means that input queries must resemble full sentences.The full sentences are encoded as dense vectors and used to retrieve the K nearest neighbors in the index's vector space.For example, if a user wanted to know if refusal to wear a mask at work constituted employment discrimination, a good query for parallel search would be:“McVader's termination of Skywalker for refusal to wear a mask cannot be construed as discriminatory.”If the user provided a name, then it's good to use the name, but if no name is given, it's ok to make one up (in this case “McVader”).Keyword searches are bag-of-words based retrieval searches that use ranking methods such as BM-25 or TF-IDF.In these searches, it's important for queries to make exact word or phrase matches in order to get relevant results.A good query would use single words and/or short phrases with words that we would guess are likely to appear in the search corpus.For example, if the user who wanted to know if refusal to wear a mask at work constituted employment discrimination was making a keyword search, good queries would include:apparel workplace discriminationemployee discriminationmask mandates workplacereligious exemption employment lawand so forth.Finally, Keyword Searches can use terms and connectors. The purpose of using terms and connectors is less so to answer a question, but to help someone search over a corpus of documents that may be responsive to the query. Turn the user's question into three terms-and-connectors searches, including using proximity searching, “OR” and “AND” parameters, root expansion (using !), and parentheses using the following guidelines:The terms and connectors search terms should cover all the substantive aspects of the questionExamples of good terms-and-connectors searches: ‘(reject! or refus!)/s settl!/s fail!/s mitigat!’, ‘((sexual/2 (assault! OR harass! 
OR misconduct))/p “first amendment”) AND (school OR university OR college)’Given the user's original query: “{{query_text}}”,{% if query_comprehension_text %} And given this supplemental information about the query that the user approved: {{query_comprehension_text}},{% endif %}Generate several XML documents (bounded by the ‘<q>’ tag), with each document representing a search query.The documents must conform to the following schema:<q><t>[string—the query text that you generate]</t><m>[the mode, must be exactly one of “kw” or “parallel”]</m></q>You must provide at least two of each: parallel search, keyword search without terms and connectors, and keyword search with terms and connectors.Provide three more queries of any any mode.<|endofprompt|>Here are the XML documents and nothing else: The query generation prompt is transmitted to a text generation modeling system at1114. Then, a query generation prompt response message is received at1116. According to various embodiments, the query generation prompt may be transmitted to the text generation modeling system via an API request. The text generation modeling system may then complete the prompt via a text generation model implemented at the text generation modeling system, and send a response that includes the completed prompt. A database query is determined at1118based on the query generation prompt response message. In some embodiments, determining the database query may involve extracting one or more database query parameters from the query generation response message. For instance, the query generation response message may include a JSON portion that encodes a list of database query parameters. The database query parameters may then be combined with a query template to generate the database query. Alternatively, the query generation prompt response message may include a fully formed database query. According to various embodiments, the particular operations involved in determining the database query may depend in part on the type of database system employed. For example, the query structure may depend on whether the database system is a relational database system or a nonrelational database system. As another example, the query structure may depend on the structure of tables within the database system. Additional details regarding the querying of the database system are discussed with respect to the method1200shown inFIG.12. At1120, a text filter is determined based on the query generation prompt response message. In some embodiments, the text filter may include any suitable information for providing to a text generation modeling system for filtering results returned by the database query determined at1118. For example, the text filter may include one or more qualitative restrictions capable of being evaluated by the text generation modeling system. As another example, the text filter may include one or more restrictions that are not reflected by information stored in the database system. Additional details regarding the filtering of results returned by the database system are discussed with respect to the method1200shown inFIG.12. FIG.12illustrates a database system query and filter execution method1200, performed in accordance with one or more embodiments. The method1200may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1100may be performed at the text generation interface system210. A request to execute a database system is received at1102. 
In some embodiments, the request may be generated automatically, for instance after a database query is generated as discussed with respect to operation1118shown inFIG.11. The request may be generated as part of a chat flow or based on an API request. In either case, the request may be generated based on interaction with a client machine in communication with the text generation interface system210via the internet. A database system query is identified at1204. According to various embodiments, the database system query may be determined as discussed with respect to operation1118shown inFIG.11. One or more query response clauses and associated documents are determined at1206. In some embodiments, the one or more query response clauses and associated documents may be determined by executing the query identified at1204against the database system. As discussed herein, for instance with respect toFIG.10, the database system may store metadata characterizing documents and portions of text from documents. Executing the query may result in the database system returning one or more documents, document portions, and/or identifiers that identify documents and/or document portions. One or more relevance prompts are determined at1208based on the one or more query response clauses. In some embodiments, a relevance prompt may be determined by combining some or all of the query results received at1206with a relevance prompt template. The relevance prompt template may include one or more fillable elements that may be filled with text. One or more of the fillable elements may be filled with some or all of the query results received at1206. Additionally, one or more of the fillable elements may be filled with relevance information. The relevance information may include some or all of the text filter determined at1120. Alternatively, or additionally, the relevance information may include some or all of the query received at1102, the query and filter description determined at1106, and/or the database query determined at1118. In some embodiments, the relevance prompt template may also include an instruction to the text generation modeling system to evaluate and/or rank the included search result or results for relevance against the relevance information. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a relevance prompt template is as follows:Evaluate whether these documents are relevant to this research request or query:“{{text}}”$$DOCUMENTS$${{documents}}$$/DOCUMENTS$$Only respond with relevant documents. In order to be deemed relevant, a document must directly answer the request or query. A document should also be considered relevant if it reaches a conclusion in opposition to the research request.If there are no relevant documents, do not include any in your response.Assign a relevance score to each document, judging its relevance to the research request or query: “{{text}}”.
The score should correlate to these values:5—the document is directly on-point (i.e., it precisely responds to every aspect of the query or request, even if it is in opposition to the request, and not a similar but different issue; it fully and conclusively settles the question raised in the request either in favor or against the intention of the request, if any)4—the document may provide a useful analogy to help answer the request, but is not directly responsive3—the document is roughly in the same topical area as the request, but otherwise not responsive2—the document might have something to do with the request, but there is no indication that it does in the text provided1—the document is in no way responsive to the requestReturn a JSON array of objects, each object representing a relevant case, ordered with the most relevant case first. Each object in the array will have the keys:\‘result_id\’—string, the result ID\‘reason_relevant\’—string, a description of how the document addresses the research request or query: “{user_request}”. In drafting this response, only draw from the excerpted language of the document; do not include extraneous information.\‘relevance_score\’—number, between 1-5, of how relevant the document is to the research request or query: “{user_request}”\‘quotes\’—array of strings. For each document, quote the language from the document that addresses the request. In finding these quotes, only draw from the excerpted language; do not include extraneous information. Do not put additional quotation marks around each quote beyond the quotation marks required to make valid JSON.Only valid JSON. Quotation marks within strings must be escaped with a backslash (\‘\\\’). Examples for reason_relevant: \‘“The concept of \\“equitable tolling\\” applies in this case.”\’, \‘“The case overturns a lower court decision that found a state abortion restriction unconstitutional based on Roe v. Wade and Casey, and argues that the viability rule from those cases is not the \\“central holding.\\” This case calls into question the continued validity of Roe v. Wade.”\’If there are no relevant documents, respond with an empty array.<|endofprompt|>Here's the JSON: In some implementations, more than one relevance prompt may be determined. For instance, if many query response clauses are determined at1206, then these query responses may be divided into groups for the purpose of relevancy analysis. The size of the groups may be determined based on a chunk threshold. Additional details regarding the division of text into chunks are discussed with respect to the method600shown inFIG.6. A subset of the query response clauses that meet a relevancy threshold based on communication with a text generation modeling system are identified at1210. In some embodiments, the subset of the query response clauses may be identified by transmitting the prompt or prompts determined at1208to a remote text generation modeling system. The remote text generation modeling system may then respond with one or more completed prompts. The text generation interface system may then extract relevancy information from the completed prompts. According to various embodiments, the relevance threshold may be determined in any of various ways. For example, all results that exceed a designated relevance threshold (e.g.,3out of a scale of 1-5 as shown in the example prompt template included above) may be identified. 
As another example, the most relevant results that are able to fit in a designated number (e.g., one or two) chunks may be identified. A query and filter synthesis prompt is determined at1212based on the subset of the query response clauses. In some embodiments, the query and filter synthesis prompt may be determined by combining a query and filter synthesis prompt template with information about the query and with query response clauses deemed suitable relevant at operation1210. The query information may include some or all of the query received at1102, the query and filter description determined at1106, the database query determined at1118, and/or the text filter determined at1120. An example of a query and filter synthesis prompt template in the legal context is as follows:You are helping a lawyer research the prevailing market consensus on a given type of contract clause.Using the following list of contract clauses, analyze the range of different terms for this type of clause in the context of this request from the lawyer: “{{text}}”$$CONTRACT_CLAUSE_LIST$${{documents}}$$/CONTRACT_CLAUSE_LIST$$Based on these contract clauses, and in the context of the lawyer's request, prepare:1. Range of Terms: An extensive analysis of the range of different provisions included in these clauses, following these instructions:List the dimensions on which the clauses differ, and explain the range of provisions along each of the dimensions.Focus on the range of favorability to one side or anotherOnly draw from the language in this list of clauses; do not include extraneous information.2. Average Terms: State what the average terms over the above list of contracts is over the dimensions you analyzed for question 1 above.3. Suggested Language: Draft a contract clause that is approximately average in terms when compared to the above list of clauses.4. List the clauses that were most relevant to your analysis, following this guidance:Do not include in this list any clauses that are not relevant to the request.If none of the clauses are relevant, return an empty array for results.Respond with nothing but a JSON object, with the following keys:\‘range_of_terms\’: your analysis of the range of provisions in the clause list, in the context of the lawyer's request.\‘average_terms\’: your analysis of the average provisions over the clauses in the list, in the context of the lawyer's request.\‘suggested_language\’: your draft clause with approximately average terms.\‘ids\’: (array of strings), in order of relevance, the document IDs of the documents that are most relevant to the request.Only valid JSON; check to make sure it parses, and that quotes within quotes are escaped or turned to single quotes. For the \‘answer\’ key, this could look like: “This is an answer with \\“proper quoting\\””<|endofprompt|>Here's the JSON: A query and filter response message is determined at1214based on communication with the text generation modeling system. In some embodiments, determining the query and filter response message may involve transmitting the prompt determined at1212to the remote text generation modeling system. The remote text generation modeling system may then respond with one or more completed prompts. The text generation interface system may then extract information for providing the query and filter response message. The extracted information may be used as-is or may be edited, supplemented, or otherwise altered to create the query and filter response message. 
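A minimal sketch of extracting these fields from the completed query and filter synthesis prompt is shown below, assuming the model responds with the JSON object requested in the example template above. The parse_synthesis_response name and the surrounding-text cleanup are hypothetical.

    import json

    def parse_synthesis_response(completion):
        # Keep only the JSON object in case the completion includes any
        # surrounding text, then read out the keys requested in the
        # synthesis prompt template.
        start, end = completion.find("{"), completion.rfind("}") + 1
        data = json.loads(completion[start:end])
        return {
            "range_of_terms": data.get("range_of_terms", ""),
            "average_terms": data.get("average_terms", ""),
            "suggested_language": data.get("suggested_language", ""),
            "ids": data.get("ids", []),
        }

The extracted fields may be used directly in the query and filter response message or further edited and supplemented as described above.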
A query and filter response message is transmitted at1216. In some embodiments, the query and filter response message may be provided to a client machine. The message may be sent in response to an API request, transmitted via a chat session, or provided in some other way. FIG.13illustrates a policy evaluation pre-processing method1300, performed in accordance with one or more embodiments. The method1300may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1300may be performed at the text generation interface system210. A request to analyze a set of text portions based on a policy is received at1302. In some embodiments, the request may be received via a chat interface. For instance, the text generation interface system may receive text-based messages from a client machine and then provide to the client machine text-based responses generated by a machine learning model. Alternatively, the request may be received in some other way, such as via an API request. The request may be generated automatically or based on user input. According to various embodiments, a text portion may correspond to a document, a set of documents, a portion of a document, or text outside the context of a document. Text portions may be identified in any of various ways. For example, the request received at1302may include one or more identifiers that uniquely identify individual text portions and/or groups of text portions stored in a document repository or other location accessible to the text generation interface system. As another example, the request received at1302may include a query for searching for text portions within one or more document repositories or other sources of text, and the text portions identified at1302may include results determined by executing such a search. In some implementations, the policy included in the request received at1302may include a natural language question, instruction, filter, or other such actionable text implemented in natural language. For example, the policy may specify that all documents that meet one or more criteria must include one or more terms such as a limitation of liability, legal disclaimer, or privacy notice. As another example, the policy may specify that all documents that meet one or more criteria must not include one or more terms such as an arbitration clause or force majeure clause. A determination is made at1304as to whether to subdivide the policy. In some embodiments, the determination may be made based on one or more indicators that the policy is complex. For example, a determination may be made to subdivide a policy based on its length and/or complexity. As another example, a determination may be made to subdivide the policy based on the presence, absence, or number of characteristics such as question marks, sentences, conjunctives, and other such features. The determination may be made based at least in part on a machine learning model applied to the policy to classify it in terms of complexity. If it is determined to subdivide the policy, then at1306a policy division prompt is determined for dividing the policy into subqueries. In some embodiments, the prompt may be determined by combining a prompt template with the text of the policy. The prompt template may include an instruction to divide the policy into a set of criteria. The prompt template may also include a fillable portion into which the policy text may be inserted. 
The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. At1308, two or more criteria are identified based on communication with a text generation modeling system. In some embodiments, the two or more subqueries may be identified by sending the policy division prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the policy division prompt, after which it may be sent back to the text generation interface system. The text generation interface system may then extract the subqueries from the completed policy division prompt, for instance by parsing JSON included in the completed request. A criterion is selected for analysis at1310. According to various embodiments, criteria may be analyzed in sequence, in parallel, or in any suitable order. A training data generation prompt for generating training data based on the selected criterion is determined at1312. In some embodiments, the training data generation prompt may include an instruction for instructing a text generation modeling system to generate text that matches the criterion. The training data generation prompt may include a fillable portion for including the text of the criterion. Training data for the selected criterion is determined at1314based on communication with the text generation modeling system. In some embodiments, the training data may be identified by sending the training data generation prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the training data generation prompt, after which it may be sent back to the text generation interface system. The text generation interface system may then extract the training data from the completed policy division prompt, for instance by parsing JSON included in the completed request. In some embodiments, the training data may include one or more training data text portions. Each training data text portion may include text constructed by the text generation modeling system based on the text of the criterion. For example, a training data text portion may substitute one or more of the words in the criterion for synonyms. As another example, a training data text portion may restate a criterion using a different sentence structure. A trained classification model is determined at1316based on the training data. According to various embodiments, any of a variety of classification models may be used. For instance, the classification model may include a text embedding model that positions text in a vector space. A determination is made at1318as to whether to select an additional criterion for analysis. In some implementations, additional queries may continue to be selected until all available queries are processed. If it is determined not to select an additional criterion for analysis, then a subset of the text portions is selected based on the one or more queries and the associated classification models. Additional details regarding the selection of text portions for analysis are discussed with respect to the method1400shown inFIG.14. FIG.14illustrates a text portion selection first stage method1400, performed in accordance with one or more embodiments. 
The method1400may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1400may be performed at the text generation interface system210. In some embodiments, the text portion selection first stage method1400may be performed to select a subset of text portions for evaluation against one or more criteria. Alternatively, the text portion selection second stage method1600shown inFIG.16may be performed to select a subset of text portions for evaluation against one or more criteria. As still another possibility, the text portion selection first stage method1400shown inFIG.14may be performed in conjunction with the text portion selection second stage method1600shown inFIG.16. A request is received at1402to reduce a set of text portions based on a policy. In some embodiments, the request may be generated as discussed with respect to operation106. The request may identify a policy to evaluate and a document or documents having a set of text portions that may be used to evaluate the policy. Optionally, the request may be generated after performing one or more of the preprocessing operations discussed with respect to the method1300shown inFIG.13. A text portion is selected for relevance analysis at1404. According to various embodiments, text portions may be analyzed in parallel or in sequence, and in any suitable order. A text portion type associated with the text portion is determined at1406. A machine learning model is determined at1408based on the text portion type. In some embodiments, the text portion type may be determined based on the application of a classification model. For instance, a machine learning model may be configured to classify text portions or documents into one or more of a set of types of text. Then, a machine learning model may be selected that is specific to the text portion type. In some embodiments, different types of text may be associated with different types of models. Alternatively, or additionally, a type of text may be associated with a machine learning model that is specifically trained for that type of text. A relevance score is determined at1410by comparing the text portion to one or more criteria using a machine learning model. According to various embodiments, any of a variety of machine learning models may be used. In some embodiments, a machine learning model may be implemented as a pre-trained text embedding model trained as discussed with respect toFIG.13. For instance, a machine learning model may be implemented as a bi-encoder in which text portions are separately encoded and then mapped to a common embedding space. Then, at1410, the relevance score may depend on the distance between the criterion and the text portion in the embedding space. As another example, a machine learning model may be implemented as a cross-encoder model. In a cross-encoder, all or a portion of the criterion and all or a sub-portion of the text portion may be compared in a pair model, which may be built on a transformer-based language model such as BERT (Bidirectional Encoder Representations from Transformers) or RoBERTa (Robustly Optimized BERT Pretraining Approach). FIG.15illustrates a cross-encoder modeling system, configured in accordance with one or more embodiments. The cross-encoder modeling system accepts as input both a criterion portion1502and a text portion1504. The criterion and text portions are separated in the input by a separator1506.
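By way of a concrete sketch, a pre-trained cross-encoder of this general kind could be applied to a criterion and text portion pair as follows; the library and model identifier shown are illustrative assumptions rather than features of the system described here:

from sentence_transformers import CrossEncoder

# Example model choice only; any suitable pair-scoring model could be substituted.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

criterion = "All contracts must include a limitation of liability clause."
text_portion = "Neither party's aggregate liability shall exceed the fees paid in the preceding twelve months."

# The criterion and the text portion are passed together as a single pair, analogous
# to concatenating them with a separator token, and scored jointly.
score = model.predict([(criterion, text_portion)])[0]
print(score)  # higher values indicate greater estimated relevance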
The cross-encoder modeling system employs a number of layers of cross-linked neurons1508to produce a relevance score1510. According to various embodiments, the number of layers of neurons and the number of neurons in each layer may be strategically determined for accuracy and efficiency. For instance, one or more text embedding models may be created using a training data set. The text embedding models may then be used to produce relevance scores for a number of different queries and text portions. The relevance scores may then be used to create a loss function for hyperparameter tuning of the number of layers of neurons and number of neurons per layer in a cross-encoder model. Then, the cross-encoder model may be used for future iterations without pre-training. In some embodiments, a combination of approaches may be used. For instance, in a trans-encoder, one or more bi-encoder representations may be used to fine-tune a cross-encoder. Then, the cross-encoder may be used to perform more accurate knowledge extraction using inter-sentence modeling. The resulting information may be used to improve the accuracy of the bi-encoder model. The process may be repeated to iteratively bootstrap from both the bi-encoder and the cross-encoder. A determination is made at1412as to whether the relevance score exceeds a designated threshold. According to various embodiments, the designated threshold may be strategically determined based on various factors. For example, different machine learning models may produce relevance scores having different distributions, leading to a designated threshold that is model-dependent. As another example, the designated threshold may be determined based at least in part on the number of text portions included in the request and a desired reduction of the text portions. For instance, the designated threshold may be determined so as to select a particular number or proportion of the text portions as relevant. As another example, the designated threshold may be determined so as to select more or fewer text portions as relevant, which may involve various tradeoffs. For instance, setting a lower designated threshold may result in selecting more documents as relevant, potentially leading to improved accuracy in evaluating the policy at the expense of relatively greater cost and compute time. An example of a relevance prompt in the legal context is as follows: Below are portions of two documents. One is our company's policies for contracts, the other is part of a contract that our company may enter into.$$POLICY PROVISIONS$${% for policy in policies %}$$POLICY {{policy.id}}$${{policy.text}}$$/POLICY$${% endfor %}$$/POLICY PROVISIONS$$ The following Clauses are from a {{context}}$$CONTRACT CLAUSES$${% for clause in contract_clauses %}$$CLAUSE {{clause.id}}$${{clause.text}}$$/CLAUSE$${% endfor %}$$/CONTRACT CLAUSES$$For each contract clause that applies or violates any of the provided policy provisions, provide an XML document called ‘<relevant_clause>’ with the following tags:‘policy_id’ (string): ID of the policy (please use the full ID of the policy, e.g. ‘Policy #6’ instead of just ‘6’)‘clause_id’ (string): ID of the clause (please use the full ID of the clause, e.g. ‘Clause #9’ instead of just ‘9’)‘applies’ (bool): true if the policy applies to the clause, false if it does not.
This should be true if the clause directly gives effect to or implements the policy.‘change_required’ (bool): true if the clause needs to be edited to come into compliance with the policy, false if it does not. This should be true whenever the clause violates the Policy.‘relevance_score’ (int): a 1-9 score rating how much the policy applies to the clause, 9 being the highest relevancy, 1 being the lowest.Generally, err on the side of clauses violating policy when there is ambiguity or if the violation is minor.Your answer must be thorough and complete, capturing every instance of the provided clauses being relevant to any of the provided policy provisions. You may provide lengthy answers when needed.Return an XML list of the objects following the above criteria.If nothing is relevant, return a single XML object with the sole key of ‘no_matches’ (bool) equalling true.<|endofprompt|>Here's the XML documents following your instructions: If it is determined that the relevance score does not exceed the designated threshold, then at1414the selected text portion is excluded for policy analysis. If instead it is determined that the relevance score does exceed the designated threshold, then at1416the selected text portion is included for policy analysis. A determination is made at1418as to whether to select an additional text portion for analysis. According to various embodiments, text portions may continue to be selected until all available text portions have been analyzed for relevance. If it is determined not to select an additional text portion for analysis, then at1420the policy is evaluated based on the included text portions. According to various embodiments, evaluation of the policy may involve communicating with a text generation modeling system using the selected text portion. In some implementations, evaluation of the policy may involve implementing one or more elements from workflows discussed herein. Optionally, the text portions may be reduced further, for instance as described with respect to the method1600shown inFIG.16. FIG.16illustrates a text portion selection second stage method1600, performed in accordance with one or more embodiments. The method1600may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1600may be performed at the text generation interface system210. A request is received at1602to reduce a set of text portions based on a policy. In some embodiments, the request may be generated as discussed with respect to operation108. The request may identify a policy to evaluate and a set of text portions that may be used to evaluate the policy. Optionally, the request may be generated after performing one or more of the preprocessing operations discussed with respect to the method1300shown inFIG.13and/or one or more of the text portion selection operations discussed with respect to the method1400shown inFIG.14. One or more text portions are selected for analysis at1604. In some embodiments, text portions may be selected so as to fit within a designated chunk size. Additional details regarding the division of text into chunks are discussed with respect to the method600shown inFIG.6. A relevance prompt is determined at1606based on the selected one or more text portions. In some embodiments, the relevance prompt template may also include an instruction to the text generation modeling system to evaluate and/or rank the included text portions for relevance against the policy. 
The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. Relevance scores for the selected one or more text portions are determined at1608based on communication with a text generation modeling system. In some embodiments, the relevance scores may be identified by sending the relevance prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the relevance prompt, after which it may be sent back to the text generation interface system. The text generation interface system may then extract the relevance scores from the completed prompt, for instance by parsing JSON included in the completed request. In particular embodiments, the relevance prompts may be implemented as high-read, low-write. In such a configuration, the text generation modeling system may be instructed to provide a small amount of feedback for a text portion rather than to generate a description in natural language. For instance, the text generation modeling system may be asked to provide a sequence of numbers corresponding to relevance scores for the sequence of text portions. In this way, the cost associated with interacting with the text generation modeling system may be reduced. A subset of the selected one or more text portions is selected as relevant at1610based on the relevance scores. According to various embodiments, the subset of the text portions may be selected as relevant based on a comparison of the relevance score against a designated threshold. As discussed with respect to the operation1412shown inFIG.14, a relevance threshold may be determined based on various factors. A determination is made at1612as to whether to select an additional text portion for analysis. According to various embodiments, additional text portions may continue to be selected until all available text portions have been analyzed for relevance. If it is determined not to select an additional text portion for analysis, then at1614the policy is evaluated based on the text portions selected as relevant. According to various embodiments, evaluating the policy may involve communicating with a text generation modeling system using the selected text portion. Additional details regarding policy evaluation are discussed with respect to the method1700shown inFIG.17. FIG.17illustrates a policy evaluation method1700, performed in accordance with one or more embodiments. The method1700may be used to evaluate a natural language document for compliance with a policy specified in natural language. A request to evaluate a document for compliance with a policy is received at1702. In some embodiments, the request may be received via a chat interface. For instance, the text generation interface system may receive text-based messages from a client machine and then provide to the client machine text-based responses generated by a machine learning model. Alternatively, the request may be received in some other way, such as via an API request. The request may be generated automatically or based on user input. In some embodiments, the request received at1702may identify a policy. A policy may be provided via user input and included in a chat interface.
Alternatively, or additionally, a policy may be identified by reference to a file or other configuration information accessible to the system. The policy may include one or more criteria of any type capable of being expressed in natural language and applicable to documents written in natural language. For instance, a criterion may specify that documents of a particular type must include or exclude a particular stipulation, disclaimer, requirement or other type of language. Context information for the document is determined at1704. In some implementations, determining context information for the document may involve creating a prompt that instructs a text generation model implemented at a text generation modeling system to identify the relevant information from the document. Such a prompt may be created by combining information about the document with a context information template. An example of such a template in the legal context is as follows:The following is the beginning of a contract.{% for contract_section in paragraphs %}{{contract_section.text}}{% endfor %}Please write a sentence stating the contract's title, the type of contract it is, the names of the parties to the contract, and the terms that will be used to refer to the parties in the rest of the contract.<|endofprompt|> One or more portions of the document are selected for analysis at1706. According to various embodiments, a document may be divided into portions suitable for analysis. For instance, a contract may be divided into clauses. A document portion may be composed of one or more sentences, paragraphs, sections, pages, or other suitable units. In some embodiments, the division of a document into portions may depend on a maximum size associated with a chunk that may be included in a text generation prompt. Additional details regarding the division of text into chunks are discussed with respect to the method600shown inFIG.6. A filter prompt is determined at1708based on the selected one or more text portions. In some embodiments, the filter prompt may include an instruction to a large language model to identify any of the selected portions of the document that are potentially relevant to the policy identified at1702. In some implementations, the filter prompt may be determined by combining the selected one or more portions of the document, some or all of the context information determined at1704, the policy identified at1702, a previously generated restatement (i.e., “comprehension”) of the policy generated by the large language model, and a prompt template. The prompt template may include one or more fillable portions in which this information can be inserted. An example of such a prompt template is as follows:Below are portions of two documents. One is our company's policies for contracts. They represent the kinds of terms we want all our contracts to adhere to. The other document is part of a contract that our company may enter into. We want to figure out if the policy in the policy document applies to which Clauses in the contract.$$POLICY PROVISION$$<policy><id>{{policy.id}}</id><text>{{policy.text}}</text></policy>$$/POLICY PROVISION$$The following are the Contract Clauses and they are from a {{context}}$$CONTRACT CLAUSES$${%- for clause in contract_clauses %}<clause><id>{{clause.id}}</id><text>{{clause.text}}</text></clause>{% endfor -%}$$/CONTRACT CLAUSES$$For each of the Contract Clause and the Policy Provision, decide whether the Policy Provision applies to this type of Contract Clause.
A Policy Provision applies to the Clause when they deal with the same type of right or obligation. The Policy Provision can apply to a Clause when the clause complies with the Policy Provision and when the Clause conflicts with the Policy Provision.A few guidelines for your answer:It will often be the case that a Policy Provision will apply to none of the Contract Clauses in this part of the contract. This is not a problem. When this happens, simply respond by saying APPLIES TO NONE.Sometimes, multiple Policy Provisions will apply to the same Contract Clause, or one Policy Provision will apply to multiple Contract Clauses. If that happens, please list all such matches in your answer.If a Policy Provision applies only when multiple Contract Clauses are considered together, state that the Policy Provision applies to all such Contract Clauses. Give your answer in the following format for the Policy Provision:<policy><policy_comprehension>[restate the Policy in clear terms to show that you understood the Policy. Do not simply restate the policy verbatim, put it in your own words as clearly as you can to show that you understand it.]</policy_comprehension><what_to_look_for>[briefly summarize what sorts of Contract Clauses you should be looking for to which the Policy Provision applies. These should be the kind of Clauses that either implement or conflict with the Policy i.e. Clauses that either restate the Policy (or part of it) in different words or Clauses that violate the policy. What would a Clause look like that would bring the contract into compliance with the Policy?]</what_to_look_for><clauses>[if there are any Clauses to which the Policy Provision applies, list their IDs here (e.g. CC6 CC8 CC9). If the Policy Provision applies to none of the clauses, write “none” (all lower case) here. Do not include any explanation within this tag, just the list. If a Clause applies to only part of the Policy, you must always include it.]</clauses><context_clauses>[where a clause in the “clauses” list above is part of a list that has an introductory clause, and that introductory clause is needed for someone to understand what this clause means, list any such introductory clauses here in the same pipe character separated format. If the clauses above speak for themselves and no further clauses are needed to provide context, write “none” (all lower case) here. Do not include any explanation within this tag, just the list]</context_clauses><explanation>[A very brief explanation of why you said the Policy Provision either applies to these Clauses, or, if it didn't apply to any, very briefly explain why. Remember a Clause that does not comply with the Policy always applies.]</explanation></policy><|endofprompt|><policy>{% if cached_comprehension %}{{cached_comprehension}}{% endif %} A subset of the one or more portions of the document that are relevant to the policy is identified at1710. In some embodiments, the subset of the portions that are relevant may be identified by transmitting the filter prompt determined at1708to a remote text generation modeling system. The remote text generation modeling system may then transmit as a response a completed filter prompt. In the completed filter prompt, none, some, or all of the document portions selected at1706may be identified as relevant. 
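For instance, if the completion follows the tagged format of the template above, the identifiers of the relevant clauses could be recovered with a small parser along the following lines (a sketch; the function name is illustrative):

import re

def extract_relevant_clause_ids(completion: str) -> list[str]:
    # Pull the contents of the <clauses> tag from the completed filter prompt.
    match = re.search(r"<clauses>(.*?)</clauses>", completion, re.DOTALL)
    if match is None:
        return []
    body = match.group(1).strip()
    # The template asks for "none" when no clause applies.
    if body.lower() == "none":
        return []
    # Otherwise the identifiers are expected as a whitespace-separated list, e.g. "CC6 CC8 CC9".
    return body.split()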
In some embodiments, document portions may be identified as relevant based on the remote text generation modeling system including the entire text of the portion identified as relevant in a suitable portion of the completed filter prompt. Alternatively, or additionally, the remote text generation modeling system may include an identifier for a document portion instead of including text from the document portion. A determination is made at1712as to whether to select an additional document portion for analysis. According to various embodiments, document portions may be analyzed in parallel or in sequence, and in any suitable order. Document portions may continue to be selected until all available portions of the document have been analyzed. Alternatively, document portions may continue to be analyzed until a terminating condition is met, such as the system reaching a conclusion about the application of a policy to a document. If it is determined not to select an additional document portion for analysis, then at1714a policy analysis prompt is determined based on the subset of the portions of the document identified as relevant. In some embodiments, the policy analysis prompt may be determined by combining the text of the policy identified at1702, some or all of the text information determined at1704, and the document portions identified as relevant at1710with a prompt template that includes one or more fillable portions for including such information. An example of a prompt template is as follows:The following Contract Clauses are suspected to conflict with a company Policy that applies to its contracts.Please analyze the Policy below and the Clauses and answer the following questions for each clause:1. Actually conflicting? Decide whether the Clause conflicts with the Policy. Answer Yes or No. The following are guidelines in deciding whether a Clause conflicts with a Policy:We suspect these clauses do conflict, so err on the side of saying it conflicts when in doubt.Take into account the other Clauses listed here—if they cover areas of the Policy that this clause misses in such a way that, taken together, make the contract comply with the policy, then there is no conflict for this Clause (though other clauses may still conflict).Even seemingly small or insignificant differences count as conflicts because they are of legal significance, e.g. if the Policy says something shall be delivered by truck, and the clause says by car—that would count as a conflict.Clauses that are not related to the policy (deal with wholly different subject matter) should not count as conflicting with the policy. They are just orthogonal to the policy and neither conflict nor comply with it, and should be marked as No actual conflict.A Clause that partially complies with the Policy, but would need changes to fully comply must necessarily conflict with the Policy.2. Differences between the Policy and the Clause? If it does conflict, explain all the ways the Clause does and doesn't conflict with the Policy (or write N/A if it doesn't conflict at all).3. Risks? Explain the risks of keeping the Clause as-is. The mere fact that it “conflicts with a company policy” does not count as a risk here, you must explain substantively what the risks are given that the Policy is violated/conflicted with in this way.4. New clause suggestion. Suggest a new clause that would comply with the Policy, while preserving as much of the original Clause as possible. You must reproduce the whole clause here including the changes you suggest. 
Where appropriate, you should use the same capitalized Defined Terms that the original clause uses.Here is the Policy:{{policy.text}}Here is the Contract Clause from a {{context}}:<clauses>{% for clause in contract_clauses %}<clause><id>{{clause.id}}</id><text>{{clause.text}}</text></clause>{% endfor %}</clauses>Please put your answer in the following XML format with the answers for each clause in its own separate <conflict_check> tag. Never refer to a <clause> by its <clause_id>, instead only refer to a <clause> as the “Clause” rather than “CC——————”. Include a conflict_check tag for every clause, even for those that are not actually conflicting:<conflict_check><clause_id>[the ID of the clause]<policy_comprehension>[list out the details of the policy to show you understand it. List all the rights and obligations the policy requires and which parties those rights and obligations belong to step by step. For example, if the Policy was “The supplier shall issue an accurate and valid invoice for the prices of Goods and/or Services in accordance with the contract, or, where no express provision is detailed, shall send an electronic invoice, monthly in arrears, clearly referencing the Contract”, you should write something like:“The supplier must do one of two things:1. Issue an invoice that isa. accurateb. validc. includes the prices of Goods and/or Servicesd. is in accordance with the contractOR, if no express provision is detailed2. Send an electronic invoice thata. is sent monthly detailing charges for the past monthb. clearly references the Contract”]<similarities_and_differences>[List all the ways in which the Contract Clause complies and/or does not comply with the Policy Provision. Be sure to check every element of the Policy that you listed in <policy_comprehension> and explain how the Clause is the same or different for that element. If you need to refer to the clause, just call it “this clause”]<actual_relevance>[int rating from 1-10 of whether this Policy is actually relevant to this Clause at all, with 10 being highly relevant and 1 being not relevant at all. Give only the int rating. Do not attempt to explain in this tag]<actual_conflict>[Based on your analysis above, decide Yes or No on whether the Clause actually conflicts with the Policy. Remember to consider all the clauses together in deciding whether there is a conflict]<risks>[Risks associated with adopting the non-complying Contract Clause as-is, or N/A]<suggested_revision>[Full text of a contract clause that would comply with the Policy Provision while retaining as much of the original clause as possible, or N/A. The full text of the contract clause should not simply be a verbatim recitation of the policy. Do not use special formatting to show what has changed. Never refer to the “policy” in this revised clause.]</conflict_check><|endofprompt|>Here is the XML and nothing else: A policy evaluation message is determined at1716based on a completed policy analysis prompt. In some embodiments, the completed policy analysis prompt may be determined by sending the input policy analysis prompt determined at1714to an remote text generation modeling system, for instance via an API request. The remote text generation modeling system may then complete the policy analysis prompt and return it to the text generation interface system. In some implementations, the policy evaluation message may include an indication as to whether a particular clause or document portion is relevant to the policy. 
For example, the relevance clause may be ranked on a scale of 1-10. If the clause is relevant to the policy, then the policy evaluation message may indicate whether the clause complies with or conflicts with the policy. In some embodiments, a determination that a clause does not comply with or conflicts with a policy may lead to the policy evaluation message including one or more explanations regarding the discrepancy. For example, the policy evaluation message may include an explanation as to the difference between the policy and the clause. As another example, the policy evaluation message may include an explanation of one or more risks of non-compliance. In some embodiments, a determination that a clause does not comply with or conflicts with a policy may lead to the policy evaluation message including a proposed revision. For example, a new clause may be determined that is as close as possible to the original while nevertheless complying with the policy. As another example, a difference between the original clause and the proposed new clause may be included for the purpose of comparison. According to various embodiments, clause-level evaluation of compliance with a policy may be aggregated to the document level. For instance, if a document is required to include a particular disclaimer but the system determines that no clause in the document is relevant to the disclaimer, then the document may be identified as being noncompliant with the policy, and a proposal may be provided that the disclaimer be added to the document. FIG.18illustrates a method1800for identifying text components for document structure discovery, performed in accordance with one or more embodiments. According to various embodiments, the method1800may be performed at a computing device such as one or more devices within the text generation interface system210shown inFIG.2. At1802, a request is received to determine a structure for a document. According to various embodiments, the request may be received in association with a document processing procedure. For example, the request may be received in association with a document summarization method such as the method900shown inFIG.9. As another example, the request may be received in association with a request to update a database and/or query and filter a database based on document text, as is shown inFIGS.10-12. As yet another example, the request may be received in association with a request to evaluate one or more documents based on one or more policies, as is shown inFIGS.13-17. One or more text portions for the document are determined at1804. In some embodiments, the one or more text portions may be determined as discussed with respect to the document parsing method300shown inFIG.3. Each portion of text may correspond to a sentence, a paragraph, a section, or some other suitable division. In some embodiments, a text portion may be identified by use of a tag in a markup language such as XML. For example, in the following text passage, two different text portions (i.e., CC8 and CC9) were identified via XML tags.<CC8>ARTICLE 1 DEFINITIONS</CC8><CC9>1.1 “Approval Achievement Date” means the earlier of the: (i) date on which Acme receives marketing approval for a Development Product in one-half of the countries included in the Sublicensed Territory, as defined in the Sublicense Agreement; or (ii) the payment by Acme to BigCo of Development Fees hereunder of $1.0 million.</CC9> A regular expression prompt template is determined at1806. 
In some implementations, a regular expression prompt template may include at least two components. First, the regular expression prompt template may include one or more fillable portions that may be filled with text from a document to create a regular expression prompt. A fillable portion may be specified via a markup language. For instance, a fillable portion may include language such as <text portion>, which may be replaced with an actual text portion to create a regular expression prompt. Second, the regular expression prompt template may include one or more natural language instructions instructing a large language model to generate one or more regular expressions. In some embodiments, the natural language instructions may be implemented in natural language, not computer code. The natural language instructions may include information such as a format to be used for generating the one or more regular expressions, an example of a regular expression to generate, and the like. The natural language instructions may also include other information, such as an instruction to associate a regular expression with a document structure level, a markup tag, or other such information. An example of a regular expression prompt template that may be used to generate regular expressions is as follows. In the following example, the fillable portion “{% for clause in clauses %}<CC{{loop.index0}}>{{clause.text}}</CC{{loop.index0}}>{% endfor %}” indicates where to insert the input text portions to create the regular expression prompt from the regular expression prompt template.#PurposeYou are an advanced legal AI assistant, proficient in understanding and generating an XML schema that outlines a contract. Your task is to analyze clauses from a contract and create an XML representation of the contract's structure, identifying how the sections and subsections are denoted and organized.##InstructionsPlease consider the following instructions:1. Examine the contract structure. Contracts generally use prefixing to indicate the hierarchy of sections. Use the provided clauses to determine:The number of layers in the contractThe prefixes used to denote hierarchy levelsRegex patterns that can be used to identify these prefixes2. Format your response as XML tags. Each tag must have the following attributes:level: indicates a level of sectioning in the contractpattern: regex pattern that can be used to identify the prefix or formattingexample: an example of the prefix ##Examples Input Example:<CC0>APP ANNIE MASTER SUBSCRIPTION AGREEMENT (“MSA”)</CC0><CC1>1. Definitions. Any capitalized terms not defined in this MSA will have the meaning set forth in the Agreement. </CC1><CC2>1.1 “App Annie” means the App Annie entity set forth in the Order Form. </CC2><CC3>1.2 “Customer” means the entity that signs the Order Form and expressly excludes any related entities, affiliates, subsidiaries, partners, customers, clients, or third-party agents.</CC3><CC4>1.3 “Subscription Start Date” has the meaning set forth in the initial Order Form.</CC4><CC5>1.4 “Order Form” means an ordering document for the Services that incorporates this MSA by reference and is entered into by the parties. 1.5 “Services” means those services identified in the Order Form.</CC5><CC6>1.6 “Subscription Term” means the term of the subscription identified in the applicable Order Form, including all renewals, for the Services.</CC6><CC7>2. Payment.</CC7><CC8>2.1 Customer agrees to pay the fees set forth in the Order Form. 
Unless otherwise expressly stated in an Order Form, all payments are due in United States Dollars. Customer will pay all wire, electronic transfer, and administrative fees associated with its payment of fees under the Agreement; such fees may not be deducted from the amount payable to App Annie hereunder. Payment obligations are non-cancelable, fees paid are non-refundable, and Customer shall not withhold, reduce, or set-off fees owed under the Agreement.</CC8><CC9>(a) Payment obligations may be cancelable only upon App Annie's written permission;</CC9>Output Example:<contract><section level=“1” pattern=“\d+\.” example=“1.”><subsection level=“2” pattern=“\d+\.\d+\s” example=“1.1”><subsection level=“3” pattern=“\([a-z]\)” example=“(a)”></subsection></subsection></section></contract>##InputThe input includes parts of a contract split into individual clauses. Your task is to verify if the clauses were correctly separated. Note that a single clause may contain multiple subclauses or sections, and your job is to generate regex patterns that can further segment these clauses.The clauses are represented as follows:{% for clause in clauses %}<CC{{loop.index0}}>{{clause.text}}</CC{{loop.index0}}>{% endfor %}##OutputHere is your response of the output with the XML schema showcasing the structure of this contract:<|endofprompt|><contract> One or more regular expression prompts are determined at1808based on the regular expression prompt template and the one or more text portions. In some embodiments, a regular expression prompt may be determined by replacing a markup portion of a regular expression prompt template identifying a location at which to insert one or more text portions with one or more of the text portions determined at1804. In some embodiments, a single regular expression prompt template may be generated. For instance, text portions may be selected from the beginning of the document, from the end of the document, or throughout the document until a designated length threshold is reached. In some embodiments, multiple regular expression prompt templates may be generated. For instance, some or all of the text portions may be divided into different regular expression prompt templates, which may then be used independently to identify regular expressions. The one or more regular expression prompts are transmitted to a large language model for completion at1810. In some embodiments, the regular expression prompt may be transmitted to the large language model via the model API interface252shown inFIG.2. In some embodiments, the large language model may the execute the one or more natural language instructions using the text portions included in the prompt to determine one or more regular expressions. The large language model may then complete the prompt by adding these regular expressions in accordance with the instructions. One or more response messages are received from the large language model at1812. The response messages are parsed to identify one or more regular expressions at1814. In some embodiments, parsing a response message may involve extracting from the response message a portion corresponding to a regular expression. In the event that more than one response message is received, as may be the case if more than one prompt is created and sent, then regular expressions extracted from the different response messages may be deduplicated. According to various embodiments, regular expressions may be specified in any suitable regular expression language. 
Examples of such languages include, but are not limited to: Python, Java, JavaScript, R, C, and C++. In particular embodiments, regular expressions may be provided in the context of an overview of the document structure, with the regular expressions identifying text that signifies a new section. For example, the following text passage determined by a large language model based on the input text portions identified above includes three different regular expressions corresponding to different levels of the document structure:<contract><section level=“1” pattern=“ARTICLE\s\d+” example=“ARTICLE 1”><subsection level=“2” pattern=“\d+\.\d+\s” example=“1.1”><subsection level=“3” pattern=“\(i\)|\(ii)\)” example=“(i)”></subsection></subsection></section></contract> The one or more text portions are disaggregated at1816based on the one or more regular expressions. In some embodiments, disaggregating the one or more text portions may involve applying the one or more regular expressions to the text portions to subdivide the text portions into smaller portions where appropriate and to provide structure metadata for the text portions. Additional details regarding the disaggregation and structuring of the text portions are discussed with respect to the method1900shown inFIG.19and the method2000shown inFIG.20. FIG.19illustrates a method1900for disaggregating text for document structure discovery, performed in accordance with one or more embodiments. According to various embodiments, the method1900may be performed at a computing device such as one or more devices within the text generation interface system210shown inFIG.2. A request to disaggregate one or more text portions for a document based on one or more regular expressions is received at1902. In some embodiments, the request may be generated as discussed with respect to the operation1816shown inFIG.18. A regular expression is selected for analysis at1904. In some embodiments, the regular expressions may be determined as discussed with respect to the operation1814shown inFIG.18. Regular expressions may be selected for analysis in any suitable order. In some embodiments, regular expressions may be selected for analysis in order of their place in a hierarchical structure, in a top-down fashion. For example, a regular expression that identifies a document heading may be selected for analysis before one that identifies a document subheading, which in turn may be selected for analysis before one that identifies a text passage that falls within a document subheading. In some embodiments, regular expressions may be selected for analysis in order of their place in a hierarchical structure, in a bottom-up fashion. For example, a regular expression that identifies a document heading may be selected for analysis after one that identifies a document subheading, which in turn may be selected for analysis after one that identifies a text passage that falls within a document subheading. A text portion is selected for analysis at1906. According to various embodiments, text portions may be selected in sequence or in any suitable order. Text portions may be analyzed sequentially or in parallel. A determination may be made at1908as to whether the regular expression matches the selected text portion. The regular expression may be applied to the text portion by executing one or more programming instructions that receive as input both the text portion and the regular expression. 
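As a minimal sketch of such instructions (with an illustrative pattern rather than one produced by the large language model), the match test, together with the subdivision described below, might be implemented as follows:

import re

# Illustrative pattern resembling the level-three pattern in the example above,
# matching prefixes such as "(i)" or "(ii)".
SUBSECTION_PATTERN = re.compile(r"\((?:i{1,3}|iv)\)")

def subdivide(text_portion: str, pattern: re.Pattern) -> list[str]:
    matches = list(pattern.finditer(text_portion))
    if not matches:
        # No match: the text portion is left as a single portion.
        return [text_portion]
    parts = []
    # Any text preceding the first match becomes its own sub-portion.
    if matches[0].start() > 0:
        parts.append(text_portion[: matches[0].start()].strip())
    # Each match begins a new sub-portion running to the next match or the end of the text.
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text_portion)
        parts.append(text_portion[m.start():end].strip())
    return [p for p in parts if p]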
If it is determined that the regular expression matches the selected text portion, then the selected text portion is subdivided into one or more sub-portions at1910. The selected text portion may be subdivided in accordance with the regular expression. For example, the regular expression may include two or more components corresponding to the one or more sub-portions. As another example, the regular expression may match a first part of the text portion and not match a second part of the text portion, with the first and second parts then corresponding to different sub-portions. In some embodiments, text sub-portions determined by subdivision at1910may be treated as text portions for the purpose of further regular expression analysis. That is, when a text sub-portion is determined, that text sub-portion may be added to the list of text portions so that it may be analyzed to determine whether it matches any regular expressions and should be subdivided again. One or more metadata elements for the sub-portions are determined at1912. In some embodiments, a sub-portion of text may be associated with one or more metadata elements that identify, for instance, the regular expression corresponding with the sub-portion, an identifier for the sub-portion, or any other suitable information. In some embodiments, a metadata portion may be a new tag for a text portion. For instance, after applying the regular expressions to the text in the example provided above, the disaggregated text portions present after the application of the regular expressions may be identified via tags (e.g., XML) tags as shown in the following example:<CC8>ARTICLE 1 DEFINITIONS</CC8><CC9>1.1 “Approval Achievement Date” means the earlier of the: </CC9><CC10>(i) date on which Acme receives marketing approval for a DevelopmentProduct in one-half of the countries included in the Sublicensed Territory, as defined in the Sublicense Agreement; or </CC10><CC11>(ii) the payment by Acme to BigCo of Development Fees hereunder of$</CC11><CC12>1.0 million.</CC12> A determination is made at1914as to whether to select an additional text portion for analysis. According to various embodiments, additional text portions may be selected until all text portions have been analyzed. For instance, additional text portions may be selected until a determination is made that the selected regular expression has been applied to all of the text portions. A determination is made at1916as to whether to select an additional regular expression for analysis. In some embodiments, analysis may continue until all regular expressions have been selected. In some embodiments, the operations shown inFIG.19, or indeed in any method discussed herein, may be performed in an order different from that shown. For instance, inFIG.19, a regular expression is shown as being first selected and then iteratively applied to text portions. However, alternatively, a text portion may be selected first and then iteratively divided via one or more regular expressions. At1918, a document structure is determined based on the disaggregated text portions and metadata elements. In some embodiments, the document structure may be determined as discussed with respect to the method2000shown inFIG.20. FIG.20illustrates a method2000of determining a document structure, performed in accordance with one or more embodiments. According to various embodiments, the method2000may be performed at a computing device such as one or more devices within the text generation interface system210shown inFIG.2. 
A request to determine a document structure for a document associated with a set of disaggregated text portions is received at2002. In some implementations, the request may be generated as discussed with respect to the operation1918shown inFIG.19. A document structure prompt template is identified at2004. In some implementations, a document structure prompt template may include at least two components. First, the document structure prompt template may include one or more fillable portions that may be filled with information selected from disaggregated text portions. A fillable portion may be specified via a markup language. For instance, a fillable portion may include language such as <text portion>, which may be replaced with information selected from a disaggregated text portion to create a document structure prompt. Second, the document structure prompt template may include one or more natural language instructions instructing a large language model to generate structural information. In some embodiments, the natural language instructions may be implemented in natural language, not computer code. The natural language instructions may include information such as a format to be used for generating the one or more structural information, an example of structural information to generate, and the like. An example of a document structure prompt template is as follows. In the following example, the fillable portion “{{example}}” may be used to provide an example of the hierarchical arrangement of text portions. Similarly, the fillable portions “{{root_clause.clause.text}}” and “{% for clause in clauses %}”, “{{clause.idx}}”, “{{clause.text }}”, “{{clause.idx}}”, and “{% endfor %}” indicate where to insert text and metadata information (e.g., a clause index) for the text portions.#PurposeYou are a competent legal A.I. assistant. Your role is to generate an XML schema that accurately represents the hierarchical structure of a legal contract.##StructureBelow is an indicative representation of the contract's structure, and an example of how each prefix corresponds to the formats and prefixes used in the document. Use this information to discern the hierarchical levels of clauses while analyzing a contract section.Here is a representation of the contract hierarchy:{{example}}##InstructionsWhen constructing the XML schema's hierarchy, adhere to these instructions:1. Evaluate the given contract clauses to discern the structure of the contract. Prefixes, numbering patterns, or formatting like bullet points can signify different levels of hierarchy in the contract. Aim to classify all CC #tags with a ‘|v|’ attribute, which denotes the hierarchical level they belong to, based on the context and prefix used.2. At times, the text of a single coherent contract clause may be divided across multiple <CC #> tags. If the split parts form one continuous thought when combined, they should be merged in the XML output. Indicate this merging by including all relevant CC #values separated by a space in the ‘ids’ attribute, e.g., <clause ids=“C# CC #”>.Note: Consider a ‘continuous thought’ to be a statement or clause that conveys one complete idea or concept. For instance, a definition of a term is one continuous thought, even if it's spread over multiple CC #tags. 
Similarly, if a sentence in one tag abruptly ends, and the sentence in the next tag logically continues the thought, they should be considered as one continuous thought.Don't make the mistake of merging clauses solely because they fall under the same subsection and use similar prefixes. Each clause should be considered a separate entity unless the continuity of thought is broken across tags. In such cases, look for signs of abrupt discontinuation or new points starting within the same level of the hierarchy.Additionally, observe if a single coherent prefix or numbering pattern is split across multiple tags. This can indicate a need to merge those clauses, as they likely form one continuous thought. However, remember that continuity of thought is paramount, and clauses should be merged even without a distinct prefix if they clearly belong together.3. Some clauses serve as definitions for specific terms in the contract. In these instances, add an ‘is_definition’ attribute to the XML flag to indicate that it is a defined term, e.g., is_definition=“Defined Term”. Generally, this should be done only when it is evident that a contract clause is from a definitions section and is explicitly formatted to define a specific word or phrase.4. A clause may also contain references to other sections of the contract. In these cases, include a ‘references’ attribute in the XML tag to specify the sections being referred to. The references should be separated by a ‘|’ in the attribute, e.g., references=“4.2|12|14”.5. Your output must include every contract clause provided in the input. Every CC #tag from the input must be represented in the response, ensuring the complete contract is accurately captured in the schema. ##Examples Input Example:<contract_root>APP ANNIE MASTER SUBSCRIPTION AGREEMENT (“MSA”)</contract_root>. . .<CC1>1. Definitions. Any capitalized terms not defined in this MSA will have the meaning set forth in the Agreement. </CC1><CC2>1.1 “App Annie” means the App Annie entity set forth in the Order Form. </CC2><CC3>1.2 “Customer” means the entity that signs the Order Form and expressly excludes any related entities, affiliates, subsidiaries, partners, customers, clients, or third-party agents. </CC3><CC4>1.3 “Subscription Start Date” has the meaning set forth in the initial Order Form. </CC4><CC5>1.4 “Order Form” means an ordering document for the Services that incorporates this MSA by reference and is entered into by the parties. </CC5><CC6>1.5 “Services” means those services identified in the Order Form. </CC6><CC7>1.6 “Subscription Term” means the term of the subscription identified in the applicable Order Form, including all renewals, for the Services. </CC7><CC8>2. Payment. </CC8><CC9>2.</CC9><CC10>1 Customer agrees to pay the fees set forth in the Order Form. Unless otherwise expressly stated in an Order Form, all payments are due in United States Dollars. Customer will pay all wire, electronic transfer, and administrative fees associated with its payment of fees under the Agreement; such fees may not be deducted from the amount payable to App Annie hereunder. Payment obligations are non-cancelable, fees paid are non-refundable, and Customer shall not withhold, reduce, or set-off fees owed under the Agreement. 
</CC10><CC11>2.2 If Customer in good faith disputes the accuracy of any portion of an App Annie invoice, then Customer shall pay all undisputed amounts when due, but may withhold any portion that is disputed in good faith pending resolution of the dispute, provided that Customer provides App Annie with written notice of such dispute within thirty (30) days of receipt of the invoice and provides reasonable detail for the basis of such dispute; otherwise such invoice will be deemed undisputed and due. If it is determined that Customer owes the disputed charges, then such charges will be paid with interest accrued beginning on the date such charges were originally due at the rate of </CC11><CC12>1.5% per month or the maximum rate permitted by law, whichever is lower, up until the date of receipt of payment. </CC12><CC13>2.3 Customer is responsible to maintain complete and accurate billing and contact information with App Annie to avoid termination or interruption of the Services. If Customer fails to pay any amount owed by the date such amount is due then App Annie may, without limiting its rights and remedies: </CC13><CC14>(a) suspend or terminate Customer's use of the Services until such amounts are paid in full; and </CC14><CC15>(b) charge Customer interest on the outstanding amount at the rate of </CC15><CC16>1.5% per month or the maximum rate permitted by law, whichever is lower. Customer agrees to reimburse</CC16><CC17>App Annie for all costs, expenses and attorneys' fees to collect past due balances and interest. </CC17>Output Example:<clause ids=“CC1”|v|=“1”><clause ids=“CC2”|v|=“2” is_definition=“App Annie”><clause ids=“CC3”|v|=“2” is_definition=“Customer”><clause ids=“CC4”|v|=“2” is_definition=“Subscription Start Date”><clause ids=“CC5”|v|=“2” is_definition=“Order Form”><clause ids=“CC6”|v|=“2” is_definition=“Services”><clause ids=“CC7”|v|=“2” is_definition=“Subscription Term”><clause ids=“CC8”|v|=“1”><clause ids=“CC9 CC10”|v|=“2”><clause ids=“CC11 CC12”|v|=“2”><clause ids=“CC13”|v|=“2”><clause ids=“CC14”|v|=“3”><clause ids=“CC15 CC16 CC17”|v|=“3”>##InputHere are the contract clauses for a contract:<contract_root>{{root_clause.clause.text}}</contract_root>. . .{% for clause in clauses %}<CC{{clause.idx}}>{{clause.text}}</CC{{clause.idx}}>{% endfor %}##Output:Here is your response:<|endofprompt|> A tree representation for the document is initialized at 2006. According to various embodiments, the tree representation may be implemented in one or more of a variety of ways. For example, the tree representation may be implemented as a data structure in a programming language. As another example, the tree representation may be implemented as a structured document. For instance, the tree representation may be implemented as a JSON document, as an XML document, or as another type of markup language document. A subset of the disaggregated text portions is selected at2008. In some embodiments, the subset of the disaggregated text portions may be selected by selecting disaggregated text portions that fall below a designated size threshold. In this way, the selected subset may be combined with the document structure prompt template to determine a document structure prompt that is sufficiently small so as to be completed by a large language model without exceeding a maximum token size for the large language model. An initial level for the subset of the disaggregated text portions is determined at2010. In some embodiments, the disaggregated text portions may be divided into subsets. 
In such a situation, without having an initial level in a hierarchy identified for the subset of the disaggregated text portions, the large language model may have no way of knowing where the subset of the disaggregated text portions sits in the hierarchy. Accordingly, the initial level may be identified prior to determining a document structure prompt. For instance, the initial level may indicate a level in the hierarchy or tree corresponding to the first disaggregated text portion in the subset of the disaggregated text portions. Such information may be identified, for instance, via the method1800shown inFIG.18. In this way, continuity between successive subsets of disaggregated text portions may be maintained. For instance, in the following example, the model may be informed that clause <CC16> starts at the same level as <CC15> (e.g., level 3), so as to properly connect the clauses to the rest of the text portions across chunk breakpoints:<CC14>2.1<CC14><CC15>(i) . . . <CC15>——Chunk Breakpoint——<CC16>(ii) . . . <CC16><CC17>2.2<CC17> A document structure prompt is determined at2012based on the document structure prompt template and the selected subset of disaggregated text portions. In some embodiments, the document structure prompt may be determined by filling one or more fillable portions of the document structure prompt template with the subset of the disaggregated text portions selected at2008. The document structure prompt is transmitted to a large language model at2014. According to various embodiments, the document structure prompt may be transmitted to the large language model via the model API interface252shown inFIG.2. In some embodiments, the large language model may then execute the one or more natural language instructions using the text portions included in the prompt to determine the structural information. The large language model may then complete the prompt by adding the structural information in accordance with the instructions. According to various embodiments, the large language model may determine one or more of a variety of types of information about a disaggregated text portion. For example, the large language model may determine information such as an original identifier, an updated identifier, structure level information, definitional information, reference number information, and/or any other suitable information. In some embodiments, an identifier for a disaggregated text portion may include and/or be based on structural metadata identification included in a text element of the disaggregated text portion. For instance, a portion of document text may include information such as “II.A.1” indicating that the text portion corresponds to the first subsection of Section A of Part II of the document. In some embodiments, an identifier for a disaggregated text portion may include and/or be based on a sequential arrangement of text within the document. For instance, text portions within a document may be associated with a sequential index. In some embodiments, an original identifier for a text portion may be assigned when text portions are originally processed. However, since a text portion may be subdivided as discussed with respect to operation1910shown inFIG.19, two or more text portions may be associated with the same original identifier. Accordingly, the large language model may determine updated identifiers to ensure that different text portions are assigned different identifiers.
In some embodiments, structure level information may identify an outline depth or other such structural metadata. For instance, a portion of document text corresponding to “II.A.1” may be identified as belonging to a third structure level. In some embodiments, reference number information may include one or more references to other portions of a document within a disaggregated text portion. For instance, subsection “II.A.1” of a document may include a text element that refers to subsection “II.B.3” of the document. Such a reference may be identified by analyzing the text and then recorded via a metadata reference from the disaggregated text portion to the referenced document portion. In some implementations, definitional information may include information defined in a text element of the disaggregated text portion, which may be relevant for interpreting other portions of the document. For instance, if the disaggregated text portion includes a text element stating that “a material breach of contract is one that causes damages in excess of $10,000”, then such information may be useful in interpreting another portion of the document that refers to “a material breach of contract”. Definitional information may be extracted by the large language model and placed in a format such as a markup language for use in further analysis of the document. A document structure response message is received from the large language model at 2016. In some embodiments, the document structure response message may include a version of the document structure prompt template that has been completed by the large language model. For instance, the document structure response message may include some or all of the identifiers, structure level information, definitional information, reference information, and/or other suitable information. An example of the type of document structure information that may be provided by the large language model is shown in the following text passage, which identifies information such as the level and the definition status for the clauses corresponding to the provided clause identifiers:<clause ids=“CC8”|v|=“1”><clause ids=“CC9”|v|=“2” is_definition=“Approval Achievement Date”><clause ids=“CC10”|v|=“3”><clause ids=“CC11 CC12”|v|=“3”> The document structure response message is parsed at2018to place the selected subset of disaggregated text portions in the tree representation. In some embodiments, parsing the document structure response message may involve extracting any or all of the disaggregated text portions as well as the information determined by the large language model. Such information may then be used to update the tree representation. For example, a data structure or markup language representation may be updated to include a portion that represents a disaggregated text portion including some or all of the information determined about the disaggregated text portion by the large language model. In particular embodiments, placing the selected subset of disaggregated text portions in the tree representation may involve specifying one or more parent-child relationships. For example, based on the previous example, clauses CC10, CC11, and CC12 are children of clause CC9, which is in turn a child of clause CC8. A determination is made at2020as to whether to select an additional subset of disaggregated text portions for analysis. 
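One possible way to parse a document structure response of the form shown above and place the clauses into the tree representation is sketched below. It assumes the response follows the <clause ids=... |v|=... is_definition=...> pattern exactly; the regular expression and the level-stack heuristic for inferring parent-child relationships are illustrative assumptions, not the required implementation.

import re

CLAUSE_RE = re.compile(
    r'<clause ids="(?P<ids>[^"]+)"\s*\|v\|="(?P<level>\d+)"'
    r'(?:\s*is_definition="(?P<term>[^"]*)")?\s*>')


def parse_structure_response(response: str, tree: dict) -> None:
    # Operation 2018: place each clause in the tree; a clause's parent is taken
    # to be the most recent clause seen at a smaller structure level.
    stack = []  # (level, clause_id) of currently open ancestors
    for match in CLAUSE_RE.finditer(response):
        level = int(match.group("level"))
        clause_ids = match.group("ids").split()
        while stack and stack[-1][0] >= level:
            stack.pop()
        parent = stack[-1][1] if stack else "root"
        for clause_id in clause_ids:
            tree["clauses"][clause_id] = {
                "level": level,
                "parent": parent,
                "defines": match.group("term"),
                "children": [],
            }
            container = tree["clauses"].get(parent) or tree["root"]
            container["children"].append(clause_id)
        stack.append((level, clause_ids[0]))

Applied to the example above, this places CC10, CC11, and CC12 as children of CC9, and CC9 as a child of CC8.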
According to various embodiments, additional subsets of disaggregated text portions may be selected until all available disaggregated text portions have been processed. Such processing may be completed in sequence or in parallel. If it is determined not to select an additional subset of disaggregated text portions for analysis, then at2022the tree representation is stored. In some embodiments, the tree representation may be stored in a database system, a file repository, or in any suitable format for information retrieval. Additional details regarding the application of the tree representation are discussed with respect to the method2100shown inFIG.21. FIG.21illustrates a method2100of determining structured document text, performed in accordance with one or more embodiments According to various embodiments, the method2000may be performed at a computing device such as one or more devices within the text generation interface system210shown inFIG.2. A request to determine structured document text for a document associated with a tree representation is received at2102. In some embodiments, the request may be received in the context of a process for determining novel text, such as an application for generating correspondence, answering a question, or evaluating a document for compliance with a policy. One or more tree representation text portions within the tree representation are identified for analysis at2104. In some embodiments, the one or more tree representation text portions may be identified based on textual relevance to a particular application. For instance, the applications mentioned in the prior paragraph include operations in which relevant text is identified. Text passages identified as relevant may be analyzed based on structural information to determine display text enhanced with structural information using the method2100. A tree representation text portion is selected from the identified tree representation text portions at2106. According to various embodiments, tree representation text portions may be selected in any suitable order. For example, tree representation text portions may be selected in sequence within a document. As another example, tree representation text portions may be selected in order of relevance. At2108, a text element included within the selected tree representation text portion is identified. In some embodiments, the text element may include the portion of the tree representation text portion that is from the input document. Such information may be stored directly in the tree representation or may be indexed there and retrieved from a different location. Structural information associated with the selected tree representation text portion is determined at2110. In some embodiments, the structural information may include, for instance, a structure level associated with the text portion. For example, a text portion may be identified as residing at “level 3” of a document. One or more parent or sibling tree representation text portions are identified for the selected tree representation text portion at2112. In some embodiments, the tree representation may store parent-child relationships. For instance, in the example above, contract clause CC9 was identified as a child of contract clause CC8. One or more parent text portions may be identified for presentation so as to provide appropriate contextual information derived from the text structure. Similarly, one or more sibling text portions may be identified in the event that such information is useful. 
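Operations 2110 and 2112 can be pictured with small helpers that walk the tree built in the earlier parsing sketch; the tree layout and function names are assumptions for illustration, and the same helpers could later support assembling the display text discussed below.

def ancestors(tree: dict, clause_id: str) -> list:
    # Walk parent links upward so a selected clause can be presented together
    # with the sections it sits under (operation 2112).
    chain = []
    node = tree["clauses"].get(clause_id)
    while node and node["parent"] != "root":
        chain.append(node["parent"])
        node = tree["clauses"].get(node["parent"])
    return list(reversed(chain))


def siblings(tree: dict, clause_id: str) -> list:
    node = tree["clauses"].get(clause_id)
    if not node:
        return []
    parent = tree["clauses"].get(node["parent"], tree["root"])
    return [child for child in parent["children"] if child != clause_id]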
Definitional information for the selected tree representation text portion is determined at2114. According to various embodiments, definitional information may indicate that a particular text portion is a definition. The definitional information may identify information such as the term being defined and the definition for the defined term. One or more tree representation references for the selected tree representation text portion is determined at2116. In some embodiments, a tree representation reference may include an identifier associated with a different tree representation portion (e.g., CC15) referenced by the focal tree representation text portion. Such references may be used to retrieve text for the referenced text portion or portions. Display text for the tree representation text portion is determined at2118. According to various embodiments, the display text may include some or all of the information determined and identified as discussed with respect to the operations2108through2116. An example of the display text determined in keeping with the examples provided above is as follows, with the text arrows being used to indicate structure levels and the ellipsis being used to indicate text that is not displayed:Exhibit 10.1. . .ARTICLE 1 DEFINITIONS1.1 “Approval Achievement Date” means the earlier of the:(i) date on which Acme receives marketing approval for a Development Product in one-half of the countries included in the Sublicensed Territory, as defined in the Sublicense Agreement; or(ii) the payment by Acme to BigCo of Development Fees hereunder of $1.0 million. In some embodiments, definitional and/or reference information may be used to augment the display text with text portions other than that selected. For example, if the following contract clause were identified as relevant, then Section 1.1 and one or more of its children may be displayed since the definition for “Approval Achievement Date” was used in this clause.<CC15>1.7 “Development Territory” means (i) until the Approval Achievement Date, the Sublicensed Territory, as defined in the Sublicense Agreement; and (ii) after the Approval Achievement Date, the Sublicensed Territory, as defined in the Sublicense Agreement, other than Poland.</CC15> In some embodiments, parent/child information may be used to augment the display text with text portions other than that selected. For example, if the section 2.1(ii) were identified as relevant, then Section 2.1 may be displayed also since it is a parent of 2.1(ii):<CC18>2.1 Subject to the terms and conditions of this Agreement, Acme hereby agrees to use its commercially reasonable efforts in good faith to take, or cause to be taken, all actions, and to do or cause to be done, all things necessary, proper or desirable or advisable under applicable laws to develop and commercialize the Development Products, with a goal of eventual approval of Development Products in the Development Territory. In exchange for the payment by Acme of the Development Fee to BigCo, BigCo hereby agrees to pay Acme the following payments: </CC18><CC19>(i) within thirty Business Days from the date of this Agreement, BigCo will make an upfront payment of $225,000 to Acme; and </CC19><CC20>(ii) within thirty days of the verified achievement of the Phase II Milestone, (such verification shall be conducted by an independent third party mutually acceptable to the parties hereto), BigCo will make a payment of $775,000 to Acme. 
</CC20> A determination is made at2120as to whether to select an additional tree representation text portion for analysis. According to various embodiments, tree representations may continue to be selected for analysis until a terminating condition is reached. For example, tree representations may continue to be selected until all tree representations identified as relevant have been selected. As another example, tree representations may continue to be selected until the amount of display text reaches a threshold, such as a maximum amount of text that can be included in a prompt. Upon determining that an additional tree representation text portion is not to be identified, then the display text is stored for analysis at2122. According to various embodiments, the display text may then be used in any of a variety of applications, examples of which are discussed throughout the application, for instance with respect toFIGS.8-18. According to various embodiments, the operations shown inFIG.21, or indeed in any method described herein, may be performed in an order different than that shown. For example, one or more operations may be performed in parallel. As another example, relevant tree representation text portions may first be collected by iteratively identifying (1) tree representation text portions initially deemed relevant, (2) tree representation text portions referenced by other tree representation text portions deemed relevant, (3) parent, child, or sibling nodes of tree representation text portions deemed relevant. Then, the display text may be determined for the tree representation text portions identified as relevant. In some embodiments, one or more of the operations shown inFIG.21may be omitted. For example, operation2114may be omitted for tree representations that do not correspond to definitions. Any of the disclosed implementations may be embodied in various types of hardware, software, firmware, computer readable media, and combinations thereof. For example, some techniques disclosed herein may be implemented, at least in part, by computer-readable media that include program instructions, state information, etc., for configuring a computing system to perform various services and operations described herein. Examples of program instructions include both machine code, such as produced by a compiler, and higher-level code that may be executed via an interpreter. Instructions may be embodied in any suitable language such as, for example, Java, Python, C++, C, HTML, any other markup language, JavaScript, ActiveX, VBScript, or Perl. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks and magnetic tape; optical media such as flash memory, compact disk (CD) or digital versatile disk (DVD); magneto-optical media; and other hardware devices such as read-only memory (“ROM”) devices and random-access memory (“RAM”) devices. A computer-readable medium may be any combination of such storage devices. In the foregoing specification, various techniques and mechanisms may have been described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. For example, a system uses a processor in a variety of contexts but can use multiple processors while remaining within the scope of the present disclosure unless otherwise noted. 
Similarly, various techniques and mechanisms may have been described as including a connection between two entities. However, a connection does not necessarily mean a direct, unimpeded connection, as a variety of other entities (e.g., bridges, controllers, gateways, etc.) may reside between the two entities. In the foregoing specification, reference was made in detail to specific embodiments including one or more of the best modes contemplated by the inventors. While various implementations have been described herein, it should be understood that they have been presented by way of example only, and not limitation. For example, some techniques and mechanisms are described herein in the context of large language models. However, the techniques of disclosed herein apply to a wide variety of language models. Particular embodiments may be implemented without some or all of the specific details described herein. In other instances, well known process operations have not been described in detail in order to avoid unnecessarily obscuring the disclosed techniques. Accordingly, the breadth and scope of the present application should not be limited by any of the implementations described herein, but should be defined only in accordance with the claims and their equivalents.
11861322
DETAILED DESCRIPTION It will be readily understood that the instant components, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of at least one of a method, apparatus, non-transitory computer readable medium and system, as represented in the attached figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments. The instant features, structures, or characteristics as described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of the phrases “example embodiments”, “some embodiments”, or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. Thus, appearances of the phrases “example embodiments”, “in some embodiments”, “in other embodiments”, or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In addition, while the term “message” may have been used in the description of embodiments, the application may be applied to many types of network data, such as packet, frame, datagram, etc. The term “message” also includes packet, frame, datagram, and any equivalents thereof. Furthermore, while certain types of messages and signaling may be depicted in exemplary embodiments, they are not limited to a certain type of message, and the application is not limited to a certain type of signaling. The example embodiments are directed towards a process of managing revisions of translations for IVR prompts using a grid structure (e.g., a table, spreadsheet, etc.) via a software program. A baseline translation may already be built for the IVR prompts. However, revisions may be made to the baseline translations over time. Rather than update the baseline translation at each revision interval, the example embodiments store revised translations of IVR prompts in a grid structure. When triggered, the software can identify an IVR prompt that has multiple revisions and carry forward only the most recent (i.e., the newest) revision into the baseline translation document. Here, the grid structure allows multiple rounds of revisions to the IVR prompt translations to be stored. Furthermore, the grid structure also provides storage of the baseline translations (from a previous interval). The software automates the filtering and the comparing process, thereby relieving a user from having to manually perform such activities. Furthermore, the software can also extract and store the revised translations from different documents into the grid structure. Furthermore, errors and omissions within the revised translations and the baseline translations may create incorrect content that can cause inconsistencies in the speech of the IVR application leading to caller frustration or confusion during playback/reading of the voice prompts. To address this, the example embodiments also provide software that performs an automated accuracy validation of the IVR prompt translations.
For example, the software can identify when an audio file from the IVR application is missing, identify when a translation has too many characters or not enough characters, identify when English duplicates are present but translated duplicates are not present (indicating the likelihood of a user error), and the like. By automatically performing an accuracy check, the software can ensure that errors do not occur in the IVR application when the revised translations are integrated into the application. IVR systems are examples of computer-telephone integration (CTI). For example, a phone may communicate with a computer through the tones that are generated by each key on a telephone keypad. The tones are referred to as dual-tone multi-frequency (DTMF) signals. A computer may use hardware referred to as a telephony board or telephony card to understand DTMF signals produced by a phone. An IVR system may include a computer hooked up to a phone line through a telephony board/card and IVR software. The IVR software allows for pre-recorded greetings that are spoken during the call and menu options that a caller can select. In some cases, the IVR system may also provide video. Furthermore, the IVR system may implement dynamic concatenation which records phrases spoken by a user during the call and plays them back to the caller (e.g., dates, times, monetary amounts, account numbers, etc.) An example use case of an IVR application is to route calls within an organization. Instead of relying on a receptionist to answer and route calls, the application can answer incoming calls and route the calls to a correct extensions within an organization. The application can present the user with a list of menu options and questions about the nature of the call. In some cases, the application may answer frequently asked questions. There are many different IVR applications, and the examples herein should not be construed as being limited to any particular type of IVR application. During generation of the prompt content of an IVR application, a user may generate a spreadsheet, document, or other file which includes a list of IVR prompts for the IVR application. The user may upload the file to the system described herein. For example, the system may be a software tool running on a computing system such as a server, a user device, or the like. The system may receive the file, run different checks on the list of prompts, modify prompts, and output a modified file that is in a format that can be played by the application. Furthermore, the IVR prompts may also receive a translation into another language. The first request to translate the IVR prompts within the IVR application is referred to as a baseline translation. Over time, the baseline translation may go through multiple/frequent rounds of revisions. With numerous translation revisions taking place, there is a need for a process that efficiently carries forward only the most current revised prompt translations into a master translation file. In the example embodiments, when a translation is revised, the most current translation is added to a temporary grid structure (e.g., a translation template) which is stored in memory. The temporary grid structure may have a two-dimensional format that includes columns representing different attributes of a revised IVR prompt (name, IVR prompt content, translation content, timestamp, etc.) and rows representing individual prompts by name. 
The software may compare and filter the revised translations in the temporary grid structure to remove any translations that have become outdated and carry forward only the most recent revised translation into the master translation file. Prior to integrating the master translation file into the IVR application, the master translation file may be verified/validated for accuracy. Translations often happen at different times and by different users. As a result, some translations may be duplicated or may be missing. Also, some translations may be generated but may be missing an identifier of a corresponding audio file name in the IVR application. All of these issues can be checked, and warnings/alerts can be output by the software when a problem is found. As a result, a user can cure any issues prior to the master translations being added to the IVR application. FIG. 1 illustrates a system 100 for updating a master translation of IVR prompts of a software application according to example embodiments. Referring to FIG. 1, the system 100 includes a host platform 120 which may host the master translation process described herein. For example, the master translation process may be a service, an application, or the like, installed and executing on the host platform 120, which may be a user device, a server, a cloud platform, a database, and the like. According to various embodiments, the host platform 120 may manage a master translation file of IVR prompts which are included in an IVR application 110. IVR data can be extracted from the IVR application 110 and stored in a template, as further described in the examples of FIGS. 2A-2F. For example, the IVR data may include audio file names 112 that represent the names of the different IVR prompts. In addition, the IVR data may include a baseline translation file 114 which includes translation data sets for IVR prompts within the IVR application 110. The baseline translation file 114 may include a tabular data structure or grid such as a spreadsheet, a table, a document, or the like. The grid may include different columns and rows. Different columns may be assigned to an IVR prompt name, an IVR prompt content, a translation of the IVR prompt into a different language, a timestamp, and the like. Meanwhile, the rows may represent the different IVR prompts. The host platform 120 may store a copy of the audio file names 112 and the baseline translation file 114. In addition, the host platform 120 may store and manage a temporary revision grid 128 which is a temporary storage structure (e.g., a table, a document, a spreadsheet, an array, etc.) that stores revised translations to be added to the baseline translation file 114. The temporary revision grid 128 may include some (or all) of the same columns and rows as the baseline translation file 114. Revised translation data sets may be generated by one or more users (e.g., via a user device). The revised translations may be stored in the temporary revision grid 128 and then deleted once they have been stored in the master translation file 130 and/or the IVR application 110. The master translation process may include a step 121 in which revised translation data sets are collected, for example, from various user accounts. The revised translation data sets may be stored in the temporary revision grid 128. The revised translation data sets may include a plurality of fields of data values including a prompt name, an IVR prompt content, a translation of the IVR prompt content, a timestamp, and the like.
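As a rough illustration of the temporary revision grid 128 and the collection step 121, the following sketch models each row with the four fields described above. The CSV input format, column names, and helper names are assumptions made for the example, not part of the disclosure.

import csv
from dataclasses import dataclass
from datetime import datetime


@dataclass
class TranslationRow:
    prompt_name: str      # e.g. "AddServiceMenu_1"
    english_text: str     # IVR prompt content
    translated_text: str  # translation into the target language
    timestamp: datetime   # when the revision was added/stored


def collect_revisions(paths):
    # Step 121: gather revised translation data sets from several revision
    # documents into one in-memory grid.
    grid = []
    for path in paths:
        with open(path, newline="", encoding="utf-8") as handle:
            for record in csv.DictReader(handle):
                grid.append(TranslationRow(
                    prompt_name=record["prompt_name"],
                    english_text=record["english_text"],
                    translated_text=record["translated_text"],
                    timestamp=datetime.fromisoformat(record["timestamp"]),
                ))
    return grid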
In step122, the revised translation data sets may be filtered to identify revised translation data sets that have identical prompt names. When two or more translation data sets have the same prompt name, they are directed to the same IVR prompt in the IVR application110. Here, duplicate prompt names can be identified, for example, by checking a box or adding a flag to the data set. When two or more revised translation data sets have been identified as having the same prompt name, in123, the host platform120may delete the oldest revised translation data set from among the two or more revised translation data sets. This allows the host platform to carry forward only the most recent revised translation data set while deleting any intervening translation data sets that are no longer the most up-to-date translations. In124an accuracy check can be performed to validate that the translations match the corresponding IVR prompt contents. The validated revised translation data set may be copied from the revision template128and stored in a master translation file130. Furthermore, the master translation file130may be integrated into the IVR application110, or otherwise stored in a repository in memory. FIG.2Aillustrates a process200A of deleting duplicate revised translation data sets of IVR prompts in accordance with example embodiments. Referring toFIG.2A, in201, a baseline translation of the IVR prompts included in an IVR application may be transferred into a grid structure, such as a temporary grid. The baseline translations may include initially generated translation data sets for IVR prompts within the IVR application. Each data set may include a prompt name, IVR prompt content, a translation of the IVR prompt content, a timestamp indicating when the translation was added, stored, etc., and the like. In202, one or more revised translation data sets may be transferred into the grid structure. Here, the revised translation data sets may be revisions to some or all of the initial translation data sets stored in the baseline translations. The host platform may use the baseline translation data sets to identify duplicate translation data sets in the revision template128. According to various embodiments, multiple revisions may occur to the baseline translation data set. Each revision may modify different translation data sets. However, in some cases, the same translation data set may be modified multiple times. In this case, the host platform may carry forward only the most recent revised translation data set from among the multiple revised translation data sets, before any of the revised translation data sets are incorporated into the IVR application. Here, in203, the host platform may identify duplicate revised translation data sets in the temporary revision grid. For example, the host platform may identify two or more revised translation data sets that have the same prompt name. In204, the host platform may identify which revised translation data set from among the two or more revised translation data sets that have the same prompt name has the oldest timestamp, and label the oldest revised translation data set for deletion. For example, a box may be checked, et. In205, the host platform may transfer only the most recent translation data sets into a master translation file while the oldest/duplicate revised translation data sets are not transferred. In206, the master translation file may be stored in memory, such as a document repository or the like. 
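Steps 203 through 206 might be sketched as follows, reusing the TranslationRow type from the previous example. Keeping only the newest row per prompt name is the carry-forward behaviour described above; the CSV output format for the master translation file is an assumption.

import csv


def carry_forward_latest(grid):
    # Steps 203-205: for rows sharing a prompt name, keep only the newest one;
    # older duplicates are effectively labeled for deletion and dropped.
    latest = {}
    for row in grid:
        existing = latest.get(row.prompt_name)
        if existing is None or row.timestamp > existing.timestamp:
            latest[row.prompt_name] = row
    return list(latest.values())


def write_master(rows, path):
    # Step 206: store the surviving rows as the master translation file.
    with open(path, "w", newline="", encoding="utf-8") as handle:
        writer = csv.writer(handle)
        writer.writerow(["prompt_name", "english_text", "translated_text", "timestamp"])
        for row in rows:
            writer.writerow([row.prompt_name, row.english_text,
                             row.translated_text, row.timestamp.isoformat()])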
FIG. 2B illustrates a temporary grid structure 200B which includes a plurality of revised translation data sets 220A, 220B, and 220C that have been collected and stored in the temporary grid structure. Each of the revised translation data sets includes different revisions to a baseline translation data set of an IVR application. The grid structure includes a field 211 for the name of an IVR prompt, a field 212 for the English text of the IVR prompt content, a field 213 for the translated text of the IVR prompt content in a different language, and a field 214 for a timestamp of when the translation was added/stored in the system. In some embodiments, the translation data structures may also include an export number which identifies the order in which the revised translation data sets were exported into the temporary revision grid. Here, the baseline translation data set is not shown. Referring to FIG. 2C, a process 200C is performed in which the host platform may identify duplicate revised translation data sets within the temporary grid structure, prior to the revised translation data sets being incorporated into the IVR application. For example, revised translation data sets 221A and 221C are duplicates because both share the same prompt name (AddServiceMenu_1). Likewise, revised translation data sets 222A and 222B are duplicates because both share the same prompt name (AddServiceMenu_2). Furthermore, revised translation data sets 223A, 223B, and 223C are each duplicates of the other two because all three share the same prompt name (AddServiceMenu_3). Here, the host platform may filter all of the collected revised translation data sets and compare prompt names to identify which revised translation data sets are duplicates. As shown in FIG. 2D, a process 200D is performed in which the host platform may sort and order the revised translation data sets based on prompt names during a first sort operation and label some translation data sets for deletion. In some embodiments, the host platform may perform a second sort based on order numbers during a second sort operation, just to ensure accuracy. Furthermore, for the revised translation data sets that have duplicate prompt names, the host platform may identify the oldest and label the oldest with a delete flag, tag, or the like. In this example, the host platform adds an identifier to a delete column 216 to identify revised translation data sets 221A, 222A, 223A, and 223B which should be deleted before the revised translation data sets are incorporated into the IVR application. The revised translation data sets stored in the temporary translation grid may be added to a master translation file 130 as shown in FIG. 2E. In this example, the revised translation data sets labeled with the delete identifier (e.g., data sets 221A, 222A, 223A, and 223B in FIG. 2D) may not be carried over into the master translation file 130. Instead, only the remaining (non-deleted) translation data sets may be transferred and pasted into the master translation file. In some embodiments, the transfer may extract the remaining revised translation data sets from the temporary translation grid and auto-populate the remaining revised translation data sets into the master translation file. Here, the columns may be aligned between the master translation file 130 and the temporary translation grid. FIG. 2F illustrates a process 200F of integrating revised translation data sets into audio files within an IVR application 230.
Here, a master translation process 242 (hosted by a host platform 240) may extract translation data sets from the master translation data file 130, identify which audio file names correspond to the revised translation data sets, and store the content of the IVR prompt, the translation, and the timestamp in the IVR application 230. Here, the master translation process 242 may map revised translation data sets to audio file names based on prompt names within the revised translation data sets. For example, a revised translation data set having a prompt name that is identical to an audio file name may be mapped to that audio file name, and IVR data from the revised translation data set may be transferred and stored into the corresponding slot of the IVR application 230. FIG. 3A illustrates a process 300A of performing an accuracy verification on revised translation data sets prior to the revised translation data sets being integrated into an IVR application, according to example embodiments. Referring to FIG. 3A, in 301, the host platform may transfer revised translation data sets from a master translation file into a revision grid. The revision grid may be a temporary data structure which includes columns representing a prompt name, IVR prompt content, a translation of the IVR prompt content, a timestamp, and the like. In 302, the host platform may compare the prompt names of the revised translation data sets that are stored in the revision grid to audio file names that are downloaded from the IVR application. If an IVR prompt name is not included in the audio file names, the host platform can detect that an audio file name is missing, for example, as a result of human error, or the like. If any audio file names are missing, an alert may be generated which identifies the missing audio file name via a user interface. In 303, the host platform may identify revised translation data sets that have duplicated IVR prompt content. Here, the host platform may first identify revised translation data sets that have identical original IVR prompt content in a first language (e.g., English). Next, the host platform may identify revised translation data sets that have identical translated prompt content (e.g., Spanish, French, Russian, etc.). In 304, the host platform may identify whether the translation data sets with matching IVR prompt content also include matching translated prompt content. If the IVR prompt content is the same between two translation data sets but the translated IVR prompt content is different, this indicates a potential user error or some other mistake may have occurred. Here, the host platform may generate an alert, for example, by adding a flag or an alert to a visualization of the revision grid. Next, in 305, the host platform may perform a word count of each IVR prompt content and its corresponding translated prompt content, for each of the revised translation data sets. For any revised translation data sets where the word counts differ by a predetermined amount, the host platform may add another flag or alert to the revision grid. This identifies that a user should manually review these translation data sets for accuracy. FIG. 3B illustrates a verification process 300B that may be performed when the master translation file 130 is to be incorporated into an IVR application. Here, a verification program 322 hosted by a host platform 320 may verify that the revised translation data sets in the master translation file 130 have corresponding audio file names in the baseline translation 310.
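The checks of operations 302 through 305 described above might be sketched as follows, again reusing the TranslationRow type from the earlier example. The 40% threshold mirrors the example discussed below, and the choice of character-length comparison (rather than word counts) and the alert wording are assumptions made for illustration.

def validate(rows, audio_file_names):
    # Return human-readable alerts rather than raising, so a user can review
    # them in the revision grid.
    alerts = []
    for row in rows:                                   # 302: missing audio file names
        if row.prompt_name not in audio_file_names:
            alerts.append(row.prompt_name + ": audio file name not found")
    by_english = {}                                    # 303/304: duplicate English text
    for row in rows:                                   # with differing translations
        by_english.setdefault(row.english_text, set()).add(row.translated_text)
    for english, translations in by_english.items():
        if len(translations) > 1:
            alerts.append("Duplicate prompt content has differing translations: " + english)
    for row in rows:                                   # 305: length comparison
        source, target = len(row.english_text), len(row.translated_text)
        if source and abs(target - source) / source > 0.40:
            alerts.append(row.prompt_name + ": translation length differs by more than 40%")
    return alerts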
If an audio file name is missing from the baseline translation 310, the audio file name may be identified and labeled via a user interface to enable a user to manually adjust the baseline translation 310 to include the audio file name. In this example, the verification program 322 identifies that audio file names 311, 312, 313, and 315 have matching prompt names in the master translation file 130. However, the verification program 322 also identifies that a prompt name 316 in the master translation file 130 is missing a corresponding audio file name (not found) in the baseline translation file 310. Thus, an alert can be output to a user interface 330 which may include the revision grid. FIG. 3C illustrates an example of a revision grid 300C which includes a plurality of revised translation data sets to be added to an IVR application according to example embodiments. In this example, the host platform performs a duplication check to identify which translation data sets include duplicate prompt content and which translation data sets include duplicate translated prompt content. When a revised translation data set has a duplicate English version of the IVR prompt content, it should also have a duplicate translated version. However, in many cases, the two are not duplicates because they are created by different users at different times (and refer to different prompt names). In this case, the host platform may detect when a revised translation data set 342 has duplicate English prompt content but different translated prompt content than another revised translation data set 341. Here, the host platform may output an alert inside of a column 352 in the revision grid 300C to indicate that a user should manually view the translation content to ensure it is correct. In addition, the host platform may perform a word count of the IVR prompt content and the translated prompt content for each revised translation data set. When the word count between the English version of the prompt content and the translated version of the prompt content differs by a predetermined amount (e.g., 40% bigger or smaller), the host platform may output an alert such as shown in revised translation data set 345, indicating that a user should manually check the prompt content and the translated prompt content to ensure it is accurate. In revised translation data set 345, the translated prompt content is much longer than the original IVR prompt content (in English). Once the user has reviewed the alerts shown in FIG. 3C, the revision grid may be integrated into the baseline translation file of the IVR application, for example, via a button on the user interface, etc. FIG. 4A illustrates a method 410 of reducing revised translations of IVR prompts according to example embodiments. For example, the method 410 may be performed by a computing device such as a desktop computer, a server, a database, a cloud platform, and the like. Referring to FIG. 4A, in 411, the method may include transferring a copy of a plurality of revised translation data sets to be added to the software application into a grid structure, each revised translation data set comprising a prompt name in a first field, an interactive voice response (IVR) prompt in a second field, a translation of the IVR prompt into a different language in a third field, and a timestamp in a fourth field. Other fields may be included such as export number fields (e.g., the identification of the IVR prompt in the IVR application, etc.), duplicate identifier fields, delete identification fields, and the like.
The grid structure may be a two-dimensional grid structure such as a table, a spreadsheet, an array, or the like. In412, the method may include identifying two revised translation data sets in the grid structure that comprise a duplicate prompt name in first fields thereof. Here, the IVR prompt name may be identical in each of the two revised translation data sets, while the IVR prompt content and/or the IVR translation may be the same or may be different. In either case, an extra data set is identified. In some embodiments, more than two duplicates are identified (e.g., three or more, etc.) In413, the method may include deleting an oldest revised translation data set among the two identified translations data sets from the grid structure. Here, the system may compare the timestamps of the two revised translation data sets to identify which data set is older than the other. The oldest data set can be deleted even though it is yet to be added to the IVR application because it is outdated and has been replaced by the more current revised translation. The deletion process may include deleting the row corresponding to the oldest translation data set from the grid structure and moving the rows underneath the deleted row up by one. In414, the method may further include storing the grid structure without the deleted oldest revised translation data set in a repository. In some embodiments, the grid structure comprises a two-dimensional spreadsheet which includes a plurality of rows assigned to the plurality of revised translation data sets, respectively, and a plurality of columns assigned to the first, second, third, and fourth fields. In some embodiments, the grid structure further comprises an extra column with identifiers of the plurality of translation data sets and an extra row with names of the first, second, third, and fourth fields. In some embodiments, the transferring may include copying the plurality of translation data sets from a plurality of revision documents, and auto-populating the plurality of copied translation data sets into the grid structure. In some embodiments, the identifying may further include adding a duplicate flag to a fifth field of the oldest translation data set indicating that the oldest translation data set is to be deleted. In some embodiments, the method may further include transferring a copy of a baseline translation data set currently present within a software application into the grid structure, where each baseline translation data set comprises a prompt name in a first field, an IVR prompt in a second field, a translation of the IVR prompt into a different language in a third field, and a timestamp in a fourth field. In some embodiments, the method may further include matching revised translation data sets in the grid structure with corresponding baseline translation data sets in the grid structure, replacing the matched baseline translation data sets with the updated revised translation data sets in the grid structure to generate an updated translation data set, and storing the updated translation data set in a master translation grid. In some embodiments, the method may further include integrating the updated translation data set stored in the master translation grid into the software application. FIG.4Billustrates a method420of validating an accuracy of IVR prompt translations according to example embodiments. 
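Before turning to FIG. 4B, the baseline matching and replacement step mentioned above might be sketched as follows, reusing the TranslationRow type. Appending revised rows that had no baseline counterpart is an assumption about how new prompts would be handled.

def update_baseline(baseline, revised):
    # Match revised rows to baseline rows by prompt name, replace the matched
    # baseline rows, and append revised rows that had no baseline counterpart.
    revised_by_name = {row.prompt_name: row for row in revised}
    updated = [revised_by_name.pop(row.prompt_name, row) for row in baseline]
    updated.extend(revised_by_name.values())
    return updated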
For example, the method 420 may be performed by a computing device such as a desktop computer, a server, a database, a cloud platform, and the like. In some cases, the method 420 may be performed in sequence with the method 410 described in FIG. 4A. As another example, the methods 410 and 420 may be performed simultaneously with parts/steps of each interleaved with the other. In some embodiments, both methods may be performed by the same system/software or they may be performed by different systems/software. Referring to FIG. 4B, in 421, the method may include transferring a copy of a plurality of revised translation data sets to be added to an interactive voice response (IVR) application into a grid structure, each revised translation data set comprising a prompt name in a first field, an IVR prompt in a second field, a translation of the IVR prompt into a different language in a third field, and a timestamp in a fourth field. Here, the grid structure may be a temporary storage structure such as a table, a spreadsheet, a document, an array, or the like. In 422, the method may include executing, via a processor, an accuracy validation on the plurality of revised translation data sets, wherein, for each revised translation data set, the processor identifies whether a respective translation in a different language in a third field is an accurate translation of a respective IVR prompt in a second field based on attributes of the respective translation and the respective IVR prompt, and in 423, the method may include displaying results of the accuracy validation via a user interface. In some embodiments, the grid structure may include a two-dimensional spreadsheet which includes a plurality of rows assigned to the plurality of revised translation data sets, respectively, and a plurality of columns assigned to the first, second, third, and fourth fields, respectively. In some embodiments, the executing may include performing a character count of a respective IVR prompt in a second field and a respective translation of the IVR prompt in a third field of a revised translation data set, and determining whether the respective translation is an accurate translation based on the character counts. In this example, the executing may include identifying that the respective translation of the IVR prompt is an inaccurate translation in response to the character count of the respective translation being greater than the character count of the IVR prompt by a predetermined threshold, and displaying an alert in association with the revised translation data set via the user interface. In some embodiments, the executing may include identifying two revised translation data sets that comprise identical IVR prompts in second fields thereof, and different translations in third fields thereof, respectively, and displaying an alert in association with at least one of the two revised translation data sets via the user interface. In some embodiments, the method may further include extracting a plurality of audio file names from the IVR application and transferring the plurality of audio file names from the IVR application into the grid structure. In this example, the method may further include matching the plurality of audio file names from the IVR application to corresponding translation data sets from the plurality of translation data sets based on prompt names of the corresponding translation data sets.
In addition, the method may further include identifying a revised translation data set that is not matched to any of the plurality of audio file names, determining that an audio file name of the identified revised translation is not found, and displaying an alert identifying that an audio file is not found via the user interface. The above embodiments may be implemented in hardware, in a computer program executed by a processor, in firmware, or in a combination of the above. A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (“RAM”), flash memory, read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), registers, hard disk, a removable disk, a compact disk read-only memory (“CD-ROM”), or any other form of storage medium known in the art. An exemplary storage medium may be coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (“ASIC”). In the alternative, the processor and the storage medium may reside as discrete components. For example,FIG.5illustrates an example computer system architecture500, which may represent or be integrated in any of the above-described components, etc. FIG.5is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the application described herein. Regardless, the computing node500is capable of being implemented and/or performing any of the functionality set forth hereinabove. For example, the computing node500may be a network server of a larger enterprise network that connects multiple user workstations to the Internet, a private network, or the like. In computing node500there is a computer system/server502, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server502include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server502may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server502may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. 
As shown inFIG.5, computer system/server502in cloud computing node500is shown in the form of a general-purpose computing device. The components of computer system/server502may include, but are not limited to, one or more processors or processing units (processor)504, a system memory506, and a bus that couples various system components including the system memory506to the processor504. The bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server502typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server502, and it includes both volatile and non-volatile media, removable and non-removable media. System memory506, in one embodiment, implements the flow diagrams of the other figures. The system memory506can include computer system readable media in the form of volatile memory, such as random-access memory (RAM)510and/or cache memory512. Computer system/server502may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system514can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus by one or more data media interfaces. As will be further depicted and described below, memory506may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments of the application. Program/utility516, having a set (at least one) of program modules518, may be stored in memory506by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules518generally carry out the functions and/or methodologies of various embodiments of the application as described herein. As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or computer program product. Accordingly, aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) 
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Computer system/server502may also communicate with one or more external devices520such as a keyboard, a pointing device, a display522, etc.; one or more devices that enable a user to interact with computer system/server502; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server502to communicate with one or more other computing devices. Such communication can occur via I/O interfaces524(which may be referred to herein as an output and/or an input). Still yet, computer system/server502can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter526. As depicted, network adapter526communicates with the other components of computer system/server502via a bus. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server502. Examples, include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. Although an exemplary embodiment of at least one of a system, method, and non-transitory computer readable medium has been illustrated in the accompanied drawings and described in the foregoing detailed description, it will be understood that the application is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions as set forth and defined by the following claims. For example, the capabilities of the system of the various figures can be performed by one or more of the modules or components described herein or in a distributed architecture and may include a transmitter, receiver or pair of both. For example, all or part of the functionality performed by the individual modules, may be performed by one or more of these modules. Further, the functionality described herein may be performed at various times and in relation to various events, internal or external to the modules or components. Also, the information sent between various modules can be sent between the modules via at least one of: a data network, the Internet, a voice network, an Internet Protocol network, a wireless device, a wired device and/or via plurality of protocols. Also, the messages sent or received by any of the modules may be sent or received directly and/or via one or more of the other modules. One skilled in the art will appreciate that a “system” could be embodied as a personal computer, a server, a console, a personal digital assistant (PDA), a cell phone, a tablet computing device, a smartphone or any other suitable computing device, or combination of devices. Presenting the above-described functions as being performed by a “system” is not intended to limit the scope of the present application in any way but is intended to provide one example of many embodiments. Indeed, methods, systems and apparatuses disclosed herein may be implemented in localized and distributed forms consistent with computing technology. 
It should be noted that some of the system features described in this specification have been presented as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom very large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, graphics processing units, or the like. A module may also be at least partially implemented in software for execution by various types of processors. An identified unit of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. Further, modules may be stored on a computer-readable medium, which may be, for instance, a hard disk drive, flash device, random access memory (RAM), tape, or any other such medium used to store data. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. It will be readily understood that the components of the application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments of the application. One having ordinary skill in the art will readily understand that the above may be practiced with steps in a different order, and/or with hardware elements in configurations that are different than those which are disclosed. Therefore, although the application has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent. While preferred embodiments of the present application have been described, it is to be understood that the embodiments described are illustrative only and the scope of the application is to be defined solely by the appended claims when considered with a full range of equivalents and modifications (e.g., protocols, hardware devices, software platforms etc.) thereto.
44,750
11861323
Common reference numerals are used throughout the figures to indicate similar features. DETAILED DESCRIPTION Embodiments of the present invention are described below by way of example only. These examples represent the best ways of putting the invention into practice that are currently known to the Applicant although they are not the only ways in which this could be achieved. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples. As described above, normalisation is usually performed by passing the input number, a, through a leading zero counter (LZC)102and then left shifting the input number a (in a left shifter104) by the number, s, output by the LZC102, as shown inFIG.1. The normalised output is denoted r. Both values r and s are output by the normalisation operation. This normalisation process can be a relatively slow operation. In some applications, the normalisation operation may be referred to as a ‘renormalisation’ (e.g. within a floating point unit, following denormalisation). For the purposes of the following description the terms ‘normalisation’ and ‘renormalisation’ are considered to be equivalent and interchangeable and the methods and hardware logic described herein may be used in either normalisation or renormalisation. Improved hardware logic for performing normalisation is described below in which the left shifting operation starts before the completion of the leading zero count such that at least a part of the left shifting operation is performed in parallel with the leading zero count operation. Two examples201,202are shown inFIG.2. In the first example201, the leading zero count, which is performed by LZC204, is performed in parallel with the left shifting, which is performed by the renormaliser block206. As shown inFIG.2, in this example201, the two operations, leading zero count and left shifting, are performed independently of each other to generate the two outputs r and s. In the second example202, the left shifting operation (in left shifter208) starts after an output has been received from the LZC204but before the leading zero count operation has been completed. In this example, a subset of the most significant bits (MSBs) of the LZC (i.e. the MSBs of s) are provided to the left shifter208and then the left shifting is completed by the renormaliser block210. The term ‘subset’ is used herein to refer to a proper subset, such that a subset of the MSBs of s does not comprise all the bits of s. The hardware logic which performs the left shifting operation may be referred to as “left shifting logic”. In the first example201this comprises the renormaliser block206and in the second example this comprises the left shifter208and the renormaliser block210. The MSBs of the LZC output can be computed more quickly and easily than the least significant bits (LSBs). This means that in the second example202, the MSBs can be received quickly by the left shifter208and normalisation can be started before the LSBs have been computed in the LZC. By performing at least a proportion of the leading zero count in parallel with the left shifting as described herein the hardware logic operates faster, although it may result in a larger area of hardware logic. 
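For illustration only, the behaviour of the conventional normalisation operation of FIG. 1 (a leading zero count followed by a left shift) can be expressed as a short software reference model. The following Python sketch is purely behavioural and makes no assumptions about the hardware structure; the function name and the convention that an all-zero input yields s=n are choices made for the example, and the model is used below only as a reference against which the parallel and hybrid arrangements can be checked.

```python
def normalise_reference(a: int, n: int) -> tuple[int, int]:
    """Behavioural model of FIG. 1: count the leading zeros of an n-bit input
    'a', then left shift by that count.  Returns (r, s).

    For a == 0 the count s equals n and the shifted result r is 0.
    """
    mask = (1 << n) - 1
    a &= mask
    s = 0
    for bit in range(n - 1, -1, -1):   # scan from the MSB downwards
        if a & (1 << bit):
            break
        s += 1
    r = (a << s) & mask                # normalised output: leading 1 at bit n-1 (if a != 0)
    return r, s

# Example: n = 8, a = 0b00010110 has three leading zeros,
# so s = 3 and r = 0b10110000.
assert normalise_reference(0b00010110, 8) == (0b10110000, 3)
```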
By selecting the degree of overlap, which may be defined in terms of the number of bits, h, from the LZC that are input to the left shifter 208, the design may be arranged to satisfy particular speed and area constraints. At one extreme, as shown in the first example 201 in FIG. 2, h=0 and at the other extreme, where all but one of the bits from the output, s, of the LZC are used, $h=\lfloor\log_2 n\rfloor$, where n is the number of bits in the input number, a. The term h may be referred to as the hybridity and, as detailed, $h\in[0,\alpha-1]$, where $\alpha=\lfloor\log_2 n\rfloor+1$. The value of h is a natural number (where natural numbers are considered herein to include zero). The first example 201 shown in FIG. 2 may be referred to as a fully parallel implementation (h=0) and this is described in detail first. The hybrid implementation, as shown in the second example 202 ($1\le h<\lfloor\log_2 n\rfloor+1$), is described in detail subsequently and this description refers back to the description of the fully parallel implementation, as the LZC 204 may operate in the same way in both the fully parallel and hybrid implementations and, similarly, the renormaliser blocks 206, 210 may operate in a similar way (e.g. the operations of the renormaliser block 210 in the hybrid implementation may be a subset of the operations of the renormaliser block 206 in the fully parallel implementation). The LZC 204 in the fully parallel implementation 201 may use any suitable method to compute the output, s, which is the number of leading zeros in the input number, a. In various examples, the LZC 204 may be implemented based on the following equation for calculating the bits, $s_i$, of the output, s, of the LZC 204:

$$s_i=\sum_{k=1}^{2^{\alpha-i-1}} B_{n-1\,:\,n-(2k-1)2^i}\;\bar{B}_{n-(2k-1)2^i-1\,:\,n-k2^{i+1}}\qquad(1)$$

Where:
i is the bit index for the output s and $i\in[0,\alpha-1]$
$\sum$ stands for a sum of OR gates
$a_m$ are the bits of the input number a, where m is the bit index and $m\in[0,n-1]$
$B_{\beta:\gamma}=\bar{a}_\beta.\bar{a}_{\beta-1}.\ldots.\bar{a}_{\gamma+1}.\bar{a}_\gamma$ (where . represents the AND operation)
$\bar{B}_{\beta:\gamma}=a_\beta+a_{\beta-1}+\ldots+a_{\gamma+1}+a_\gamma$ (where + represents the OR operation)
and if either (or both) of β or γ are negative: $B_{\beta:\gamma}=0$ and $\bar{B}_{\beta:\gamma}=1$.

Considering an example where n=8, h=0, this gives α=4 and so the 4 bits calculated by the LZC are as follows:

$s_3=B_{7:0}$
$s_2=B_{7:4}\,\bar{B}_{3:0}$
$s_1=B_{7:6}\,\bar{B}_{5:4}+B_{7:2}\,\bar{B}_{1:0}$
$s_0=B_{7:7}\,\bar{B}_{6:6}+B_{7:5}\,\bar{B}_{4:4}+B_{7:3}\,\bar{B}_{2:2}+B_{7:1}\,\bar{B}_{0:0}$

Expanding these out:

$s_3=\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4.\bar{a}_3.\bar{a}_2.\bar{a}_1.\bar{a}_0$
$s_2=\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4.(a_3+a_2+a_1+a_0)$
$s_1=\bar{a}_7.\bar{a}_6.(a_5+a_4)+\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4.\bar{a}_3.\bar{a}_2.(a_1+a_0)$
$s_0=\bar{a}_7.a_6+\bar{a}_7.\bar{a}_6.\bar{a}_5.a_4+\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4.\bar{a}_3.a_2+\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4.\bar{a}_3.\bar{a}_2.\bar{a}_1.a_0=((\bar{a}_7.a_6)+(\bar{a}_7.\bar{a}_6).(\bar{a}_5.a_4))+(\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4).((\bar{a}_3.a_2)+(\bar{a}_3.\bar{a}_2).(\bar{a}_1.a_0))$

And so this example LZC may be implemented in the arrangements of hardware logic gates 301-304 shown in FIGS. 3 and 4. The first arrangement 301 shows an example arrangement of hardware logic gates to compute the value of $s_3$ from the 8-bit input number, a. As shown in FIG. 3, $s_3$ is calculated using three stages 305-307 of AND gates 308 and a plurality of NOT gates 310. In each stage, pairs of values from the previous stage are combined together using an AND gate 308. The NOT gates 310 are used to invert the original input bits, $a_m$. The second arrangement 302 shows an example arrangement of hardware logic gates to compute the value of $s_2$. As shown in FIG. 3, $s_2$ is calculated using three stages 311-313 of AND gates 308 and OR gates 314 and a plurality of NOT gates 310. In each stage, pairs of values from the previous stage are combined together using an AND gate 308 or an OR gate 314. In this example, only a subset of the input bits are inverted using NOT gates 310. The third arrangement 303 shows an example arrangement of hardware logic gates to compute the value of $s_1$.
As shown in FIG. 4, $s_1$ is calculated using three stages 316-318 of AND gates 308 and OR gates 314 and a plurality of NOT gates 310. In the first two stages 316-317, pairs of values from the previous stage are combined together using an AND gate 308 or an OR gate 314 and, unlike in the previous two arrangements, in this arrangement input bits may be used more than once. For example, in the first stage 316 input bits $a_5$ and $a_4$ are used twice, to generate both $a_5+a_4$ and $\bar{a}_5.\bar{a}_4$; however, as $a_5+a_4$ and $\bar{a}_5.\bar{a}_4$ are the logical negation of each other, it is not necessary to use an OR gate to generate the first term and then two NOT gates (which may also be referred to as ‘negators’) and an AND gate to generate the second term. Swapping AND or OR gates for NOT gates in the hardware logic saves space (as NOT gates are smaller in size). In the second stage 317, an output from the first stage ($\bar{a}_7.\bar{a}_6$) is used twice, to generate both $\bar{a}_7.\bar{a}_6.(a_5+a_4)$ and $\bar{a}_7.\bar{a}_6.\bar{a}_5.\bar{a}_4$. The final stage 318 is an AND-OR logic function 320 (which may be written AO21) and takes three inputs, combines two in an AND gate before combining the output of the AND gate and the third input in an OR gate. The fourth arrangement 304 shows an example arrangement of hardware logic gates to compute the value of $s_0$. As shown in FIG. 4, $s_0$ is calculated using three stages 321-323 of AND gates 308 and OR gates 314 and a plurality of NOT gates 310. In the first stage 321, pairs of input bits are combined together using AND gates 308 and a plurality of NOT gates 310. Like in the third arrangement, input bits may be used more than once. In this example, both the second and third stages 322-323 involve use of AO21 logic functions 320. Although the four arrangements 301-304 are shown totally separately in FIGS. 3 and 4, it will be appreciated that they may be combined or overlaid such that, for example, the value $\bar{a}_7.\bar{a}_6$ is only calculated once and then used in each of the calculations of $s_i$, rather than this value being calculated independently many times within the hardware logic. The renormaliser block 206 in the fully parallel implementation calculates the normalised output, r, without any input from the LZC 204, as shown in FIG. 2. In various examples, the renormaliser block 206 may be implemented based on the following equation for calculating the bits, $r_j$, of the output, r, of the renormaliser block 206:

$$r_j=A_{n-1,j}+\sum_{k=1}^{j} B_{n-1:n-k}\,A_{n-k-1,j-k}\qquad(2)$$

Where:
j is the bit index for the normalised output, r, and $j\in[0,n-1]$
$A_{\beta,\gamma}=a_\beta.a_\gamma$

Considering the same example as previously where n=8, h=0, the 8 bits calculated by the renormaliser block are as follows:

$r_0=A_{7,0}$
$r_1=A_{7,1}+B_{7:7}A_{6,0}$
$r_2=A_{7,2}+B_{7:7}A_{6,1}+B_{7:6}A_{5,0}$
$r_3=A_{7,3}+B_{7:7}A_{6,2}+B_{7:6}A_{5,1}+B_{7:5}A_{4,0}$
$r_4=A_{7,4}+B_{7:7}A_{6,3}+B_{7:6}A_{5,2}+B_{7:5}A_{4,1}+B_{7:4}A_{3,0}$
$r_5=A_{7,5}+B_{7:7}A_{6,4}+B_{7:6}A_{5,3}+B_{7:5}A_{4,2}+B_{7:4}A_{3,1}+B_{7:3}A_{2,0}$
$r_6=A_{7,6}+B_{7:7}A_{6,5}+B_{7:6}A_{5,4}+B_{7:5}A_{4,3}+B_{7:4}A_{3,2}+B_{7:3}A_{2,1}+B_{7:2}A_{1,0}$
$r_7=\bar{B}_{7:0}$

And these may be expanded and implemented in arrangements of OR, AND, NOT and/or AO21 logic functions in a similar manner to those described above with reference to FIGS. 3 and 4. In calculating the values of $r_j$, again pairs of input bits are combined in the first stage and then groups of 2 or 3 outputs of each stage are combined in subsequent stages (e.g. where the AO21 logic function is used to combine 3 outputs). To simplify the implementation of equation (2) above, this may be re-written in the form of a recursion relation:

$$r_i^{j:k}=r_i^{j:t}+B_{j:t}\,r_i^{t-1:k}\qquad(3)$$

where: i, j, k, t are indices which each have a value in the range 0 to n−1, $j\ge k$, and $k+1\le t\le j$ (such that $1\le t\le n-1$), $r_i^{j:k}$ is the ith output bit of the renormaliser and $B_{j:t}$ is as before, true only if $a_j,\ldots,a_t=0$. The indices used in equations (3) to (6) are not necessarily the same as the indices used previously (e.g. indices i and j are used earlier); however, it will be clear that where a reference is made back to one of equations (3) to (6), the indices being referred to are those used in the equations. The value of t therefore divides a[j:k] into the two parts which may be denoted ‘high’ (for a[j:t]) and ‘low’ (for a[t−1:k]) such that equation (3) can be rewritten as:

$$r_i^{\mathrm{high\,\&\,low}}=r_i^{\mathrm{high}}+B^{\mathrm{high}}\,r_i^{\mathrm{low}}\qquad(4)$$

Where the function $B^{\mathrm{high}}$ is equal to one only if there are no 1s in the high part. Although the value of t may be selected arbitrarily whilst satisfying $k+1\le t\le j$, if t is selected to split a[j:k] into equal portions, the number of recursion steps is minimised. Equation (3) is written in ‘sum of product’ form and the recursion relation may alternatively be written in ‘product of sum’ form as:

$$r_i^{j:k}=(r_i^{j:t}+B_{j:t})(r_i^{j:t}+r_i^{t-1:k})\qquad(5)$$

Starting from $r_i^{j:j}=A_{j,j-n+1+i}$, $r_i^{n-1:0}$ can be constructed in hardware logic in $\lfloor\log_2(i+1)\rfloor$ steps using the recursion relation (of equation (3) or (5)) to form $r_i^{j:k}$ for larger and larger intervals of [j:k], and $B_{j:k}$ can be constructed logarithmically using an AND tree. An example of this for the previously described example where n=8, i.e. for $r_i^{7:0}$, is shown in FIG. 5 and this uses the recursion shown in equation (3) above. As can be seen from FIG. 5, the value of $r_i^{7:0}$ can be calculated using a number of stages of hardware logic formed from AND gates 308, NOT gates 310 and AO21 logic functions 320. At the first stage, pairs of input bits are combined using AND gates and in subsequent stages, two or three input bits and/or outputs from a previous stage are combined using an AND gate (for 2 bits) or an AO21 logic function (for 3 bits). NOT gates are also used to invert values as appropriate, i.e. to generate $\bar{a}_i$ from $a_i$. Not all of the logic arrangement shown in FIG. 5 is required for calculating all values of $r_i^{7:0}$ since for some values of i and x (where x can have a value between 1 and 7 in this example), the value of i−x may be negative, in which case the corresponding input bit $a_{i-x}$ can be replaced by 0 and so the corresponding parts of the logic tree can be omitted. For example, for i=6, $a_{i-7}$ can be replaced by 0 and so the hardware logic which calculates $r_i^{1:0}$ can be simplified to comprising a single AND gate which calculates $a_1.a_{i-6}=a_1.a_0$. For smaller values of i, more of the logic arrangement is omitted, such that for i=0 and i=1 the logic arrangements are as shown in FIG. 6, with the first arrangement 601 corresponding to i=0 and the second arrangement 602 corresponding to i=1. It can be seen that the hardware logic is considerably smaller than that shown in FIG. 5. The logic arrangement for $r_i^{7:0}$ shown in FIG. 5 uses the recursion shown in equation (3) above. In other examples, the recursion shown in equation (5) may be used, which results in the logic arrangement for $r_i^{7:0}$ shown in FIG. 7. As can be seen from FIG. 7, the value of $r_i^{7:0}$ can again be calculated using a number of stages of hardware logic formed from AND gates 308, OR gates 314, NOT gates 310 and OA22 logic functions 720. OA22 logic functions 720 combine two pairs of inputs using OR gates and then combine the outputs of the OR gates using an AND gate. It can be seen from FIG. 7 that the input $r_i^{j:t}$ branches to provide an input to both the OR gates in the OA22 logic function 720.
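For illustration only, the fully parallel arrangement described above can be modelled in software for the n=8 example: the LZC bits follow the expansions of equation (1) and the renormaliser bits are built with the recursion relation of equation (3), splitting each interval at its midpoint. This Python sketch assumes the reading of B set out above (true only if the indexed bits are all zero) and the base case $r_i^{j:j}$; the helper names are illustrative, and the exhaustive check against a simple shift-based reference is included only to show that the equations reproduce the normalisation behaviour, not to suggest a hardware structure.

```python
def bit(a, m):
    # a_m, with a_m taken as 0 for negative m
    return (a >> m) & 1 if m >= 0 else 0

def B(a, hi, lo):
    # B_{hi:lo}: 1 only if a_hi ... a_lo are all zero
    return int(all(bit(a, m) == 0 for m in range(lo, hi + 1)))

def lzc_bits_n8(a):
    # Expansions of equation (1) for n = 8; nB is the complement of B (OR of the bits).
    nB = lambda hi, lo: 1 - B(a, hi, lo)
    s3 = B(a, 7, 0)
    s2 = B(a, 7, 4) & nB(3, 0)
    s1 = (B(a, 7, 6) & nB(5, 4)) | (B(a, 7, 2) & nB(1, 0))
    s0 = (B(a, 7, 7) & nB(6, 6)) | (B(a, 7, 5) & nB(4, 4)) \
         | (B(a, 7, 3) & nB(2, 2)) | (B(a, 7, 1) & nB(0, 0))
    return (s3 << 3) | (s2 << 2) | (s1 << 1) | s0

def renorm_bit(a, i, j, k, n=8):
    # r_i^{j:k} via equation (3): r_i^{j:k} = r_i^{j:t} + B_{j:t} . r_i^{t-1:k}
    if j == k:
        return bit(a, j) & bit(a, j - n + 1 + i)   # base case r_i^{j:j} = a_j . a_{j-n+1+i}
    t = (j + k + 1) // 2                           # split the interval into equal portions
    return renorm_bit(a, i, j, t, n) | (B(a, j, t) & renorm_bit(a, i, t - 1, k, n))

# Exhaustive check against a plain shift-based reference for every 8-bit input.
for a in range(256):
    s_ref = 8 - a.bit_length()                     # leading zero count (8 for a == 0)
    r_ref = (a << s_ref) & 0xFF
    assert lzc_bits_n8(a) == s_ref
    assert sum(renorm_bit(a, i, 7, 0) << i for i in range(8)) == r_ref
```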
In further examples, a combination of the recursions used in equations (3) and (5) may be used such that at some levels within the tree equation (3) is used and at other levels, equation (5) is used. By building up the hardware logic using the recursion relation of equation (3) and/or (5) the delay is approximately proportional to log (i). As described above with reference toFIG.5, not all of the logic arrangement shown inFIG.7is required for calculating all values of ri7:0since for some values of i and x (where x can have a value between 1 and 7 in this example), the value of i−x may be negative and in which case the corresponding input bit ai−xcan be replaced by 0 and so the corresponding parts of the logic tree can be omitted. In various examples separate hardware logic may be provided to calculate each of the ri; however, as with the case of the LZC arrangements of hardware logic, the hardware logic for different n may be combined or overlaid such that values may be calculated only once and then used in multiple calculations of ri, rather than a value being calculated independently many times within the hardware logic. In other examples, some values may still be calculated more than once, but values may be shared between logic arrangements. By using the fully parallel implementation, as described above, it is possible to halve the delay in calculating outputs r and s compared to known methods of normalisation (e.g. as shown inFIG.1). The above detailed description relates to the first example201shown inFIG.2which is the fully parallel implementation (h=0). The same principles and equations may be used in the hybrid implementation, as shown in the second example202, (1≤h<α); however, the hardware logic used for renormalisation (i.e. the left shifter208and renormalisation block210) can be simplified as a result of receiving one or more bits (h bits) from the LZC204. These hybrid implementations are described in detail below. The LZC204in the hybrid implementation (like in the fully parallel implementation201) may use any suitable method to compute the output, s, which is the number of leading zeros in the input number, a. In various examples, the LZC204may be implemented based on equation (1) for calculating the bits, si, of the output, s, of the LZC204, and this implementation may be as described above with reference to the fully parallel implementation and shown inFIGS.3and4. Unlike in the fully parallel implementation, in the hybrid implementations, one or more of the MSBs of the LZC output (but not all bits of the LZC output) are provided to a left shifter208which may be implemented using a multiplexer. In any hybrid implementation, h bits are output by the LZC204to the left shifter208where 1≤h<α. Where n is a power of 2 (i.e. n=2ywhere y is a natural number) the value of h used in a hybrid implementation may be selected to be greater than one in order that the amount of logic in the renormaliser is reduced compared to the fully parallel (h=0) known solution (as shown inFIG.1). The left shifter208receives the h-bits from the LZC204and left shifts the input number a by the number of places indicated by the received bits. The left shifter208may, for example, be implemented using a multiplexer. It will be appreciated that as the left shifter208only receives one or more, but not all, the output bits from the LZC204, there may still be one or more leading zeros in the output from the left shifter208. 
For example, for a 3-bit LZC, if a single MSB equal to one is received by the left shifter (h=1), then the left shifter shifts the input number by $2^2$ bits to the left. However, if the single MSB in this example that is received is equal to zero, no left shifting is performed in the left shifter 208. In either case, the output from the left shifter 208 has a maximum of 3 leading zeros as the two LSBs of the LZC are unknown. The renormaliser block 210 in a hybrid implementation calculates the normalised output, r, with some input from the LZC 204 (i.e. a subset of the bits, starting with the MSB) but without receiving the full output s from the LZC, as shown in FIG. 2. In various examples, the renormaliser block 210 may be implemented based on the following equation for calculating the bits, $r_j$, of the output, r, of the renormaliser block 210:

$$r_j=A'_{n-1,j}+\sum_{k=1}^{\min(j,\,2^{\alpha-h}-1)} B'_{n-1:n-k}\,A'_{n-k-1,j-k}\qquad(6)$$

Where:
a′ is the output of the left shifter 208
$A'_{\beta,\gamma}=a'_\beta.a'_\gamma$
$B'_{\beta:\gamma}=\bar{a}'_\beta.\bar{a}'_{\beta-1}.\ldots.\bar{a}'_{\gamma+1}.\bar{a}'_\gamma$
$a'=a\ll(s_{\alpha-1}2^{\alpha-1}+\ldots+s_{\alpha-h}2^{\alpha-h})$

Considering the same example as previously where n=8, but this time using the hybrid approach with h=2, the 8 bits calculated by the renormaliser block are as follows:

$r_0=A'_{7,0}$
$r_1=A'_{7,1}+B'_{7:7}A'_{6,0}$
$r_2=A'_{7,2}+B'_{7:7}A'_{6,1}+B'_{7:6}A'_{5,0}$
$r_3=A'_{7,3}+B'_{7:7}A'_{6,2}+B'_{7:6}A'_{5,1}+B'_{7:5}A'_{4,0}$
$r_4=A'_{7,4}+B'_{7:7}A'_{6,3}+B'_{7:6}A'_{5,2}+B'_{7:5}A'_{4,1}$
$r_5=A'_{7,5}+B'_{7:7}A'_{6,4}+B'_{7:6}A'_{5,3}+B'_{7:5}A'_{4,2}$
$r_6=A'_{7,6}+B'_{7:7}A'_{6,5}+B'_{7:6}A'_{5,4}+B'_{7:5}A'_{4,3}$
$r_7=A'_{7,7}+B'_{7:7}A'_{6,6}+B'_{7:6}A'_{5,5}+B'_{7:5}A'_{4,4}=\bar{B}'_{7:4}$

And these may be expanded and implemented in arrangements of OR, AND, NOT and AO21 logic functions in a similar manner to those described above with reference to FIGS. 3 and 4. In calculating the values of $r_j$, again pairs of input bits are combined in the first stage and then groups of 2 or 3 outputs of each stage are combined in subsequent stages (e.g. where the AO21 logic function is used to combine 3 outputs). It can be seen by comparing these equations to those above for the fully parallel version that, by using the hybrid approach with h=2, the equations are truncated such that the equation for $r_4$ is missing the last term, the equation for $r_5$ is missing the last two terms, the equation for $r_6$ is missing the last three terms and the equation for $r_7$ is missing the last four terms. These terms can be discounted because the information provided by the bits received from the LZC narrows down the possible positions of the leading one. In a similar manner to equation (2), equation (6) can also be simplified by re-writing it in the form of a recursion relation (e.g. as shown in equations (3)-(5) above); however, it is only necessary to construct $r_i^{n-1:n-2^{\alpha-h}}$ since it is known (as a result of the bits received from the LZC 204) that the leading 1 occurs in $a'[n-1:n-2^{\alpha-h}]$ or a′=0. As described above with reference to the fully parallel version, starting from $r_i^{j:j}=A_{j,j-n+1+i}$, $r_i^{n-1:n-2^{\alpha-h}}$ can be constructed in $\lfloor\log_2(i+1)\rfloor$ steps using a recursion relation (e.g. equation (3) or (5)) to form $r_i^{j:k}$ for larger and larger intervals of [j:k], and $B_{j:k}$ can be constructed logarithmically using an AND tree. Two examples of this for the previously described example, i.e. for n=8, α=4 and h=2, are shown in FIG. 8. For this example it is only necessary to construct $r_i^{7:4}$ and the first example 801 uses the recursion shown in equation (3) above whilst the second example 802 uses the recursion shown in equation (5) above.
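For illustration only, the hybrid arrangement for n=8, α=4, h=2 can be sketched in software as a two-step shift: a coarse shift selected by the two MSBs of the leading zero count (the left shifter/multiplexer), followed by a fine renormalisation that only has to consider the top $2^{\alpha-h}=4$ positions of a′, which is why the equations for $r_4$ to $r_7$ above lose their final terms. In this Python sketch the full count is computed up front purely to model the LZC output; in the hardware described above only the MSBs would be needed before the left shifter can start. The function name is illustrative.

```python
def hybrid_normalise_n8(a: int) -> tuple[int, int]:
    """Behavioural sketch of the hybrid scheme for n = 8, alpha = 4, h = 2."""
    n = 8
    s = n - a.bit_length()             # full leading zero count (models the LZC output)
    s3, s2 = (s >> 3) & 1, (s >> 2) & 1
    coarse = s3 * 2**3 + s2 * 2**2     # shift applied by the left shifter (0, 4 or 8 places)
    a_prime = (a << coarse) & 0xFF     # a' = a << (s3.2^3 + s2.2^2)

    # Fine renormalisation: if a != 0 the leading one of a' lies in a'[7:4],
    # so only shifts of 0..3 remain to be resolved (the truncated equations above).
    fine = 0
    while a != 0 and fine < 4 and not ((a_prime << fine) & 0x80):
        fine += 1
    return (a_prime << fine) & 0xFF, s

# Agrees with full normalisation for every 8-bit input.
for a in range(256):
    s_ref = 8 - a.bit_length()
    assert hybrid_normalise_n8(a) == ((a << s_ref) & 0xFF, s_ref)
```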
The reduction in hardware logic required in the renormaliser block210where a hybrid implementation is used can clearly be seen by comparing the first example801inFIG.8andFIG.5(which shows the fully parallel equivalent implementation) and by comparing the second example802inFIG.8andFIG.7(which again shows the fully parallel equivalent implementation). In further examples (not shown inFIG.8), a combination of the recursions used in equations (3) and (5) may alternatively be used such that at some levels within the tree equation (3) is used and at other levels, equation (5) is used. As described above with reference toFIGS.5and7, not all of the logic arrangement shown in either example inFIG.8is required for calculating all values of ri7:4since for some values of i, the input bit ai−x(where in this case x can have a value of 1 or 2 or 3) will be replaced by zero and so the corresponding parts of the logic tree can be omitted. Using the methods described above, the LSBs of output r are output from the renormaliser block210significantly quicker than in known normalisers. As the value of h decreases, the size of the renormaliser block210increases and the size of the left shifter208decreases (at a slower rate). The critical delay (i.e. the delay of the slowest signal from input to output of the component) does not change significantly as h is varied. In some instances where the LSBs of r are output ahead of remaining bits of r, these LSBs may be processed by further logic (e.g. input to a rounding process) ahead of the output of the rest of r. In comparison to the hybrid implementations, the fully parallel implementation described above is larger in size, but is significantly faster to calculate the final outputs (i.e. all of r and s), with the delay expected to be about ½ to ⅔ of the delay of known renormalisers. However, use of a hybrid approach provides design flexibility (i.e. to trade off size of hardware logic and speed of output of the LSBs of r). FIGS.3-8which are described above show specific arrangements of logic gates (in particular AND, OR and NOT gates and AND-OR and OR-AND logic functions). It will be appreciated that there may be alternative arrangements of logic gates which achieve the same logic functions as those shown. The term ‘processor’ and ‘computer’ are used herein to refer to any device, or portion thereof, with processing capability such that it can execute instructions. The term ‘processor’ may, for example, include central processing units (CPUs), graphics processing units (GPUs or VPUs), physics processing units (PPUs), radio processing units (RPUs), digital signal processors (DSPs), general purpose processors (e.g. a general purpose GPU), microprocessors, any processing unit which is designed to accelerate tasks outside of a CPU, etc. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes set top boxes, media players, digital radios, PCs, servers, mobile telephones, personal digital assistants and many other devices. Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. 
Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like. Memories storing machine executable data for use in implementing disclosed aspects can be non-transitory media. Non-transitory media can be volatile or non-volatile. Examples of volatile non-transitory media include semiconductor-based memory, such as SRAM or DRAM. Examples of technologies that can be used to implement non-volatile memory include optical and magnetic memory technologies, flash memory, phase change memory, resistive RAM. A particular reference to “logic” refers to structure that performs a function or functions. An example of logic includes circuitry that is arranged to perform those function(s). For example, such circuitry may include transistors and/or other hardware elements available in a manufacturing process. Such transistors and/or other elements may be used to form circuitry or structures that implement and/or contain memory, such as registers, flip flops, or latches, logical operators, such as Boolean operations, mathematical operators, such as adders, multipliers, or shifters, and interconnect, by way of example. Such elements may be provided as custom circuits or standard cell libraries, macros, or at other levels of abstraction. Such elements may be interconnected in a specific arrangement. Logic may include circuitry that is fixed function and circuitry can be programmed to perform a function or functions; such programming may be provided from a firmware or software update or control mechanism. Logic identified to perform one function may also include logic that implements a constituent function or sub-process. In an example, hardware logic has circuitry that implements a fixed function operation, or operations, state machine or process. Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person. It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. Any reference to ‘an’ item refers to one or more of those items. The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and an apparatus may contain additional blocks or elements and a method may contain additional operations or elements. Furthermore, the blocks, elements and operations are themselves not impliedly closed. The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. The arrows between boxes in the figures show one example sequence of method steps but are not intended to exclude other sequences or the performance of multiple steps in parallel. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. 
Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Where elements of the figures are shown connected by arrows, it will be appreciated that these arrows show just one example flow of communications (including data and control messages) between elements. The flow between elements may be in either direction or in both directions. It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.
28,635
11861324
DETAILED DESCRIPTION For purposes of the description hereinafter, the terms “end,” “upper,” “lower,” “right,” “left,” “vertical,” “horizontal,” “top,” “bottom,” “lateral,” “longitudinal,” and derivatives thereof shall relate to the disclosed subject matter as it is oriented in the drawing figures. However, it is to be understood that the disclosed subject matter may assume various alternative variations and step sequences, except where expressly specified to the contrary. It is also to be understood that the specific devices and processes illustrated in the attached drawings, and described in the following specification, are simply exemplary embodiments or aspects of the disclosed subject matter. Hence, specific dimensions and other physical characteristics related to the embodiments or aspects disclosed herein are not to be considered as limiting unless otherwise indicated. No aspect, component, element, structure, act, step, function, instruction, and/or the like used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more” and “at least one.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like) and may be used interchangeably with “one or more” or “at least one.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based at least partially on” unless explicitly stated otherwise. As used herein, the terms “communication” and “communicate” may refer to the reception, receipt, transmission, transfer, provision, and/or the like of information (e.g., data, signals, messages, instructions, commands, and/or the like). For one unit (e.g., a device, a system, a component of a device or system, combinations thereof, and/or the like) to be in communication with another unit means that the one unit is able to directly or indirectly receive information from and/or transmit information to the other unit. This may refer to a direct or indirect connection (e.g., a direct communication connection, an indirect communication connection, and/or the like) that is wired and/or wireless in nature. Additionally, two units may be in communication with each other even though the information transmitted may be modified, processed, relayed, and/or routed between the first and second unit. For example, a first unit may be in communication with a second unit even though the first unit passively receives information and does not actively transmit information to the second unit. As another example, a first unit may be in communication with a second unit if at least one intermediary unit (e.g., a third unit located between the first unit and the second unit) processes information received from the first unit and communicates the processed information to the second unit. In some non-limiting embodiments or aspects, a message may refer to a network packet (e.g., a data packet and/or the like) that includes data. It will be appreciated that numerous other arrangements are possible. 
As used herein, the terms “issuer institution,” “portable financial device issuer,” “issuer,” or “issuer bank” may refer to one or more entities that provide accounts to customers for conducting transactions (e.g., payment transactions), such as initiating credit and/or debit payments. For example, an issuer institution may provide an account identifier, such as a primary account number (PAN), to a customer that uniquely identifies one or more accounts associated with that customer. The account identifier may be embodied on a portable financial device, such as a physical financial instrument, e.g., a payment card, and/or may be electronic and used for electronic payments. The terms “issuer institution” and “issuer institution system” may also refer to one or more computer systems operated by or on behalf of an issuer institution, such as a server computer executing one or more software applications. For example, an issuer institution system may include one or more authorization servers for authorizing a transaction. As used herein, the term “account identifier” may include one or more types of identifiers associated with a user account (e.g., a PAN, a card number, a payment card number, a payment token, and/or the like). In some non-limiting embodiments or aspects, an issuer institution may provide an account identifier (e.g., a PAN, a payment token, and/or the like) to a user that uniquely identifies one or more accounts associated with that user. The account identifier may be embodied on a physical financial instrument (e.g., a portable financial instrument, a payment card, a credit card, a debit card, and/or the like) and/or may be electronic information communicated to the user that the user may use for electronic payments. In some non-limiting embodiments or aspects, the account identifier may be an original account identifier, where the original account identifier was provided to a user at the creation of the account associated with the account identifier. In some non-limiting embodiments or aspects, the account identifier may be an account identifier (e.g., a supplemental account identifier) that is provided to a user after the original account identifier was provided to the user. For example, if the original account identifier is forgotten, stolen, and/or the like, a supplemental account identifier may be provided to the user. In some non-limiting embodiments or aspects, an account identifier may be directly or indirectly associated with an issuer institution such that an account identifier may be a payment token that maps to a PAN or other type of identifier. Account identifiers may be alphanumeric, any combination of characters and/or symbols, and/or the like. An issuer institution may be associated with a bank identification number (BIN) that uniquely identifies the issuer institution. As used herein, the terms “payment token” or “token” may refer to an identifier that is used as a substitute or replacement identifier for an account identifier, such as a PAN. Tokens may be associated with a PAN or other account identifiers in one or more data structures (e.g., one or more databases and/or the like) such that they can be used to conduct a transaction (e.g., a payment transaction) without directly using the account identifier, such as a PAN. In some examples, an account identifier, such as a PAN, may be associated with a plurality of tokens for different individuals, different uses, and/or different purposes. 
For example, a payment token may include a series of numeric and/or alphanumeric characters that may be used as a substitute for an original account identifier. For example, a payment token “4900 0000 0000 0001” may be used in place of a PAN “4147 0900 0000 1234.” In some non-limiting embodiments or aspects, a payment token may be “format preserving” and may have a numeric format that conforms to the account identifiers used in existing payment processing networks (e.g., ISO 8583 financial transaction message format). In some non-limiting embodiments or aspects, a payment token may be used in place of a PAN to initiate, authorize, settle, or resolve a payment transaction or represent the original credential in other systems where the original credential would typically be provided. In some non-limiting embodiments or aspects, a token value may be generated such that the recovery of the original PAN or other account identifier from the token value may not be computationally derived (e.g., with a one-way hash or other cryptographic function). Further, in some non-limiting embodiments or aspects, the token format may be configured to allow the entity receiving the payment token to identify it as a payment token and recognize the entity that issued the token. As used herein, the term “provisioning” may refer to a process of enabling a device to use a resource or service. For example, provisioning may involve enabling a device to perform transactions using an account. Additionally or alternatively, provisioning may include adding provisioning data associated with account data (e.g., a payment token representing an account number) to a device. As used herein, the term “token requestor” may refer to an entity that is seeking to implement tokenization according to embodiments or aspects of the presently disclosed subject matter. For example, the token requestor may initiate a request that a PAN be tokenized by submitting a token request message to a token service provider. Additionally or alternatively, a token requestor may no longer need to store a PAN associated with a token once the requestor has received the payment token in response to a token request message. In some non-limiting embodiments or aspects, the requestor may be an application, a device, a process, or a system that is configured to perform actions associated with tokens. For example, a requestor may request registration with a network token system, request token generation, token activation, token de-activation, token exchange, other token lifecycle management related processes, and/or any other token related processes. In some non-limiting embodiments or aspects, a requestor may interface with a network token system through any suitable communication network and/or protocol (e.g., using HTTPS, SOAP, and/or an XML interface among others). For example, a token requestor may include card-on-file merchants, acquirers, acquirer processors, payment gateways acting on behalf of merchants, payment enablers (e.g., original equipment manufacturers, mobile network operators, and/or the like), digital wallet providers, issuers, third-party wallet providers, payment processing networks, and/or the like. In some non-limiting embodiments or aspects, a token requestor may request tokens for multiple domains and/or channels. Additionally or alternatively, a token requestor may be registered and identified uniquely by the token service provider within the tokenization ecosystem. 
For example, during token requestor registration, the token service provider may formally process a token requestor's application to participate in the token service system. In some non-limiting embodiments or aspects, the token service provider may collect information pertaining to the nature of the requestor and relevant use of tokens to validate and formally approve the token requestor and establish appropriate domain restriction controls. Additionally or alternatively, successfully registered token requestors may be assigned a token requestor identifier that may also be entered and maintained within the token vault. In some non-limiting embodiments or aspects, token requestor identifiers may be revoked and/or token requestors may be assigned new token requestor identifiers. In some non-limiting embodiments or aspects, this information may be subject to reporting and audit by the token service provider. As used herein, the term a “token service provider” may refer to an entity including one or more server computers in a token service system that generates, processes, and maintains payment tokens. For example, the token service provider may include or be in communication with a token vault where the generated tokens are stored. Additionally or alternatively, the token vault may maintain one-to-one mapping between a token and a PAN represented by the token. In some non-limiting embodiments or aspects, the token service provider may have the ability to set aside licensed BINs as token BINs to issue tokens for the PANs that may be submitted to the token service provider. In some non-limiting embodiments or aspects, various entities of a tokenization ecosystem may assume the roles of the token service provider. For example, payment networks and issuers or their agents may become the token service provider by implementing the token services according to non-limiting embodiments or aspects of the presently disclosed subject matter. Additionally or alternatively, a token service provider may provide reports or data output to reporting tools regarding approved, pending, or declined token requests, including any assigned token requestor ID. The token service provider may provide data output related to token-based transactions to reporting tools and applications and present the token and/or PAN as appropriate in the reporting output. In some non-limiting embodiments or aspects, the EMVCo standards organization may publish specifications defining how tokenized systems may operate. For example, such specifications may be informative, but they are not intended to be limiting upon any of the presently disclosed subject matter. As used herein, the term “token vault” may refer to a repository that maintains established token-to-PAN mappings. For example, the token vault may also maintain other attributes of the token requestor that may be determined at the time of registration and/or that may be used by the token service provider to apply domain restrictions or other controls during transaction processing. In some non-limiting embodiments or aspects, the token vault may be a part of a token service system. For example, the token vault may be provided as a part of the token service provider. Additionally or alternatively, the token vault may be a remote repository accessible by the token service provider. 
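By way of illustration only, the relationship between a PAN, a format-preserving payment token and a token vault described above can be sketched as follows. The vault is shown as a simple in-memory one-to-one mapping, and the token generation rule (random digits of the same length, not derivable from the PAN) is an assumption made for this sketch, not a description of any particular token service provider; the PAN value reused here is the hypothetical one from the example above.

```python
import secrets

class TokenVault:
    """Illustrative token vault: a one-to-one mapping between payment tokens and PANs."""

    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenize(self, pan: str) -> str:
        # Return the existing token for this PAN, or issue a new format-preserving one.
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        while True:
            # Random digits of the same length as the PAN; the original PAN cannot be
            # computationally derived from the token (it is only recovered by vault lookup).
            token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
            if token not in self._token_to_pan and token != pan:
                break
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4147090000001234")   # hypothetical PAN from the example above
assert vault.detokenize(token) == "4147090000001234"
assert len(token) == 16 and token.isdigit()  # format preserving: same length, digits only
```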
In some non-limiting embodiments or aspects, token vaults, due to the sensitive nature of the data mappings that are stored and managed therein, may be protected by strong underlying physical and logical security. Additionally or alternatively, a token vault may be operated by any suitable entity, including a payment network, an issuer, clearing houses, other financial institutions, transaction service providers, and/or the like. As used herein, the term “merchant” may refer to one or more entities (e.g., operators of retail businesses that provide goods and/or services, and/or access to goods and/or services, to a user (e.g., a customer, a consumer, a customer of the merchant, and/or the like) based on a transaction (e.g., a payment transaction)). As used herein, the term “merchant system” may refer to one or more computer systems operated by or on behalf of a merchant, such as a server computer executing one or more software applications. As used herein, the term “product” may refer to one or more goods and/or services offered by a merchant. As used herein, the term “point-of-sale (POS) device” may refer to one or more devices, which may be used by a merchant to initiate transactions (e.g., a payment transaction), engage in transactions, and/or process transactions. For example, a POS device may include one or more computers, peripheral devices, card readers, near-field communication (NFC) receivers, radio frequency identification (RFID) receivers, and/or other contactless transceivers or receivers, contact-based receivers, payment terminals, computers, servers, input devices, and/or the like. As used herein, the term “point-of-sale (POS) system” may refer to one or more computers and/or peripheral devices used by a merchant to conduct a transaction. For example, a POS system may include one or more POS devices and/or other like devices that may be used to conduct a payment transaction. A POS system (e.g., a merchant POS system) may also include one or more server computers programmed or configured to process online payment transactions through webpages, mobile applications, and/or the like. As used herein, the term “transaction service provider” may refer to an entity that receives transaction authorization requests from merchants or other entities and provides guarantees of payment, in some cases through an agreement between the transaction service provider and the issuer institution. In some non-limiting embodiments or aspects, a transaction service provider may include a credit card company, a debit card company, and/or the like. As used herein, the term “transaction service provider system” may also refer to one or more computer systems operated by or on behalf of a transaction service provider, such as a transaction processing server executing one or more software applications. A transaction processing server may include one or more processors and, in some non-limiting embodiments or aspects, may be operated by or on behalf of a transaction service provider. As used herein, the term “acquirer” may refer to an entity licensed by the transaction service provider and approved by the transaction service provider to originate transactions (e.g., payment transactions) using a portable financial device associated with the transaction service provider. As used herein, the term “acquirer system” may also refer to one or more computer systems, computer devices, and/or the like operated by or on behalf of an acquirer. 
The transactions may include payment transactions (e.g., purchases, original credit transactions (OCTs), account funding transactions (AFTs), and/or the like). In some non-limiting embodiments or aspects, the acquirer may be authorized by the transaction service provider to assign merchant or service providers to originate transactions using a portable financial device of the transaction service provider. The acquirer may contract with payment facilitators to enable the payment facilitators to sponsor merchants. The acquirer may monitor compliance of the payment facilitators in accordance with regulations of the transaction service provider. The acquirer may conduct due diligence of the payment facilitators and ensure that proper due diligence occurs before signing a sponsored merchant. The acquirer may be liable for all transaction service provider programs that the acquirer operates or sponsors. The acquirer may be responsible for the acts of the acquirer's payment facilitators, merchants that are sponsored by an acquirer's payment facilitators, and/or the like. In some non-limiting embodiments or aspects, an acquirer may be a financial institution, such as a bank. As used herein, the terms “electronic wallet,” “electronic wallet mobile application,” and “digital wallet” may refer to one or more electronic devices and/or one or more software applications configured to initiate and/or conduct transactions (e.g., payment transactions, electronic payment transactions, and/or the like). For example, an electronic wallet may include a user device (e.g., a mobile device) executing an application program and server-side software and/or databases for maintaining and providing transaction data to the user device. As used herein, the term “electronic wallet provider” may include an entity that provides and/or maintains an electronic wallet and/or an electronic wallet mobile application for a user (e.g., a customer). Examples of an electronic wallet provider include, but are not limited to, Google Pay®, Android Pay®, Apple Pay®, and Samsung Pay®. In some non-limiting examples, a financial institution (e.g., an issuer institution) may be an electronic wallet provider. As used herein, the term “electronic wallet provider system” may refer to one or more computer systems, computer devices, servers, groups of servers, and/or the like operated by or on behalf of an electronic wallet provider. As used herein, the term “portable financial device” may refer to a payment device, an electronic payment device, a payment card (e.g., a credit or debit card), a gift card, a smartcard, smart media, a payroll card, a healthcare card, a wrist band, a machine-readable medium containing account information, a keychain device or fob, an RFID transponder, a retailer discount or loyalty card, a cellular phone, an electronic wallet mobile application, a personal digital assistant (PDA), a pager, a security card, a computer, an access card, a wireless terminal, a transponder, and/or the like. In some non-limiting embodiments or aspects, the portable financial device may include volatile or non-volatile memory to store information (e.g., an account identifier, a name of the account holder, and/or the like). 
As used herein, the term “payment gateway” may refer to an entity and/or a payment processing system operated by or on behalf of such an entity (e.g., a merchant service provider, a payment service provider, a payment facilitator, a payment facilitator that contracts with an acquirer, a payment aggregator, and/or the like), which provides payment services (e.g., transaction service provider payment services, payment processing services, and/or the like) to one or more merchants. The payment services may be associated with the use of portable financial devices managed by a transaction service provider. As used herein, the term “payment gateway system” may refer to one or more computer systems, computer devices, servers, groups of servers, and/or the like operated by or on behalf of a payment gateway and/or to a payment gateway itself. As used herein, the term “payment gateway mobile application” may refer to one or more electronic devices and/or one or more software applications configured to provide payment services for transactions (e.g., payment transactions, electronic payment transactions, and/or the like). As used herein, the terms “client” and “client device” may refer to one or more client-side devices or systems (e.g., remote from a transaction service provider) used to initiate or facilitate a transaction (e.g., a payment transaction). As an example, a “client device” may refer to one or more POS devices used by a merchant, one or more acquirer host computers used by an acquirer, one or more mobile devices used by a user, and/or the like. In some non-limiting embodiments or aspects, a client device may be an electronic device configured to communicate with one or more networks and initiate or facilitate transactions. For example, a client device may include one or more computers, portable computers, laptop computers, tablet computers, mobile devices, cellular phones, wearable devices (e.g., watches, glasses, lenses, clothing, and/or the like), PDAs, and/or the like. Moreover, a “client” may also refer to an entity (e.g., a merchant, an acquirer, and/or the like) that owns, utilizes, and/or operates a client device for initiating transactions (e.g., for initiating transactions with a transaction service provider). As used herein, the term “computing device” may refer to one or more electronic devices that are configured to directly or indirectly communicate with or over one or more networks. A computing device may be a mobile device, a desktop computer, and/or any other like device. Furthermore, the term “computer” may refer to any computing device that includes the necessary components to receive, process, and output data, and normally includes a display, a processor, a memory, an input device, and a network interface. As used herein, the term “server” may refer to or include one or more processors or computers, storage devices, or similar computer arrangements that are operated by or facilitate communication and/or processing in a network environment, such as the Internet, although it will be appreciated that communication may be facilitated over one or more public or private network environments and that various other arrangements are possible. Further, multiple computers, e.g., servers, or other computerized devices, such as POS devices, directly or indirectly communicating in the network environment may constitute a “system,” such as a POS system of a merchant. 
The term “processor,” as used herein, may represent any type of processing unit, such as a single processor having one or more cores, one or more cores of one or more processors, multiple processors each having one or more cores, and/or other arrangements and combinations of processing units. As used herein, the term “system” may refer to one or more computing devices or combinations of computing devices (e.g., processors, servers, client devices, software applications, components of such, and/or the like). Reference to “a device,” “a server,” “a processor,” and/or the like, as used herein, may refer to a previously-recited device, server, or processor that is recited as performing a previous step or function, a different server or processor, and/or a combination of servers and/or processors. For example, as used in the specification and the claims, a first server or a first processor that is recited as performing a first step or a first function may refer to the same or different server or the same or different processor recited as performing a second step or a second function. Non-limiting embodiments or aspects of the disclosed subject matter are directed to systems, methods, and computer program products for normalizing embeddings, including, but not limited to, normalizing embeddings for cross-embedding alignment. For example, non-limiting embodiments or aspects of the disclosed subject matter provide a new preprocessing technique: spectral normalization. Spectral normalization may include decomposing an embedding set to provide a left singular vector, a right singular vector, and a diagonal matrix, determining an average singular value of the at least one embedding set, determining a respective substitute singular value for each respective singular value of the diagonal matrix based on configurable (e.g., tunable) hyperparameters, and replacing the embedding set based on the substitute diagonal matrix. Additionally, non-limiting embodiments or aspects of the disclosed subject matter enable mean centering, spectral normalization, and length normalization to be iteratively applied based on configurable (e.g., tunable) hyperparameters. Such embodiments provide techniques and systems that provide improved performance (e.g., increased F1-score) for cross-embedding alignment and downstream tasks (e.g., bilingual lexicon induction (BLI), cross-lingual document classification (CLDC), and/or the like). Additionally or alternatively, such embodiments provide techniques and systems that provide preprocessing for embedding sets that improves spectral properties, including decreased condition number, increased numeric rank, and decreased joint condition number. Additionally or alternatively, such embodiments provide techniques and systems that allow for gently adjusting the spectral properties of an embedding set (e.g., without bluntly removing singular values and/or forcing metrics such as condition number to infinity). Additionally or alternatively, such embodiments provide techniques and systems that enable preprocessing of embedding sets that is agnostic to the method of alignment used afterwards, and therefore can be applied in combination with any alignment method. 
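By way of illustration only, the preprocessing described above can be sketched in a few lines of Python/numpy: the embedding set is decomposed with an SVD into left singular vectors, right singular vectors and a diagonal of singular values; each singular value is replaced by a substitute value formed from the original value and the average singular value under a tunable hyperparameter; and the embedding set is rebuilt from the substitute diagonal, with mean centering, spectral normalization and length normalization applied iteratively. The interpolation weight beta, the iteration count and the function names are assumptions made for this sketch rather than a statement of the claimed method, and any alignment method may be applied to the output.

```python
import numpy as np

def spectral_normalize(X: np.ndarray, beta: float = 0.5) -> np.ndarray:
    """Illustrative spectral step: SVD the embedding set, pull each singular value
    toward the average singular value, and reconstruct.

    beta is a tunable hyperparameter (an assumption for this sketch):
    beta = 0 leaves X unchanged; beta = 1 sets every singular value to the mean.
    """
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)   # left/right singular vectors + diagonal
    avg = sigma.mean()                                      # average singular value of the set
    sigma_sub = (1.0 - beta) * sigma + beta * avg           # substitute singular values
    return (U * sigma_sub) @ Vt                             # replace X using the substitute diagonal

def mean_center(X: np.ndarray) -> np.ndarray:
    return X - X.mean(axis=0, keepdims=True)

def length_normalize(X: np.ndarray) -> np.ndarray:
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def preprocess(X: np.ndarray, iterations: int = 2, beta: float = 0.5) -> np.ndarray:
    """Iteratively apply mean centering, spectral normalization and length normalization;
    the iteration count and beta act as the configurable hyperparameters mentioned above."""
    for _ in range(iterations):
        X = mean_center(X)
        X = spectral_normalize(X, beta=beta)
        X = length_normalize(X)
    return X

# Example: preprocess a toy embedding set before any downstream alignment step.
X = np.random.default_rng(0).normal(size=(1000, 64))
X_prep = preprocess(X, iterations=2, beta=0.5)
```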
Additionally or alternatively, such embodiments provide techniques and systems that can be applied to embedding sets in a variety of contexts, including cross-lingual alignment, mapping between embeddings representing the same entity in two different time periods (e.g., because the embedding space would be different because of different data between the two time periods), merchant classification, fraud detection, restaurant recommendation, product recommendation, and/or the like. For the purpose of illustration, in the following description, while the presently disclosed subject matter is described with respect to methods, systems, and computer program products for normalizing word embeddings, e.g., for cross-lingual alignment, one skilled in the art will recognize that the disclosed subject matter is not limited to the illustrative embodiments or aspects. For example, the methods, systems, and computer program products described herein may be used with a wide variety of settings, such as normalizing embeddings in any setting suitable for using such embeddings, e.g., mapping between embeddings representing the same entity in two different time periods (e.g., because the embedding space would be different because of different data between the two time periods), merchant classification, fraud detection, restaurant recommendation, product recommendation, and/or the like. Referring now toFIG.1A,FIG.1Ais a diagram of an exemplary system100afor normalizing embeddings for cross-embedding alignment, according to some non-limiting embodiments or aspects. As shown inFIG.1A, system100aincludes embedding normalization/alignment system102a, embedding database102b, and/or requesting system106a. Embedding normalization/alignment system102amay include one or more devices capable of receiving information from and/or communicating information to embedding database102band/or requesting system106a. For example, embedding normalization/alignment system102amay include a computing device, such as a server, a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, embedding normalization/alignment system102amay be in communication with a data storage device (e.g., embedding database102b, another data storage device separate from embedding database102b, any combination thereof, and/or the like), which may be local or remote to embedding normalization/alignment system102a. In some non-limiting embodiments or aspects, embedding normalization/alignment system102amay be capable of receiving information from, storing information in, communicating information to, or searching information stored in the data storage device. Embedding database102bmay include one or more devices capable of receiving information from and/or communicating information to embedding normalization/alignment system102aand/or requesting system106a. For example, embedding database102bmay include a computing device, such as a server, a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, embedding database102bmay include a data storage device. In some non-limiting embodiments or aspects, embedding database102bmay be part of the same system as embedding normalization/alignment system102a(e.g., embedding database102bmay be part of embedding normalization/alignment system102a, part of another system that also includes embedding normalization/alignment system102a, and/or the like). 
In some non-limiting embodiments or aspects, embedding database102bmay be separate from embedding normalization/alignment system102a. Requesting system106amay include one or more devices capable of receiving information from and/or communicating information to embedding normalization/alignment system102aand/or embedding database102b. For example, requesting system106amay include a computing device, such as a computer, a portable computer, a mobile device, a client device, a server, a group of servers, and/or the like. In some non-limiting embodiments or aspects, requesting system106amay be part of the same system as embedding normalization/alignment system102a(e.g., requesting system106amay be part of embedding normalization/alignment system102a, part of another system that also includes embedding normalization/alignment system102a, and/or the like). In some non-limiting embodiments or aspects, requesting system106amay be separate from embedding normalization/alignment system102a. In some non-limiting embodiments or aspects, requesting system106amay be part of the same system as embedding database102b(e.g., requesting system106amay be part of embedding normalization/alignment system102athat also includes embedding database102b, part of another system that includes requesting system106aand embedding database102b, and/or the like). In some non-limiting embodiments or aspects, requesting system106amay be separate from embedding normalization/alignment system102a. The number and arrangement of systems and/or devices shown inFIG.1Aare provided as an example. There may be additional systems and/or devices; fewer systems and/or devices; different systems and/or devices; and/or differently arranged systems and/or devices than those shown inFIG.1A. Furthermore, two or more systems or devices shown inFIG.1Amay be implemented within a single system or device, or a single system or device shown inFIG.1Amay be implemented as multiple, distributed systems or devices. Additionally or alternatively, a set of systems (e.g., one or more systems) or a set of devices (e.g., one or more devices) of system100amay perform one or more functions described as being performed by another set of systems or another set of devices of system100a. Referring now toFIG.1B,FIG.1Bis an exemplary environment100bin which methods, systems, and/or computer program products, as described herein, may be implemented, according to some non-limiting embodiments or aspects. As shown in FIG.1B, environment100bincludes transaction service provider system102, embedding normalization/alignment system102a, issuer system104, customer device106, merchant system108, acquirer system110, and/or communication network112. In some non-limiting embodiments or aspects, embedding normalization/alignment system102amay be the same as or similar to the description above in reference toFIG.1A. Additionally or alternatively, embedding normalization/alignment system102amay be capable of receiving information from and/or communicating information to transaction service provider system102, issuer system104, customer device106, merchant system108, and/or acquirer system110(e.g., via communication network112). 
In some non-limiting embodiments or aspects, embedding normalization/alignment system102amay be part of the same system as transaction service provider system102(e.g., embedding normalization/alignment system102amay be part of transaction service provider system102, part of another system that also includes transaction service provider system102, and/or the like). In some non-limiting embodiments or aspects, embedding database102b, as described above in reference toFIG.1A, may be part of the same system as embedding normalization/alignment system102a(e.g., embedding database102bmay be part of embedding normalization/alignment system102a, part of another system (such as transaction service provider system102) that also includes embedding normalization/alignment system102a, and/or the like). In some non-limiting embodiments or aspects, requesting system106a, as described above in reference toFIG.1A, may be part of the same system as embedding normalization/alignment system102a(e.g., embedding database102bmay be part of embedding normalization/alignment system102a, part of another system (such as transaction service provider system102) that also includes embedding normalization/alignment system102a, and/or the like). In some non-limiting embodiments or aspects, requesting system106a, as described above in reference toFIG.1A, may be the same as, similar to, and/or part of another system, another device, another group of systems, or another group of devices, separate from or including embedding normalization/alignment system102a, such as issuer system104(e.g., one or more devices of issuer system104), customer device106, merchant system108(e.g., one or more devices of merchant system108), acquirer system110(e.g., one or more devices of acquirer system110), and/or the like. Transaction service provider system102may include one or more devices capable of receiving information from and/or communicating information to embedding normalization/alignment system102a, issuer system104, customer device106, merchant system108, and/or acquirer system110via communication network112. For example, transaction service provider system102may include a computing device, such as a server (e.g., a transaction processing server), a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, transaction service provider system102may be associated with a transaction service provider as described herein. In some non-limiting embodiments or aspects, transaction service provider system102may be in communication with a data storage device, which may be local or remote to transaction service provider system102. In some non-limiting embodiments or aspects, transaction service provider system102may be capable of receiving information from, storing information in, communicating information to, or searching information stored in the data storage device. Issuer system104may include one or more devices capable of receiving information and/or communicating information to transaction service provider system102, embedding normalization/alignment system102a, customer device106, merchant system108, and/or acquirer system110via communication network112. For example, issuer system104may include a computing device, such as a server, a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, issuer system104may be associated with an issuer institution as described herein. 
For example, issuer system104may be associated with an issuer institution that issued a credit account, debit account, credit card, debit card, and/or the like to a user associated with customer device106. Customer device106may include one or more devices capable of receiving information from and/or communicating information to transaction service provider system102, embedding normalization/alignment system102a, issuer system104, merchant system108, and/or acquirer system110via communication network112. Additionally or alternatively, each customer device106may include a device capable of receiving information from and/or communicating information to other customer devices106via communication network112, another network (e.g., an ad hoc network, a local network, a private network, a virtual private network, and/or the like), and/or any other suitable communication technique. For example, customer device106may include a client device and/or the like. In some non-limiting embodiments or aspects, customer device106may or may not be capable of receiving information (e.g., from merchant system108or from another customer device106) via a short-range wireless communication connection (e.g., an NFC communication connection, an RFID communication connection, a Bluetooth® communication connection, a Zigbee® communication connection, and/or the like), and/or communicating information (e.g., to merchant system108) via a short-range wireless communication connection. Merchant system108may include one or more devices capable of receiving information from and/or communicating information to transaction service provider system102, embedding normalization/alignment system102a, issuer system104, customer device106, and/or acquirer system110via communication network112. Merchant system108may also include a device capable of receiving information from customer device106via communication network112, a communication connection (e.g., an NFC communication connection, an RFID communication connection, a Bluetooth® communication connection, a Zigbee® communication connection, and/or the like) with customer device106, and/or the like, and/or communicating information to customer device106via communication network112, the communication connection, and/or the like. In some non-limiting embodiments or aspects, merchant system108may include a computing device, such as a server, a group of servers, a client device, a group of client devices, and/or other like devices. In some non-limiting embodiments or aspects, merchant system108may be associated with a merchant as described herein. In some non-limiting embodiments or aspects, merchant system108may include one or more client devices. For example, merchant system108may include a client device that allows a merchant to communicate information to transaction service provider system102. In some non-limiting embodiments or aspects, merchant system108may include one or more devices, such as computers, computer systems, and/or peripheral devices capable of being used by a merchant to conduct a transaction with a user. For example, merchant system108may include a POS device and/or a POS system. Acquirer system110may include one or more devices capable of receiving information from and/or communicating information to transaction service provider system102, embedding normalization/alignment system102a, issuer system104, customer device106, and/or merchant system108via communication network112. 
For example, acquirer system110may include a computing device, a server, a group of servers, and/or the like. In some non-limiting embodiments or aspects, acquirer system110may be associated with an acquirer as described herein. Communication network112may include one or more wired and/or wireless networks. For example, communication network112may include a cellular network (e.g., a long-term evolution (LTE®) network, a third generation (3G) network, a fourth generation (4G) network, a fifth generation (5G) network, a code division multiple access (CDMA) network, and/or the like), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the public switched telephone network (PSTN)), a private network (e.g., a private network associated with a transaction service provider), an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks. In some non-limiting embodiments or aspects, processing a transaction may include generating and/or communicating at least one transaction message (e.g., authorization request, authorization response, any combination thereof, and/or the like). For example, a client device (e.g., customer device106, a POS device of merchant system108, and/or the like) may initiate the transaction, e.g., by generating an authorization request. Additionally or alternatively, the client device (e.g., customer device106, at least one device of merchant system108, and/or the like) may communicate the authorization request. For example, customer device106may communicate the authorization request to merchant system108and/or a payment gateway (e.g., a payment gateway of transaction service provider system102, a third-party payment gateway separate from transaction service provider system102, and/or the like). Additionally or alternatively, merchant system108(e.g., a POS device thereof) may communicate the authorization request to acquirer system110and/or a payment gateway. In some non-limiting embodiments or aspects, acquirer system110and/or a payment gateway may communicate the authorization request to transaction service provider system102and/or issuer system104. Additionally or alternatively, transaction service provider system102may communicate the authorization request to issuer system104. In some non-limiting embodiments or aspects, issuer system104may determine an authorization decision (e.g., authorize, decline, and/or the like) based on the authorization request. For example, the authorization request may cause issuer system104to determine the authorization decision based thereon. In some non-limiting embodiments or aspects, issuer system104may generate an authorization response based on the authorization decision. Additionally or alternatively, issuer system104may communicate the authorization response. For example, issuer system104may communicate the authorization response to transaction service provider system102and/or a payment gateway. Additionally or alternatively, transaction service provider system102and/or a payment gateway may communicate the authorization response to acquirer system110, merchant system108, and/or customer device106. Additionally or alternatively, acquirer system110may communicate the authorization response to merchant system108and/or a payment gateway. 
Additionally or alternatively, a payment gateway may communicate the authorization response to merchant system108and/or customer device106. Additionally or alternatively, merchant system108may communicate the authorization response to customer device106. In some non-limiting embodiments or aspects, merchant system108may receive (e.g., from acquirer system110and/or a payment gateway) the authorization response. Additionally or alternatively, merchant system108may complete the transaction based on the authorization response (e.g., provide, ship, and/or deliver goods and/or services associated with the transaction; fulfill an order associated with the transaction; any combination thereof; and/or the like). For the purpose of illustration, processing a transaction may include generating a transaction message (e.g., authorization request and/or the like) based on an account identifier of a customer (e.g., associated with customer device106and/or the like) and/or transaction data associated with the transaction. For example, merchant system108(e.g., a client device of merchant system108, a POS device of merchant system108, and/or the like) may initiate the transaction, e.g., by generating an authorization request (e.g., in response to receiving the account identifier from a portable financial device of the customer and/or the like). Additionally or alternatively, merchant system108may communicate the authorization request to acquirer system110. Additionally or alternatively, acquirer system110may communicate the authorization request to transaction service provider system102. Additionally or alternatively, transaction service provider system102may communicate the authorization request to issuer system104. Issuer system104may determine an authorization decision (e.g., authorize, decline, and/or the like) based on the authorization request, and/or issuer system104may generate an authorization response based on the authorization decision and/or the authorization request. Additionally or alternatively, issuer system104may communicate the authorization response to transaction service provider system102. Additionally or alternatively, transaction service provider system102may communicate the authorization response to acquirer system110, which may communicate the authorization response to merchant system108. For the purpose of illustration, clearing and/or settlement of a transaction may include generating a message (e.g., clearing message, settlement message, and/or the like) based on an account identifier of a customer (e.g., associated with customer device106and/or the like) and/or transaction data associated with the transaction. For example, merchant system108may generate at least one clearing message (e.g., a plurality of clearing messages, a batch of clearing messages, and/or the like). Additionally or alternatively, merchant system108may communicate the clearing message(s) to acquirer system110. Additionally or alternatively, acquirer system110may communicate the clearing message(s) to transaction service provider system102. Additionally or alternatively, transaction service provider system102may communicate the clearing message(s) to issuer system104. Additionally or alternatively, issuer system104may generate at least one settlement message based on the clearing message(s). 
Additionally or alternatively, issuer system104may communicate the settlement message(s) and/or funds to transaction service provider system102(and/or a settlement bank system associated with transaction service provider system102). Additionally or alternatively, transaction service provider system102(and/or the settlement bank system) may communicate the settlement message(s) and/or funds to acquirer system110, which may communicate the settlement message(s) and/or funds to merchant system108(and/or an account associated with merchant system108). The number and arrangement of systems, devices, and/or networks shown inFIG.1Bare provided as an example. There may be additional systems, devices, and/or networks; fewer systems, devices, and/or networks; different systems, devices, and/or networks; and/or differently arranged systems, devices, and/or networks than those shown inFIG.1B. Furthermore, two or more systems or devices shown inFIG.1Bmay be implemented within a single system or device, or a single system or device shown inFIG.1Bmay be implemented as multiple, distributed systems or devices. Additionally or alternatively, a set of systems (e.g., one or more systems) or a set of devices (e.g., one or more devices) of environment100bmay perform one or more functions described as being performed by another set of systems or another set of devices of environment100b. Referring now toFIG.2,FIG.2is a diagram of example components of a device200. Device200may correspond to one or more devices of transaction service provider system102, embedding normalization/alignment system102a, embedding database102b, one or more devices of issuer system104, customer device106, requesting system106a, one or more devices of merchant system108, and/or one or more devices of acquirer system110. In some non-limiting embodiments or aspects, transaction service provider system102, embedding normalization/alignment system102a, embedding database102b, issuer system104, customer device106, requesting system106a, merchant system108, and/or acquirer system110may include at least one device200and/or at least one component of device200. As shown inFIG.2, device200may include bus202, processor204, memory206, storage component208, input component210, output component212, and communication interface214. Bus202may include a component that permits communication among the components of device200. In some non-limiting embodiments or aspects, processor204may be implemented in hardware, software, firmware, and/or any combination thereof. For example, processor204may include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), and/or the like), a microprocessor, a digital signal processor (DSP), and/or any processing component (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), and/or the like), and/or the like, which can be programmed to perform a function. Memory206may include random access memory (RAM), read-only memory (ROM), and/or another type of dynamic or static storage device (e.g., flash memory, magnetic memory, optical memory, and/or the like) that stores information and/or instructions for use by processor204. Storage component208may store information and/or software related to the operation and use of device200. 
For example, storage component208may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, a solid state disk, and/or the like), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of computer-readable medium, along with a corresponding drive. Input component210may include a component that permits device200to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, a microphone, a camera, and/or the like). Additionally or alternatively, input component210may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, an actuator, and/or the like). Output component212may include a component that provides output information from device200(e.g., a display, a speaker, one or more light-emitting diodes (LEDs), and/or the like). Communication interface214may include a transceiver-like component (e.g., a transceiver, a receiver and transmitter that are separate, and/or the like) that enables device200to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface214may permit device200to receive information from another device and/or provide information to another device. For example, communication interface214may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi® interface, a Bluetooth® interface, a Zigbee® interface, a cellular network interface, and/or the like. Device200may perform one or more processes described herein. Device200may perform these processes based on processor204executing software instructions stored by a computer-readable medium, such as memory206and/or storage component208. A computer-readable medium (e.g., a non-transitory computer-readable medium) is defined herein as a non-transitory memory device. A non-transitory memory device includes memory space located inside of a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory206and/or storage component208from another computer-readable medium or from another device via communication interface214. When executed, software instructions stored in memory206and/or storage component208may cause processor204to perform one or more processes described herein. Additionally or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, embodiments or aspects described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.2are provided as an example. In some non-limiting embodiments or aspects, device200may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.2. Additionally or alternatively, a set of components (e.g., one or more components) of device200may perform one or more functions described as being performed by another set of components of device200. Referring now toFIG.3,FIG.3is a flowchart of an exemplary process300for normalizing embeddings for cross-embedding alignment, according to some non-limiting embodiments or aspects. 
In some non-limiting embodiments or aspects, one or more of the steps of process300may be performed (e.g., completely, partially, and/or the like) by embedding normalization/alignment system102aand/or transaction service provider system102(e.g., one or more devices of transaction service provider system102). In some non-limiting embodiments or aspects, one or more of the steps of process300may be performed (e.g., completely, partially, and/or the like) by another system, another device, another group of systems, or another group of devices, separate from or including embedding normalization/alignment system102aand/or transaction service provider system102, such as embedding database102b, issuer system104(e.g., one or more devices of issuer system104), customer device106, requesting system106a(e.g., one or more devices of requesting system106a), merchant system108(e.g., one or more devices of merchant system108), acquirer system110(e.g., one or more devices of acquirer system110), device200, a computing device, a server, and/or the like. As shown inFIG.3, at step302, process300may include receiving at least one embedding set. For example, embedding normalization/alignment system102a(e.g., a server, a part of transaction service provider system102, a part of a third-party system, and/or the like) may receive at least one embedding set. In some non-limiting embodiments or aspects, embedding normalization/alignment system102amay receive the at least one embedding set from at least one of embedding database102band/or requesting system106a. For example, embedding database102bmay receive the at least one embedding set from requesting system106a, and/or embedding normalization/alignment system102amay receive the at least one embedding set from embedding database102b. Additionally or alternatively, embedding normalization/alignment system102amay receive the at least one embedding set from requesting system106a. In some non-limiting embodiments or aspects, each embedding set may include a set of embedding vectors. In some non-limiting embodiments or aspects, the at least one embedding set may include a first language embedding set and a second language embedding set. The first language embedding set may include a first set of word embedding vectors for a first language. Additionally or alternatively, the second language embedding set may include a second set of word embedding vectors for a second language. In some non-limiting embodiments or aspects, the at least one embedding set may include a first embedding set representing an entity in a first embedding space associated with a first time period and a second embedding set representing the entity in a second embedding space associated with a second time period different than the first time period. In some non-limiting embodiments or aspects, the entity may include at least one of a merchant, a customer (e.g., cardholder), an issuer, an acquirer, or a payment gateway. As shown inFIG.3, at step304, process300may include applying mean centering. For example, embedding normalization/alignment system102a(e.g., a server, a part of transaction service provider system102, a part of a third-party system, and/or the like) may apply mean centering to the at least one embedding set. In some non-limiting embodiments or aspects, applying mean centering may include determining a mean based on all embedding vectors of the set of embedding vectors. Additionally or alternatively, the mean may be subtracted from each embedding vector of the set of embedding vectors. 
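For the purpose of illustration only, the following non-limiting sketch shows how the mean centering of step304might be expressed in software, assuming an embedding set is represented as a NumPy array with one embedding vector per row; the function name and array shapes are illustrative and are not part of the disclosed system:

```python
import numpy as np

def mean_center(A: np.ndarray) -> np.ndarray:
    """Step 304 (illustrative): subtract the mean embedding vector from every
    embedding vector of the embedding set A (shape: n vectors x d dimensions)."""
    mean = A.mean(axis=0)   # mean determined based on all embedding vectors of the set
    return A - mean         # the mean is subtracted from each embedding vector

# Illustrative usage on a randomly generated "embedding set"
A = np.random.randn(1000, 300)
A_centered = mean_center(A)
assert np.allclose(A_centered.mean(axis=0), 0.0, atol=1e-9)
```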
In some non-limiting embodiments or aspects, embedding normalization/alignment system102amay apply geometric median normalization. For example, applying geometric median normalization may include determining (e.g., by embedding normalization/alignment system102a) a geometric median (x*) based on the embedding set and/or normalizing (e.g., by embedding normalization/alignment system102a) each embedding vector of the embedding set based on the geometric median (x*). For example, determining the geometric median may include inputting the embedding set into a Weiszfeld algorithm to determine the geometric median. Additionally or alternatively, normalizing each embedding vector may include replacing each respective embedding vector with a respective modified embedding vector determined based on subtracting the geometric median (x*) from the respective embedding vector to provide a respective difference and dividing the difference by a magnitude (e.g., vector magnitude) of the difference. For the purpose of illustration, applying geometric median normalization may include applying the following algorithm, where A is an embedding set, ai is the ith embedding vector of the embedding set A, x* is a geometric median, and Weiszfeld( ) is a Weiszfeld algorithm:

Algorithm 1
1: x* ← Weiszfeld(A)
2: for all ai ∈ A do ai ← (ai − x*) / ∥ai − x*∥
3: return A

For the purpose of illustration and not limitation, a Weiszfeld algorithm may include applying the following algorithm, where ai is the ith embedding vector of the embedding set A (e.g., a1 through an), x0 is a starting point, xk is the value of x for the kth iteration, T( ) is Equation 1, and xk+1 is determined based on Equation 2:

Algorithm 2
Input: Anchor points (a1, . . . , an), x0 ∈ R^d, and ε > 0
1: k ← 0
2: while True do
3:   xk+1 ← T(xk)
4:   if ∥xk+1 − xk∥2 < ε then
5:     return xk+1
6:   k ← k + 1

Equation 1:

$$T(x) = \begin{cases} \tilde{T}(x) = \dfrac{\sum_{i=1}^{n} \|a_i - x\|^{-1}\, a_i}{\sum_{i=1}^{n} \|a_i - x\|^{-1}} & \text{if } x \notin \{a_1, \ldots, a_n\} \\[2ex] a_i & \text{if } x = a_i,\ i = 1, \ldots, n \end{cases}$$

Equation 2:

$$x_{k+1} = T(x_k), \quad k \in \mathbb{N}$$

In some non-limiting embodiments or aspects, applying geometric median normalization may be in addition to or in lieu of applying mean centering. As shown inFIG.3, at step306, process300may include applying spectral normalization. For example, embedding normalization/alignment system102a(e.g., a server, a part of transaction service provider system102, a part of a third-party system, and/or the like) may apply spectral normalization to the at least one embedding set. In some non-limiting embodiments or aspects, applying spectral normalization to the at least one embedding set may include decomposing the at least one embedding set to provide a left singular vector, a right singular vector, and a diagonal matrix. For example, decomposing the at least one embedding set may include performing singular value decomposition (SVD) on the at least one embedding set. In some non-limiting embodiments or aspects, an average singular value of the at least one embedding set may be determined. For example, determining the average singular value may include determining a square root of an average squared singular value. In some non-limiting embodiments or aspects, for each respective singular value of the diagonal matrix, whether the respective singular value is greater than a configurable multiple of the average singular value may be determined. 
In some non-limiting embodiments or aspects, if a respective singular value is greater than the configurable multiple of the average singular value, a respective substitute singular value may be determined based on a quotient of the respective singular value divided by the configurable multiple of the average singular value. Additionally or alternatively, if a respective singular value is not greater than the configurable multiple of the average singular value, the respective substitute singular value may be determined to be a configurable value (e.g., 1, a predetermined integer, a predetermined value, and/or the like). In some non-limiting embodiments or aspects, a substitute diagonal matrix may include the respective substitute singular value for each respective singular value of the diagonal matrix. In some non-limiting embodiments or aspects, the at least one embedding set may be replaced with a product of the at least one embedding set, the right singular vector, and an inverse of the substitute diagonal matrix. In some non-limiting embodiments or aspects, for the purpose of illustration, applying spectral normalization may include applying the following algorithm, where A is an embedding set, svd( ) is a singular value decomposition function, U is a left singular vector, V is a right singular vector, Σ is a diagonal matrix, T is the transpose operator, η is an average singular value, D is a (substitute) diagonal matrix, d is the dimension of the embedding vectors, ∥A∥F is the Frobenius norm of embedding set A, and β is a parameter (e.g., hyperparameter, selectable parameter, and/or the like) used to determine the configurable multiple of the average singular value:

Algorithm 3
1: Compute svd(A) = UΣV^T; let D ∈ R^(d×d) be a diagonal matrix
2: Compute η = √(∥A∥F²/d), where d is the dimension of the word embedding
3: for i = 1, . . . , d do
4:   if Σii > βη then Dii ← Σii/(βη)
5:   else Dii ← 1
6: return AVD^−1

For example, if a respective singular value (e.g., Σii) is greater than the configurable multiple of the average singular value (e.g., βη), a respective substitute singular value (e.g., Dii) may be determined based on a quotient of the respective singular value divided by the configurable multiple of the average singular value (e.g., Σii/(βη)). Additionally or alternatively, if a respective singular value (e.g., Σii) is not greater than the configurable multiple of the average singular value (e.g., βη), the respective substitute singular value (e.g., Dii) may be determined to be a configurable value (e.g., 1, a predetermined integer, a predetermined value, and/or the like). As shown inFIG.3, at step308, process300may include applying length normalization. For example, embedding normalization/alignment system102a(e.g., a server, a part of transaction service provider system102, a part of a third-party system, and/or the like) may apply length normalization to the at least one embedding set. In some non-limiting embodiments or aspects, applying length normalization may include adjusting each embedding vector of the set of embedding vectors to have a 2-norm (e.g., Euclidean norm) of 1. In some non-limiting embodiments or aspects, as shown inFIG.3, steps304,306, and308may be repeated for a configurable number of iterations. 
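Before turning to the iterative combination of these steps, and for the purpose of illustration only, the following non-limiting sketch expresses Algorithms 1-3 and the length normalization of step308in software, assuming NumPy arrays with one embedding vector per row and at least as many vectors as dimensions; the maximum iteration count, the choice of starting point, the default β value, and the function names are illustrative assumptions rather than requirements of the disclosed techniques:

```python
import numpy as np

def weiszfeld(A: np.ndarray, eps: float = 1e-8, max_iter: int = 1000) -> np.ndarray:
    """Algorithm 2 / Equations 1-2 (illustrative): Weiszfeld iteration for the geometric
    median of the rows of A. The mean is used as the starting point x0, and max_iter is
    an added safeguard not present in the algorithm as stated above."""
    x = A.mean(axis=0)
    for _ in range(max_iter):
        dist = np.linalg.norm(A - x, axis=1)
        if np.any(dist == 0):                 # T(x) = a_i when x coincides with an anchor point
            return x
        w = 1.0 / dist
        x_next = (w[:, None] * A).sum(axis=0) / w.sum()
        if np.linalg.norm(x_next - x) < eps:  # stop when successive iterates are close
            return x_next
        x = x_next
    return x

def geometric_median_normalize(A: np.ndarray) -> np.ndarray:
    """Algorithm 1 (illustrative): subtract the geometric median from each embedding
    vector and divide each difference by its magnitude."""
    x_star = weiszfeld(A)
    diff = A - x_star
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

def spectral_normalize(A: np.ndarray, beta: float = 2.0) -> np.ndarray:
    """Algorithm 3 (illustrative): soften singular values larger than beta times the
    average singular value eta, then replace A with A V D^-1. Assumes A has at least
    as many rows (vectors) as columns (dimensions)."""
    n, d = A.shape
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)   # svd(A) = U Sigma V^T
    eta = np.sqrt(np.linalg.norm(A, 'fro') ** 2 / d)       # eta = sqrt(||A||_F^2 / d)
    D = np.ones(d)
    large = sigma > beta * eta
    D[large] = sigma[large] / (beta * eta)                 # D_ii = Sigma_ii / (beta*eta), else 1
    return A @ Vt.T @ np.diag(1.0 / D)                     # A V D^-1

def length_normalize(A: np.ndarray) -> np.ndarray:
    """Step 308 (illustrative): scale each embedding vector to a Euclidean (2-) norm of 1."""
    return A / np.linalg.norm(A, axis=1, keepdims=True)
```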
For example, embedding normalization/alignment system102a(e.g., a server, a part of transaction service provider system102, a part of a third-party system, and/or the like) may iteratively repeat applying mean centering, applying spectral normalization, and applying length normalization to the at least one embedding set for a configurable number of iterations. In some non-limiting embodiments or aspects, for the purpose of illustration, iteratively repeating may include applying the following algorithm, where m is a configurable number of iterations, A is an embedding set, Center is mean centering, SpecNorm is spectral normalization, and Unit Length Normalization is length normalization, as described herein:

Algorithm 4
1: for m steps do
2:   A ← Center(A)
3:   A ← SpecNorm(A)
4:   A ← Unit length normalization of A
5: return A

In some non-limiting embodiments or aspects, the parameters may be tuned. For example, the parameter β used to determine the configurable multiple of the average singular value and/or the parameter m for the configurable number of iterations may be tuned (e.g., by embedding normalization/alignment system102a) to at least one of avoid overfitting, improve performance, any combination thereof, and/or the like. For the purpose of illustration, Table 1 shows the mean average precision (MAP) achieved using different values of the parameter β (e.g., 1, 2, 3, 4, and 5) and the parameter m (e.g., 1, 2, 3, 4, and 5) for Procrustes alignment based on ten exemplary language pairs (e.g., English to another language or another language to English):

TABLE 1
         m = 1    m = 2    m = 3    m = 4    m = 5
β = 1    0.363    0.340    0.328    0.322    0.317
β = 2    0.385    0.386    0.386    0.386    0.386
β = 3    0.381    0.384    0.384    0.384    0.384
β = 4    0.381    0.382    0.382    0.382    0.382
β = 5    0.380    0.381    0.381    0.381    0.381

For the purpose of illustration, Table 2 shows the average Spearman rank coefficient score for a monolingual word similarity task using no normalization (e.g., none) and the disclosed techniques with different values of the parameter β and the parameter m:

TABLE 2
None     β = 2, m = 2    β = 2, m = 3    β = 2, m = 4    β = 2, m = 5
0.651    0.67077         0.67101         0.67108         0.67111

As shown inFIG.3, at step310, process300may include aligning embedding sets. For example, the at least one embedding set may include a first embedding set and a second embedding set, and embedding normalization/alignment system102a(e.g., a server, a part of transaction service provider system102, a part of a third-party system, and/or the like) may align the first embedding set with the second embedding set. In some non-limiting embodiments or aspects, aligning embedding sets may include applying at least one cross-lingual word embeddings (CLWE) alignment model. For example, the CLWE alignment model(s) may include at least one of a Procrustes model, a Bootstrap Procrustes (PROC-B) model, a multilingual unsupervised and supervised embeddings (MUSE) model, a canonical correlation analysis (CCA) model, a discriminative latent variable (DLV) model, a ranking-based optimization model, a cross-domain similarity local scaling (CSLS) model, a relaxed cross-domain similarity local scaling (RCSLS) model, a VECMAP model, a supervised alignment model, an unsupervised alignment model, a semi-supervised alignment model, any combination thereof, and/or the like. In some non-limiting embodiments or aspects, aligning embedding sets may include applying at least one CLWE alignment model even if the embedding sets do not represent languages. 
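For the purpose of illustration only, the following non-limiting sketch combines the per-step functions from the preceding sketch into the iterative pipeline of Algorithm 4 and shows one exemplary alignment model for step310, an orthogonal Procrustes mapping computed from seed pairs of rows assumed to correspond (e.g., a hypothetical seed dictionary); the choice of Procrustes, the default values of β and m, and the function names are illustrative, and any of the alignment models listed above could be applied instead:

```python
import numpy as np

def iterative_normalize(A: np.ndarray, beta: float = 2.0, m: int = 2) -> np.ndarray:
    """Algorithm 4 (illustrative): m rounds of mean centering, spectral normalization,
    and unit length normalization (spectral_normalize is from the preceding sketch)."""
    for _ in range(m):
        A = A - A.mean(axis=0)                               # step 304: mean centering
        A = spectral_normalize(A, beta=beta)                 # step 306: spectral normalization
        A = A / np.linalg.norm(A, axis=1, keepdims=True)     # step 308: length normalization
    return A

def procrustes_map(X_seed: np.ndarray, Y_seed: np.ndarray) -> np.ndarray:
    """One exemplary alignment model for step 310 (illustrative): the orthogonal matrix W
    minimizing ||X_seed W - Y_seed||_F, where corresponding rows are assumed to be
    translation pairs from a seed dictionary."""
    U, _, Vt = np.linalg.svd(X_seed.T @ Y_seed)
    return U @ Vt

# Illustrative usage: normalize two embedding sets, then map the first into the space of
# the second using ten hypothetical seed pairs assumed to sit in the first ten rows.
X = iterative_normalize(np.random.randn(2000, 300))
Y = iterative_normalize(np.random.randn(2000, 300))
W = procrustes_map(X[:10], Y[:10])
X_aligned = X @ W
```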
For example, the at least one embedding set may include a first embedding set representing an entity (e.g., a merchant, a customer/cardholder, an issuer, an acquirer, a payment gateway, or the like) in a first embedding space associated with a first time period and a second embedding set representing the entity in a second embedding space associated with a second time period different than the first time period. In some non-limiting embodiments or aspects, one or more CLWE alignment models (e.g., one or more of the exemplary CLWE alignment models listed above) may be used (e.g., by embedding normalization/alignment system102a) to align such non-language-based embedding sets, e.g., by treating each embedding set as if it were a language and treating each embedding vector of each embedding set as if it were a word of the respective language. Referring now toFIGS.4A-4C,FIGS.4A-4Care bar graphs400a,400b,400cshowing performance of exemplary implementations of the process ofFIG.3, according to some non-limiting embodiments or aspects. As shown inFIG.4A, the vertical axis may represent condition number, and the horizontal axis may include categories for the following four exemplary languages: English (EN), German (DE), Hindi (HI), and Japanese (JA). For each exemplary language, the condition number is represented by a respective bar for each of the following pre-processing techniques: no normalization401(e.g., None), iterative mean centering and spectral normalization and length normalization402(e.g., I−C+SN+L, which may be shorthand for the iterative combination of mean centering (C), spectral normalization (SN), and length normalization (L), as described herein), PCA removal403(e.g., PR), mean centering and length normalization404(e.g., C+L, which may be a single round/not iterative), iterative mean centering and length normalization405(e.g., I−C+L, which may be multiple (e.g., 5) rounds of iteration), and geometric median406(e.g., GeoMedian). Notably, the condition number for iterative mean centering and spectral normalization and length normalization402(e.g., I−C+SN+L) is less than each of the other techniques for all four exemplary languages, demonstrating improved performance. As shown inFIG.4B, the vertical axis may represent numeric rank, and the horizontal axis may include categories for the following four exemplary languages: English (EN), German (DE), Hindi (HI), and Japanese (JA). For each exemplary language, the numeric rank is represented by a respective bar for each of the following pre-processing techniques: no normalization411(e.g., None), iterative mean centering and spectral normalization and length normalization412(e.g., I−C+SN+L, which may be shorthand for the iterative combination of mean centering (C), spectral normalization (SN), and length normalization (L), as described herein), PCA removal413(e.g., PR), mean centering and length normalization414(e.g., C+L, which may be a single round/not iterative), iterative mean centering and length normalization415(e.g., I−C+L, which may be multiple (e.g., 5) rounds of iteration), and geometric median416(e.g., GeoMedian). Notably, the numeric rank for iterative mean centering and spectral normalization and length normalization412(e.g., I−C+SN+L) is greater than each of the other techniques for all four exemplary languages, demonstrating improved performance. 
As shown inFIG.4C, the vertical axis may represent joint condition number, and the horizontal axis may include categories for the following five exemplary language pairs (e.g., for translation from a first language to a second language): English to Bulgarian (EN-BG), English to German (EN-DE), English to Finnish (EN-FI), English to Hindi (EN-HI), and English to Korean (EN-KO). For each exemplary language pair, the joint condition number is represented by a respective bar for each of the following pre-processing techniques: no normalization431(e.g., None) and iterative mean centering and spectral normalization and length normalization432(e.g., I−C+SN+L). Notably, the joint condition number for iterative mean centering and spectral normalization and length normalization432(e.g., I−C+SN+L) is decreased compared to no normalization, demonstrating improved performance. Referring now toFIGS.5A and5B,FIGS.5A and5Bare line graphs500a,500bshowing performance of exemplary implementations of the process ofFIG.3, according to some non-limiting embodiments or aspects. As shown in each ofFIGS.5A and5B, the vertical axis may represent singular values, and the horizontal axis may represent the number of singular values. Notably, the scale of the vertical axis for graph500binFIG.5Bis narrower than the scale of the vertical axis for graph500ainFIG.5A, and the maximum value for the vertical axis for graph500binFIG.5Bis less than the maximum value for the vertical axis for graph500ainFIG.5A. As shown inFIG.5A, there are lines for singular values with respect to the number of singular values without using a normalization technique (e.g., None) for each of the following exemplary languages: Bulgarian (BG)501, German (DE)502, English (EN)503, Finnish (FI)504, Hindi (HI)505, and Korean (KO)506. For each of these lines, the singular values are steeply decaying as the number of singular values increases. As such, aligning these languages without using a normalization technique would likely result in forced alignment based on the top singular values due to the clustering of words, whether or not the words in those clusters actually aligned. As shown inFIG.5B, there are lines for singular values with respect to the number of singular values after applying iterative mean centering and spectral normalization and length normalization (e.g., I−C+SN+L) for each of the following exemplary languages: Bulgarian (BG)511, German (DE)512, English (EN)513, Finnish (FI)514, Hindi (HI)515, and Korean (KO)516. For each of these lines, the singular values are relatively uniform as the number of singular values increases. As such, an alignment model (e.g., CLWE alignment model) would have more freedom to align actually matching words without the burden of clustering described above with respect to not using a normalization technique. 
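For the purpose of illustration only, the following non-limiting sketch computes the spectral properties discussed in connection with FIGS. 4A-5B for a synthetic embedding set before and after the iterative normalization of the preceding sketches; the definitions used here (condition number as the ratio of largest to smallest singular value, and numeric rank as the squared Frobenius norm divided by the squared largest singular value) are common conventions assumed for this sketch rather than definitions given in the present disclosure:

```python
import numpy as np

def spectrum_stats(A: np.ndarray):
    """Singular value spectrum, condition number, and numeric rank of an embedding set
    (illustrative definitions; see the lead-in above)."""
    sigma = np.linalg.svd(A, compute_uv=False)
    condition_number = sigma.max() / sigma.min()
    numeric_rank = (sigma ** 2).sum() / sigma.max() ** 2
    return sigma, condition_number, numeric_rank

# Synthetic embedding set with a few dominant directions (steeply decaying spectrum)
rng = np.random.default_rng(0)
A = rng.normal(size=(5000, 300)) @ np.diag(np.linspace(5.0, 0.1, 300))

_, cond_before, rank_before = spectrum_stats(A)
_, cond_after, rank_after = spectrum_stats(iterative_normalize(A, beta=2.0, m=2))
# Expected direction of change, mirroring FIGS. 4A-4B and 5A-5B: the condition number
# decreases, the numeric rank increases, and the singular values become more uniform
# after normalization.
```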
For the purpose of illustration, Table 3 shows the MAP achieved using different pre-processing techniques (no normalization (None), PCA removal (PR), geometric median (GeoMedian), mean centering and length normalization (C+L), iterative mean centering and length normalization (I−C+L, 5 iterations), mean centering and spectral normalization and length normalization (C+SN+L), and iterative mean centering and spectral normalization and length normalization (I−C+SN+L, 5 iterations)) for bilingual lexicon induction (BLI) based on eighteen exemplary language pairs (e.g., English to and from each of the following: Bulgarian (BG), Catalan (CA), Czech (CS), German (DE), Spanish (ES), French (FR), Korean (KO), Thai (TH), and Chinese (ZH)) using three different CLWE alignment models (CCA, PROC, and PROC-B):

TABLE 3
                 English to Other Languages       Other Languages to English
Normalization    CCA      PROC     PROC-B         CCA      PROC     PROC-B
None             0.358    0.365    0.377          0.398    0.399    0.405
PR               0.394    0.391    0.404          0.434    0.430    0.442
GeoMedian        0.393    0.391    0.400          0.433    0.432    0.440
C + L            0.393    0.394    0.408          0.439    0.437    0.445
I-C + L          0.394    0.395    0.410          0.439    0.438    0.448
C + SN + L       0.394    0.396    0.413          0.444    0.444    0.458
I-C + SN + L     0.396    0.398    0.414          0.445    0.446    0.461

For the purpose of illustration, Table 4 shows the MAP achieved using different pre-processing techniques (no normalization (None) and iterative mean centering and spectral normalization and length normalization (ICSNL)) for BLI based on 28 language pairs using five different CLWE alignment models (CCA, PROC, PROC-B, DLV, and RCSLS) for dictionary sizes of 1,000 (1 K), 3,000 (3 K), and 5,000 (5 K) words:

TABLE 4
        CCA              PROC             PROC-B           DLV              RCSLS
Dict.   None    ICSNL    None    ICSNL    None    ICSNL    None    ICSNL    None    ICSNL
1K      .289    .314     .299    .326     .379    .407     .289    .332     .331    .331
3K      .378    .401     .384    .408     .398    .415     .381    .429     .415    .427
5K      .400    .423     .405    .429     —       —        .403    .452     .437    .460

Although the disclosed subject matter has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments or aspects, it is to be understood that such detail is solely for that purpose and that the disclosed subject matter is not limited to the disclosed embodiments or aspects, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the presently disclosed subject matter contemplates that, to the extent possible, one or more features of any embodiment or aspect can be combined with one or more features of any other embodiment or aspect.
11861325
DETAILED DESCRIPTION Floating point numbers are defined by various formats. Floating point execution units are often repurposed to perform operations on particular floating point number formats. As the precision of floating point numbers is increased, additional clock cycles may be used to complete operations. The additional clock cycles typically underutilize existing data paths associated with the respective floating point number formats and aggravate critical path execution processing time. Floating point execution units associated with different floating point formats may be repurposed to process such increased precision formats in a manner that may reduce the clock cycles, power, and/or hardware required. A critical path includes a group of functional operations and clock cycle requirements to obtain a desired result. Each functional operation on the critical path often increases the processing time for an operation or process to complete. Precision for floating point units relates to the amount of detail encapsulated in the binary number. For example, precision thresholds may include single precision (32 bits), double precision (64 bits), quadruple precision (128 bits) or any other number of associated bits. Normalization of floating point numbers arranges the fractional portion of the number to remove the leading zeros of the floating point format and adjusts the exponent accordingly. Floating point numbers with increased precision or precision greater than the normalizer bus bit width typically require additional normalization cycles to properly normalize the extended precision number. For example, a double precision number may be normalized on a single precision normalizer. As such, additional clock cycles may be required for a residue check to be completed. A quadruple precision number may be normalized on a single or double precision normalizer. The floating point number may be a binary floating point number or a hexadecimal floating point number. Embodiments described herein provide operations of a floating point unit. It should be appreciated that any arithmetic unit, floating point or otherwise, may implement teachings described herein or portions thereof. Circuitry refers to any combination of logic, wires, fundamental components, transistors, diodes, latches, switches, flip-flops, or other implements, that may be arranged to carry out the intended output or disclosed operations. It should be appreciated that the term register may not refer to memory retention and may merely indicate data or signal passthrough. A rounder or rounder circuitry may receive normalizer output and round the received floating point number to the correct number of significant digits. Extended precision operations may perform rounding in multiple clock cycles. That is, the floating point unit may be operated according to a clock or pulse indication, directing the floating point unit to process the next set of information. The clock is typically defined by any oscillator or oscillating signal. As such, portions of the floating point number fraction are often processed by the rounder circuitry according to discrete clock cycles to provide rounding result output from the floating point unit. Result registers may be preconfigured to receive rounder circuitry output in the form of predetermined portions of the floating point number. Clock-based operations may be performed according to any number of clock cycles performed consecutively or intermittently. 
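For the purpose of illustration only, the following non-limiting sketch models hexadecimal (radix-16) normalization on an integer-coded fraction, purely to illustrate the concept of shifting out leading zero digits and adjusting the exponent; the 56-bit fraction width, the handling of a zero fraction, and the function name are illustrative assumptions and do not describe the data path ofFIG.1:

```python
def normalize_hex_fraction(fraction: int, exponent: int, frac_bits: int = 56):
    """Shift out leading zero hexadecimal digits (4-bit nibbles) of the fraction and
    decrement the radix-16 exponent once per nibble shifted (illustrative sketch)."""
    if fraction == 0:
        return 0, 0                                   # treat a zero fraction as a true zero
    mask = (1 << frac_bits) - 1
    while (fraction >> (frac_bits - 4)) == 0:         # leading nibble is zero
        fraction = (fraction << 4) & mask
        exponent -= 1                                 # exponent counts powers of 16
    return fraction, exponent

# Example: two leading zero nibbles are shifted out and the exponent drops by two.
frac, exp = normalize_hex_fraction(0x00ABCDEF012345, 10)
assert (frac, exp) == (0xABCDEF01234500, 8)
```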
Referring toFIG.1, a floating point unit100is shown in accordance with one or more embodiments of the present invention. Floating point unit100receives a first operand102and a second operand104. The floating point unit100may include an adder106or circuitry to perform another arithmetic operation on the first operand102and the second operand104. It should be appreciated that any number of arithmetic operations may be performed on the first operand102and the second operand104. Any number of additional operands may further be used. The hexadecimal floating point normalizer circuitry108shown inFIG.1normalizes a decimal position associated with the floating point number received from adder106. The hexadecimal normalizer circuitry108may be a double precision hexadecimal floating point normalizer or a hexadecimal floating point normalizer circuitry and include an input register107and a result register109. The normalized hexadecimal result register109is sized to output a hexadecimal result. For extended floating point numbers such as quadruple precision numbers, the hexadecimal normalizer circuitry108shown inFIG.1outputs a normalized fraction portion of the quadruple precision number in accordance with one or more embodiments of the present invention. The result register109is sized to output a normalized hexadecimal result, where the normalized hexadecimal result is the fraction portion of a floating point number or a portion of the fraction portion. As an example, during a first clock cycle the result register109outputs a first portion of the fraction portion of the floating point number normalized by the hexadecimal floating point normalizer circuitry108in accordance with one or more embodiments. The result register109may output to a bus110having a bit width of 57 bits for the typical 56-bit hexadecimal fraction plus a leading zero too large bit. The 56-bit hexadecimal fraction may be split into a first segment and a second segment. The first segment may be an 8-bit segment and the second segment may be a 49-bit segment. The bus110connects the result register109of the hexadecimal floating point normalizer circuitry108to the hexadecimal floating point rounder circuitry112input register111. Input register111has a bit width, or accepted amount of bits, similar to the bus110. The hexadecimal floating point rounder circuitry112may be a double precision hexadecimal floating point rounder that may round the received fraction portion from bus110to the required amount. The hexadecimal floating point rounder circuitry112computes the rounded result and outputs to output register113. The output register113is associated with a result bus114that provides the hexadecimal floating point rounder circuitry112result to result circuitry116. Result circuitry116may be used by a processor or other circuitry to use or display the calculated floating point number. The result circuitry116may have predetermined bit width inputs. For example, the result circuitry116may anticipate rounder outputs to have a 48-bit width during the first cycle and a 64-bit width during the second cycle. That is, the output register113may have a 64-bit width configured to output a 48-bit result and a 64-bit result depending on the clock cycle. The floating point unit100shown inFIG.1includes latch circuitry120having a bit width sized to retain the first segment of the first portion of the floating point number in accordance with one or more embodiments of the present invention. 
The first segment may be stored as buffered data or defined as buffered data with registers of the latch circuitry120. The latch circuitry120may include a supply bus118having a bit width sized to supply the first segment to the latch circuitry120. The supply bus118may be disposed before or after the hexadecimal floating point rounder circuitry112. The bit width of the latch circuitry may be eight bits for double precision hexadecimal floating point data paths that are processing quadruple precision floating point numbers. During the first processing cycle the first segment is stored in the latch circuitry120and the hexadecimal floating point rounder circuitry112provides the rounded second segment associated with the first portion in output register113. During the second processing cycle the latch circuitry120releases the stored first segment as buffered data to the input register111of the hexadecimal floating point rounder circuitry112. The input register111of the hexadecimal floating point rounder circuitry112receives the second portion of the fraction portion of the floating point number from bus110and combines the second portion with the buffered data such that the hexadecimal floating point rounder circuitry112outputs the 64-bit second cycle result to the result circuitry116. As such, through two clock cycles of the floating point unit, the result circuitry116first receives the 48-bit fraction rounder result from result bus114and then receives the 64-bit rounder result from bus114. Referring toFIG.2A, a double extended precision hexadecimal floating point number200is shown. The double extended precision hexadecimal floating point number200includes an extended precision sign bit202. The double extended precision hexadecimal floating point number200includes extended precision hexadecimal exponent bits204having a bit width of seven bits. The extended precision hexadecimal floating point number200includes extended precision hexadecimal fraction bits206having a bit width of 56 bits. Referring toFIG.2B, a quadruple precision floating point number210is shown. The quadruple precision floating point number210may be a quadruple precision binary floating point number or another binary floating point number. The quadruple precision floating point number210includes a quadruple precision sign bit212. The quadruple precision floating point number210includes quadruple precision exponent bits214having a bit width of fifteen bits. The quadruple precision floating point number210includes 112 quadruple precision fraction bits216. The fraction portion216may include a first portion230and a second portion232. The fraction portion216may be split in half to form the first portion230and the second portion232. That is, the first portion230may include 56 bits and the second portion232may include 56 bits. The first portion230may include an additional leading zero too large bit or one bit leading zero too large flag, defining an anticipated leading zero control signal, making the first portion230a total of 57 bits. The leading zero too large bit may be used as an indication of the hexadecimal normalizer circuitry108control instructions used to normalize the floating point number. The second portion232may include an additional leading zero too large bit, making the second portion232a total of 57 bits. As such, the quadruple precision floating point number210may be processed by the hexadecimal floating point rounder circuitry112in two clock cycles. The first portion230defines a first segment234and a second segment236. 
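For the purpose of illustration only, the following non-limiting sketch divides a 112-bit quadruple precision fraction216into the portions and segments described above; the assignment of the least significant eight bits of the first portion230to the first segment234is an assumption made so that the per-cycle results recombine contiguously (the groupings may be rearranged, as noted below), and the leading zero too large bit carried on the 57-bit bus is omitted for simplicity:

```python
def split_quad_fraction(fraction112: int):
    """Divide a 112-bit quadruple precision fraction into a 56-bit first portion and a
    56-bit second portion, and divide the first portion into an 8-bit first segment
    (to be latched) and a 48-bit second segment (to be rounded first). Illustrative
    bit assignments; the leading zero too large flag is not modeled."""
    assert 0 <= fraction112 < (1 << 112)
    first_portion = fraction112 >> 56                  # upper 56 bits, handled on cycle 1
    second_portion = fraction112 & ((1 << 56) - 1)     # lower 56 bits, handled on cycle 2
    second_segment = first_portion >> 8                # 48 bits sent to the rounder on cycle 1
    first_segment = first_portion & 0xFF               # 8 bits stored in the latch circuitry
    return first_portion, second_portion, first_segment, second_segment
```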
The first segment234is stored in the latch circuitry120during a first cycle of the floating point unit100in accordance with one or more embodiments of the present invention. The first segment234may be eight bits. It should be appreciated that first portion230, second portion232, first segment234, and second segment236are designations of bits or bit groupings. The groupings may be in any order, out of order, rearranged, or interchanged. Referring toFIG.3, a method300is shown in accordance with portions of one or more embodiments of the present invention. The method300begins in block302. It should be appreciated that any of the blocks of method300may be omitted, repeated, rearranged, and any of the blocks of method300may be completed in sequence or in parallel. In block304, hexadecimal floating point normalizer circuitry108of the floating point unit100receives a floating point result. In block306, the floating point result is analyzed to determine whether the floating point result is a hexadecimal floating point number200or a binary floating point number210. If the result is not a hexadecimal floating point number200, the standard data path for that type may be used in block307. The floating point result may be analyzed by the hexadecimal normalizer circuitry108or another processor and circuitry associated with the floating point unit100. If the floating point result is a binary floating point number210, the floating point unit100may receive the fraction portion216of the floating point number210in block308. At block310, the fraction portion216is portioned or divided according to a bit width of bus110. The hexadecimal normalizer circuitry108may portion or divide the fraction portion216during normalization or the fraction portion216may be multiplexed or otherwise divided or portioned with additional circuitry. The bit width of bus110may be 57 bits sized as necessary to accommodate the 57 bit fraction portion206of hexadecimal floating point numbers200. As such, the fraction portion216may be separated for example into a first portion230of 57 bits and a second portion232of 57 bits. The hexadecimal floating point normalizer circuitry108outputs the normalizer result through result register109and bus110based on the first portion230in block312. A first segment234of the hexadecimal floating point normalizer circuitry108result is stored in latch circuitry120in block314. The second segment236may be rounded according to hexadecimal floating point rounder circuitry112in block316. The first rounder result may be outputted based on the first normalizer result. That is, the first rounder result (48 bits) based on the second segment236of the first portion230may be sent to result circuitry116during a first clock cycle or according to a first clock cycle of the floating point unit. In block318, the second normalizer result is outputted from result register109to bus110. The second normalizer result may be based on the second portion232. In block320, the hexadecimal floating point rounder circuitry112may output a second rounder result based on the second normalizer result and the buffered data stored in the latch circuitry120according to the first segment234. In block322, the first rounder result and the second rounder result, based on the first portion230and the second portion232may be combined to form the quadruple precision floating point number originally desired by the combination of operand102and operand104. 
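For the purpose of illustration only, the following non-limiting sketch is a purely schematic software model of the two-cycle flow of blocks312-322, built on split_quad_fraction from the preceding sketch; rounding is treated as the identity so that the sketch only shows how the 48-bit first cycle result and the 64-bit second cycle result recombine into the original 112-bit fraction under the bit-ordering assumption stated above:

```python
def two_cycle_round(fraction112: int):
    """Schematic model of the two-cycle data path (illustrative): cycle 1 emits a 48-bit
    result and latches 8 bits; cycle 2 emits a 64-bit result formed from the latched bits
    and the second 56-bit portion. Actual rounding is omitted for clarity."""
    _, second_portion, first_segment, second_segment = split_quad_fraction(fraction112)

    # Cycle 1: the rounder outputs the 48-bit second segment; the first segment is latched.
    cycle1_result = second_segment
    latch = first_segment

    # Cycle 2: the latched 8 bits are combined with the 56-bit second portion (64 bits total).
    cycle2_result = (latch << 56) | second_portion

    # The result circuitry recombines the per-cycle outputs into the 112-bit fraction.
    assert ((cycle1_result << 64) | cycle2_result) == fraction112
    return cycle1_result, cycle2_result

# Example: an arbitrary 112-bit fraction is reassembled exactly from the per-cycle results.
c1, c2 = two_cycle_round((0xABCDEF << 88) | 0x12345)
```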
The clock cycles, hardware, and/or power required to normalize and round increased precision floating point numbers by the floating point unit100may be reduced while maintaining anticipated hexadecimal floating point rounder circuitry112output by storing portions or segments of the floating point number. It should be appreciated that any number of clock cycles, intermediate or otherwise, may be implemented to provide similar results. The number formats discussed and disclosed may be scaled according to any necessary precision. As an example, the quadruple precision floating point number210may be an octuplet precision floating point number and the double extended precision hexadecimal floating point number200may be a quadruple extended precision hexadecimal floating point number. As such, the associated circuitry may similarly scale. Various embodiments of the invention are described herein with reference to the related drawings. Alternative embodiments of the invention can be devised without departing from the scope of this invention. Various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the present invention is not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship. Moreover, the various tasks and process steps described herein can be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein. In an exemplary embodiment, the methods described herein can be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc. Additionally, the term “exemplary” is used herein to mean “serving as an example, instance or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms “at least one” and “one or more” may be understood to include any integer number greater than or equal to one, i.e. one, two, three, four, etc. The terms “a plurality” may be understood to include any integer number greater than or equal to two, i.e. two, three, four, five, etc. The term “connection” may include both an indirect “connection” and a direct “connection.” The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. 
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The instructions disclosed herein, which may execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
11861326
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples. DETAILED DESCRIPTION Various features are described hereinafter with reference to the figures. It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated, or if not so explicitly described. Techniques for flow control between non-volatile memory storage and remote hosts over a fabric are described. The techniques described herein virtualize submission and completion queues backed by limited memory resources. In an example, the submission and completion queues are virtualized and backed by shared first-in-first-out (FIFO) that provides a combined resource pool. This technique removes the need of arbitration between the submission/completion queues as the submission/completion queue entries are processed in a first-come-first-serve basis. These and further aspects are discussed below with respect to the drawings. FIG.1is a block diagram depicting a computer system100according to an example. The computer system100includes one or more remote hosts102, a front end fabric104, an NVMEoF controller105(also referred to as “controller105”), a back end fabric108, and one or more nonvolatile memory (NVM) subsystems110. For purposed of clarity by example, a single NVMEoF controller105is described. However, it is to be understood that the computer system100can include a plurality of NVMEoF controllers105. The remote hosts102are coupled to the controller105through the front end fabric104. The front end fabric104can employ an Ethernet data link layer or InfiniBand® (IB) data link layer. The remote hosts102can communicate with the controller105over the front-end fabric104using a remote direct memory access (RDMA) transport, such as RDMA over Converged Ethernet (RoCE), IB, Internet Wide Area RDMA (iWARP), or the like. The controller105is coupled to the NVM subsystems110through the back-end fabric108. The back-end fabric108can employ a different transport than the front-end fabric104. In an example, the back-end fabric108is a Peripheral Component Interconnect (PCI) Express® (PCIe) fabric. The controller105provides an interface between the remote hosts102and the NVM subsystems110. The controller105is coupled to the NVM subsystems110through the back end fabric108. The NVM subsystem110is configured to persistently store data using a NVM technology, such as solid state disk (SSD) storage technology. In an example, the NVM subsystem110includes a register interface compliant with an NVM Express® (NVMe) specification, such as NVM Express rev. 1.2. The controller105, the back-end fabric108, and the NVM subsystems110are collectively referred to as a target system150. 
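As a rough orientation aid only, the topology just described may be modeled in a few lines of Python; the class and field names below are illustrative and are not taken from any specification.

# Minimal illustrative model of the FIG. 1 topology: remote hosts reach NVM subsystems
# through an NVMEoF controller bridging an RDMA front-end fabric and a PCIe back-end fabric.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RemoteHost:
    host_id: int
    transport: str = "RoCE"          # or IB / iWARP, per the description

@dataclass
class NvmSubsystem:
    subsystem_id: int
    interface: str = "NVMe 1.2 register interface"

@dataclass
class TargetSystem:
    controller_id: int
    back_end: str = "PCIe"
    subsystems: List[NvmSubsystem] = field(default_factory=list)

hosts = [RemoteHost(i) for i in range(4)]
target = TargetSystem(controller_id=0, subsystems=[NvmSubsystem(0)])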
The remote hosts102can issue commands targeting the target system150using NVMe layered over RDMA transport. The controller105receives the commands and provides an interface between the different transports used by the front-end and back-end fabrics104and108. FIG.2is a block diagram depicting a remote host102according to an example. The remote host102includes a central processing unit (CPU)202, one or more support circuits204, an input/output (IO) interface206, and a memory210coupled to a bus224. The CPU202can include one or more microprocessors. The support circuits204can include conventional cache, power supplies, clock circuits, data registers, IO interfaces, and the like. The IO interface206can be coupled to various IO devices, which can include conventional keyboard, mouse, and the like. The IO interface206can also include a network interface card (NIC)208configured for communicating through the front end fabric104. As such, the NIC208is configured to communicate using an RDMA transport. The memory210may store all or portions of one or more programs and/or data to implement aspects of the remote host102described herein. The memory210can include one or more of random access memory (RAM), read only memory (ROM), magnetic read/write memory, FLASH memory, solid state memory, or the like as well as combinations thereof. The memory210can store an operating system (OS)212, buffers218(sometimes referred to as remote buffers218), scatter gather lists (SGLs)220, and queues222. The OS212can include an NVM stack214and an RDMA stack216. Applications can interact with the OS212and use the NVM stack214to read data from or write data to the target system150. In an example, the NVM stack214complies with an NVMe specification and supports some necessary NVMe commands to implement the controller solution. The NVM stack214can interact with the RDMA stack216to layer NVMe commands over an RDMA transport for communication to the target system150over the front end fabric104. The memory210can store various queues222. The RDMA stack216can maintain RDMA queue pairs (QPs)224. The NVM stack214can maintain NVM QPs226. Notably, the NVM QPs226can include one or more pairs of submission queues and completion queues to store entries indicating submission and completion of issued commands. The NVMe commands can reference SGLs220, which in turn reference the buffers218. The RDMA stack216reads data to and from the buffers218using RDMA transactions. FIG.3is a block diagram depicting a portion of the target system150according to an example. The target system150includes an integrated circuit (IC)301. In an example, the IC301is a programmable IC, such as a field programmable gate array (FPGA). Alternatively, the IC301can be an application specific integrated circuit (ASIC). The IC301includes a front-end interface302, the controller105, and a back-end interface306. Although the IC301is shown as having a single controller105, the IC301can include more than one controller105. The front-end interface302can be coupled to a NIC319, which is turn is coupled to the front-end fabric104. In the example shown, the NIC319is external to the IC301. In other examples, the NIC319can be implemented within the IC301. The back-end interface306is configured for communication with one or more NVM subsystems110through the back-end fabric108. For example, the back-end interface306can be a PCIe fabric port. The controller105can interface with a memory308external to the IC301. 
In some examples, the controller105can also interface with a memory310implemented within the IC301in addition to the memory308. The controller105provides an interface between the remote hosts102coupled to the front-end fabric104and the NVM subsystem110coupled to the back-end fabric108. The controller105provides virtual submission and completion queues for the remote hosts102. The virtual queues are backed by a single shared memory. The controller105also provides for flow control to control access among the remote hosts102to the limited resources of the shared memory. In this manner, the controller105can support a large number of remote hosts given limited memory resources. The memory308stores RDMA queues318, virtual QPs320, NVM subsystem QPs326, and first-in-first-out (FIFO) buffers (FIFOs332). In operation, the controller105stores RDMA instructions received from the local hosts102, and RDMA instructions to be sent to the local hosts102, in the RDMA queues318. The controller105extracts NVMe commands from RDMA instructions received from the local hosts102. The controller105maintains at least one virtual QP320for each of the local hosts102. Each virtual QP320includes a submission queue (SQ)322and a completion queue (CQ)324. The virtual QPs320are virtual in that they do not store the actual NVMe command data. Rather, the virtual QPs320store queue attributes that describe virtual queues backed by the FIFOs332. The queue attributes can include, for example, head pointers and tail pointers that specify the heads and tails of the virtual queues. The actual data for the virtual queues are stored in the FIFOs332. The controller105stores the commands in a FIFO332. As the commands are received and stored in the FIFO332, the controller105updates the corresponding SQs322when inserting commands (e.g., adjusts tail pointers). The NVM subsystem110consumes commands from the FIFO332(e.g., on a first-in-first out basis). The controller105also updates the corresponding SQs322when commands are consumed (e.g., adjusts head pointers). The controller105also maintains completion data in a FIFO332. The controller105stores completion data from the NVM subsystem110in the FIFO332and updates the corresponding CQs324(e.g., adjusts tail pointers). The controller105also updates the corresponding CQs324when completion data is consumed and sent to the remote hosts102(e.g., adjusts head pointers). The NVM subsystem110can maintain SQs328and CQs330in the memory308. Commands consumed from the FIFOs332can be inserted into the SQs328. Completion data can be inserted into the CQs330upon completion. Completion data can be consumed from the CQs330and inserted into the FIFOs332. As opposes to the SQs322and the CQs324of the virtual QPs320, the SQs328and the CQs330can be non-virtual queues that store command and completion data. The controller105also implements flow control for commands from the remote hosts102. The controller105sends advertised queue attributes indicative of projected capacities of the SQs322to the remote hosts102. For example, the controller105can send the head and tail pointers of the SQs to the remote hosts102, which the remote hosts102can use to determine the current submission queue capacity. The aggregate of the projected capacities can be larger than the capacity of the FIFOs332. The controller105can monitor the free space of the FIFOs332. The controller105compares the free space against a threshold. 
Once the amount of free space satisfies the threshold, the controller105modifies the advertised queue attributes to reduce the projected capacities. In this manner, the controller105can provide backpressure on the remote hosts102to conserve the resources of the FIFOs332. FIG.4is a block diagram depicting an NVM subsystem110according to an example. The NVM subsystem110includes a fabric port402, an NVMe controller404, and one or more NVMe SSD devices406. The fabric port402provides an interface between the NVMe controller404and the back-end fabric108. For example, the fabric port402can be a PCIe port. The NVMe controller404implements an NVMe register interface for the NVMe SSD devices406. The NVMe SSD controller404processes the various NVMe commands, such as read commands, write commands, and the like. The NVMe SSD controller404consumes the SGLs312when executing the commands to perform DMA transactions with the controller105. FIG.5illustrates an example architecture of an FPGA500that includes a large number of different programmable tiles including multi-gigabit transceivers (“MGTs”)501, configurable logic blocks (“CLBs”)502, random access memory blocks (“BRAMs”)503, input/output blocks (“IOBs”)504, configuration and clocking logic (“CONFIG/CLOCKS”)505, digital signal processing blocks (“DSPs”)506, specialized input/output blocks (“l/O”)507(e.g., configuration ports and clock ports), and other programmable logic508, such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include processing system (“PROC”)510. In some FPGAs, each programmable tile can include at least one programmable interconnect element (“INT”)511having connections to input and output terminals520of a programmable logic element within the same tile, as shown by examples included at the top ofFIG.5. Each programmable interconnect element511(also referred to as “interconnect element511”) can also include connections to interconnect segments522of adjacent programmable interconnect element(s) in the same tile or other tile(s). Each programmable interconnect element511can also include connections to interconnect segments524of general routing resources between logic blocks (not shown). The general routing resources can include routing channels between logic blocks (not shown) comprising tracks of interconnect segments (e.g., interconnect segments524) and switch blocks (not shown) for connecting interconnect segments. The interconnect segments of the general routing resources (e.g., interconnect segments524) can span one or more logic blocks. The programmable interconnect elements511taken together with the general routing resources implement a programmable interconnect structure (“programmable interconnect”) for the illustrated FPGA. In an example implementation, a CLB502can include a configurable logic element (“CLE”)512that can be programmed to implement user logic plus a single programmable interconnect element (“INT”)511. A BRAM503can include a BRAM logic element (“BRL”)513in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured example, a BRAM tile has the same height as five CLBs, but other numbers (e.g., four) can also be used. A DSP tile506can include a DSP logic element (“DSPL”)514in addition to an appropriate number of programmable interconnect elements. 
An IOB504can include, for example, two instances of an input/output logic element (“IOL”)515in addition to one instance of the programmable interconnect element511. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the1/O logic element515typically are not confined to the area of the input/output logic element515. In the pictured example, a horizontal area near the center of the die (shown inFIG.5) is used for configuration, clock, and other control logic. Vertical columns509extending from this horizontal area or column are used to distribute the clocks and configuration signals across the breadth of the FPGA. Some FPGAs utilizing the architecture illustrated inFIG.5include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, processor block510spans several columns of CLBs and BRAMs. The processing system510can include various components ranging from a single microprocessor to a complete programmable processing system of microprocessor(s), memory controllers, peripherals, and the like. For example, the processing system510can include one or more CPUs550, a memory controller552, on-chip memory (OCM)556, and IO554, among other components. Note thatFIG.5is intended to illustrate only an exemplary FPGA architecture. For example, the numbers of logic blocks in a row, the relative width of the rows, the number and order of rows, the types of logic blocks included in the rows, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top ofFIG.5are purely exemplary. For example, in an actual FPGA more than one adjacent row of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic, but the number of adjacent CLB rows varies with the overall size of the FPGA. In an example, the IC301comprises a programmable IC having an architecture the same as or similar to the FPGA500. The front-end interface302, the controller105, the back-end interface306, and the memory310can be implemented within the FPGA500. The front-end interface302and the back-end interface306can be implemented using IO in the FPGA500, such as IO554, MGTs501, IOBs504, or a combination thereof. The controller105can be implemented using software558stored in the OCM556configured for execution by the CPU(s)550. Alternatively, the controller105can be implemented by a controller circuit560configured within the programmable fabric of the FPGA500. In yet another alternative, the controller105can be implemented using a combination of software558and circuitry560. FIG.6is a block diagram depicting the structure of the data maintained by the controllers105according to an example. In the present example, the target system150includes two controllers105-1and105-2. It is to be understood that the target system150can include more than two controllers105or a single controller105. Each controller105maintains at least one virtual queue pair for each remote host102. In the present example, each controller105maintains a single virtual queue pair for each of n remote hosts102, where n is an integer greater than one. In the example, each controller105maintains a virtual SQ322-1and a virtual CQ324-1for a remote host102-1and so on up to a virtual SQ322-nand a virtual CQ324-nfor a remote host102-n. 
As described above, each virtual SQ322comprises queue attributes (e.g., head and tail pointers) that define a virtual submission queue for a respective one of the remote hosts102. Likewise, each virtual CQ324comprises queue attributes (e.g., head and tail pointers) that define a virtual completion queue for a respective one of the remote hosts102. The virtual SQs322and the virtual CQs324do not store the command/completion data. The queue attributes of the virtual SQs322and the virtual CQs324can be used to determine a current capacity of the respective queues. Each controller105stores incoming commands in a submission FIFO332-1that is shared among all of the remote hosts102. Each controller105stores completion data in a completion FIFO332-2that is shared among all of the remote hosts102. The submission FIFO332-1and the completion FIFO332-2comprise a shared memory610(e.g., a shared portion of the memory308). In an example, the entries in the FIFOs332-1and332-2comprise a host identifier field, a queue identifier field, a slot number field, and a queue entry field. The host identifier field identifies the remote host issuing the command. The queue identifier field identifies the virtual queue. The slot number identifies the slot in the FIFO. The queue entry field includes the command or completion data. FIG.7is a flow diagram depicting a method700of processing NVMe commands according to an example. The method700is performed by the controller105. The method700begins at step702, where the controller105receives commands from the remote hosts for the NVM subsystem110(e.g., NVM commands embedded in RDMA instructions). At step704, the controller105stores the commands in the submission FIFO332-1that is shared among the remote hosts102. At step706, the controller105updates the virtual SQs322for the remote hosts102based on the commands stored in the submission FIFO332-1. For example, at step712, the controller105can adjust tail pointers of the virtual SQs322. At step708, the controller105provides the commands to the NVM subsystem110from the submission FIFO332-1. At step710, the controller105updates the virtual SQs322for the remote hosts102based on the commands consumed from the submission FIFO332-1. For example, at step714, the controller105can adjust head pointers of the virtual SQs322. FIG.8is a flow diagram depicting a method800of flow control according to an example. The method800can be performed by the controller105. At step802, the controller105sets advertised queue attributes to indicate initial capacities for the virtual SQs322. In an example, the aggregate of the projected capacities is more than the capacity of the submission FIFO332-1. At step804, the controller105sends the advertised queue attributes indicative of the projected capacities of the virtual SQs322to the remote hosts102. For example, the controller105can send head and tail pointers for the virtual SQs322to the remote hosts102. At step806, the controller105determines if the free space in the submission FIFO332-1satisfies a threshold. If so, the method800proceeds to step810. At step810, the controller105modifies the advertised queue attributes to indicate reduced capacities. In this manner, the controller105exerts backpressure on the remote hosts102to constrain the incoming commands so as to not overflow the submission FIFO332-1. The controller105can modify the head pointers so that a smaller than normal capacity is advertised. 
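The virtual queue attributes, the shared submission FIFO, and the pointer updates of method 700 may be tied together in a compact behavioral sketch. The sketch below is an illustration under the assumptions stated in its comments, not the controller's actual implementation; the FIFO entry fields follow the description (host identifier, queue identifier, slot number, queue entry).

# Behavioral sketch of virtual submission queues backed by a shared FIFO (FIG. 6) and the
# pointer updates of method 700 (FIG. 7). All names are illustrative.
from collections import deque

class VirtualSQ:
    """Holds only queue attributes (head/tail); the entries live in the shared FIFO."""
    def __init__(self, depth: int):
        self.depth = depth
        self.head = 0          # advanced when a command is consumed (step 714)
        self.tail = 0          # advanced when a command is stored (step 712)

class SharedSubmissionFifo:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = deque()

    def free_space(self) -> int:
        return self.capacity - len(self.entries)

def receive_command(vsqs, fifo, host_id, queue_id, command):
    """Steps 702-706: store the command in the shared FIFO and advance the virtual SQ tail."""
    vsq = vsqs[(host_id, queue_id)]
    slot = vsq.tail % vsq.depth
    fifo.entries.append((host_id, queue_id, slot, command))
    vsq.tail += 1

def consume_command(vsqs, fifo):
    """Steps 708-710: hand the oldest command to the NVM subsystem and advance the head."""
    host_id, queue_id, slot, command = fifo.entries.popleft()   # first-come-first-served
    vsqs[(host_id, queue_id)].head += 1
    return command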
In an example, the controller105can compute an average or weighted number of entries in the submission FIFO332-1per remote host and then determine the capacities of the virtual SQs322based thereon. If at step806the free space in the submission FIFO332-1does not satisfy the threshold, the method800proceeds to step808, where the controller105sets the advertised queue attributes to indicate normal capacities. In an example, an aggregate of the normal capacities can exceed the total capacity of the submission FIFO332-1. The method800returns to step804from steps808and810. FIG.9is a flow diagram depicting a method900of processing NVMe completion data according to an example. The method900is performed by the controller105. The method900begins at step902, where the controller105receives completion data from the NVM subsystem110for the remote hosts102. At step904, the controller105stores the completion data in the completion FIFO332-2that is shared among the remote hosts102. At step906, the controller105updates the virtual CQs324for the remote hosts102based on the commands stored in the completion FIFO332-2. For example, at step912, the controller105can adjust tail pointers of the virtual CQs324. At step908, the controller105provides the commands to the remote hosts102from the completion FIFO332-2. At step910, the controller105updates the virtual CQs324for the remote hosts102based on the completion data consumed from the completion FIFO332-2. For example, at step914, the controller105can adjust head pointers of the virtual CQs324. While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
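The backpressure decision of method 800 may likewise be sketched. The per-host capacity formula below is only one plausible choice (the description mentions an average or weighted count of entries per host), so the arithmetic should be read as an assumption made for illustration.

# Sketch of the flow-control decision of method 800 (steps 804-810). The reduced-capacity
# formula is an assumption; the description only requires that reduced capacities be
# advertised (e.g., via adjusted head/tail pointers) when free space crosses a threshold.
def advertised_capacities(free_space: int, n_hosts: int, threshold: int, normal_capacity: int):
    if free_space <= threshold:                              # step 806 satisfied
        reduced = max(1, free_space // max(1, n_hosts))      # step 810: shrink per-host capacity
        return {host: reduced for host in range(n_hosts)}
    return {host: normal_capacity for host in range(n_hosts)}   # step 808

# The aggregate of the normal capacities may exceed the FIFO capacity; backpressure only
# takes effect once the shared submission FIFO actually begins to fill.
print(advertised_capacities(free_space=12, n_hosts=8, threshold=16, normal_capacity=64))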
11861327
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of a processor for fine-grain sparse integer and floating-point operations provided in accordance with the present disclosure and is not intended to represent the only forms in which the present disclosure may be constructed or utilized. The description sets forth the features of the present disclosure in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and structures may be accomplished by different embodiments that are also intended to be encompassed within the scope of the disclosure. As denoted elsewhere herein, like element numbers are intended to indicate like elements or features. A neural network (e.g., when performing inference) may perform voluminous calculations in which activations (or “activation values”) (the elements of an input feature map (IFM)) are multiplied by weights. The products of the activations and weights may form multi-dimensional arrays which may be summed along one or more axes to form an array, or “tensor”, that may be referred to as an output feature map (OFM). Referring toFIG.1, special-purpose hardware may be employed to perform such calculations. Activations may be stored in a static random access memory (SRAM)105and fed into a multiplier accumulator (MAC) array, which may include (i) a plurality of blocks (which may be referred to as “bricks”110), each of which may include a plurality of multipliers for multiplying activations and weights, (ii) one or more adder trees for adding together products generated by the bricks, and (iii) one or more accumulators for accumulating sums generated by the adder trees. Each activation value may be broadcast to a plurality of multipliers conceptually arranged in a row in the representation ofFIG.1. A plurality of adder trees115may be employed to form sums. In operation, it may be that the weights fall within a range of values, and that the distribution of the values of the weights is such that relatively small weights are significantly more common than relatively large weights. For example, if each weight is represented as an 8-bit number, it may be that many of the weights (e.g., a majority of the weights, or more than ¾ of the weights) have a value of less than 16 (i.e., the most significant nibble is zero); the weights with nonzero most significant nibbles may then be referred to as “outliers”. In some embodiments, suitably constructed hardware may achieve improved speed and power efficiency by taking advantage of these characteristics of the weights. FIG.2Ashows a portion of a mixed processing circuit (referred to as “mixed” because it is suitable both for integer and for floating point operations). Referring toFIG.2A, in some embodiments a plurality of multipliers205is used to multiply weights by activations, e.g., one nibble at a time. Each multiplier may be a 4×4 (i.e., 4-bit by 4-bit) multiplier with a first input210configured to receive a respective weight nibble, and a second input215configured to receive an activation nibble (which may be broadcast to all of the multipliers). An embodiment with nine multipliers is shown; in some embodiments more multipliers are present (resulting in a more capable, but costlier, circuit) and in some embodiments fewer multipliers are present (resulting in a less capable, and less costly, circuit). 
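The weight statistic that motivates the design, namely that most 8-bit weights have a zero most significant nibble, is easy to make concrete. The short check below simply counts "outliers" in an example weight array and is not part of the described hardware.

# Counting "outlier" weights: an 8-bit weight whose most significant nibble is nonzero
# (i.e., a value of 16 or more). In the distributions described above this fraction is
# small, which is what the nibble-level sparsity hardware exploits.
def split_nibbles(weight: int):
    assert 0 <= weight < 256
    return weight >> 4, weight & 0xF          # (most significant, least significant)

def outlier_fraction(weights):
    return sum(1 for w in weights if w >= 16) / len(weights)

example = [3, 7, 0, 21, 4, 9, 1, 2, 15, 6, 0, 5, 130, 8, 2, 11]
print(outlier_fraction(example))              # 2 of 16 weights are outliers -> 0.125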
A weight buffer220(of which only the output row is shown) may include a respective register for each of the multipliers205. The outputs of the multipliers205may be fed to a plurality of combining circuits225, each of which may include one or more multiplexers230, an adder, and an inverting-shifting circuit235. The system may include the same number of combining circuits225as multipliers205, or it may contain fewer combining circuits225than multipliers205(as shown), or it may contain more combining circuits225than multipliers205. In operation, each multiplier may produce, during each clock cycle, one partial product. These partial products may be added together to form integer products (which may also be partial products), each of the integer products may be processed by an inverting-shifting circuit235, and the result may be sent to an adder tree to be added to other integer products. For example, as illustrated inFIG.2A, the first two values in the output row of the weight buffer220may be the least significant nibble L0and the most significant nibble M0of a first weight, and the activation value nibble being broadcast may be the least significant nibble of a first (8-bit) activation value. The activation value nibble may be multiplied by the least significant nibble L0of the first weight to form a first partial product P0, and the activation value nibble may be multiplied by the most significant nibble M0of the first weight to form a second partial product P1. These partial products may be routed to a first combining circuit225(the left-most one inFIG.2A) by the connection fabric240. The connection fabric240may include the multiplexers230; it is drawn inFIG.2Aas a separate element to facilitate the illustration (using arrows) of the data routing it performs. In the first combining circuit225the product of (i) (both nibbles of) the weight and (ii) the activation value nibble may be calculated as an offset sum (calculated by the corresponding offset adder245) of the first partial product and the second partial product. As used herein, an “offset sum” of two values is the result of “offset addition”, which is the forming of the sum of (i) a first one of the two values and (ii) the second one of the two values, shifted to the left by a number of bits (e.g., by four bits), and an “offset adder” is an adder that performs the addition of two numbers with an offset (referred to as the “offset” of the offset adder) between the positions of their least significant bits. As used herein, the “significance” of a nibble (or, more generally, of a sub-word (discussed in further detail below)) is the position it occupies in the word of which it is a part (e.g., whether a nibble is a most significant nibble or a least significant nibble of an 8-bit word). As such, the most significant nibble of an 8-bit word has a significance four bits greater than the least significant nibble. Each inverting-shifting circuit235may convert between (i) a sign and magnitude representation and (ii) a two's complement representation, and it may shift the result as needed for proper addition to occur in the adder tree. For example, if the activation value nibble is a most significant nibble, then the output of the offset adder245may be shifted (e.g., by 4 bits, to the left), so that, in the adder tree, the bits of the output will align properly with the bits of other products (e.g., with a product of a weight with a least significant nibble of an activation value). 
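The offset addition just defined may be checked numerically: the product of an 8-bit weight and a 4-bit activation nibble equals the low partial product plus the high partial product shifted left by four bits. The small script below is illustrative only.

# Numerical check of the offset sum formed by the offset adder 245: for an 8-bit weight
# W = (M << 4) | L and a 4-bit activation nibble a,  W * a == (a * L) + ((a * M) << 4).
def offset_add(p_low: int, p_high: int, offset: int = 4) -> int:
    return p_low + (p_high << offset)

def product_via_nibbles(weight8: int, act_nibble: int) -> int:
    low, high = weight8 & 0xF, weight8 >> 4
    p0 = act_nibble * low        # partial product P0 from the multiplier fed with L0
    p1 = act_nibble * high       # partial product P1 from the multiplier fed with M0
    return offset_add(p0, p1)

assert all(product_via_nibbles(w, a) == w * a for w in range(256) for a in range(16))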
Conversion between sign and magnitude representation and two's complement representation may be performed, for example, if the multipliers205are unsigned integer multipliers, and the adder tree is a two's complement adder tree. The arrangement of the weight nibbles in the weight buffer220may be the result of pre-processing, as illustrated inFIG.2B. The raw array of weights250may include a first row, of least significant nibbles (labeled “L0” and the like) and a second row, of most significant nibbles (labeled “M0” and the like), as illustrated. Some of the nibbles may be zero, as illustrated by the blank cells inFIG.2B. Preprocessing may rearrange these nibbles in populating the weight buffer (as indicated, for example, by the arrows inFIG.2B) so that the weight buffer contains a smaller proportion of zero-valued nibbles than the raw array of weights250. In the example ofFIG.2B, eight weights (each consisting of a least significant nibble and a most significant nibble) are rearranged so that the zero-valued nibbles are discarded, and the non-zero nibbles are placed into eight locations of one row of the weight buffer (with a ninth location containing zero), so that, when this row of the weight buffer is processed by the array of nine multipliers205(FIG.2A), eight of the multipliers are used, and only one (the ninth one) is unused. In some circumstances, the sparsity of the raw array of weights250may not be sufficient to allow all of the most significant nibbles to be in the same row of the weight buffer as the corresponding least significant nibbles, and some or all of the products may be formed in two clock cycles, with the activation value remaining the same for both cycles. The preprocessing may also generate a control signal array that may be used to control the connection fabric240(e.g., the multiplexers230) so that each partial product is sent to the appropriate input of an offset adder245according to the significance of the factors that formed it. As illustrated inFIG.2C, the mixed processing circuit may further include a plurality of variable shift units (or “variable shift circuits”)260which, in a floating-point mode of the mixed processing circuit, enable the mixed processing circuit to perform floating-point operations on floating-point activations and floating-point weights. Each such floating-point number may be an FP16 floating point number (using, e.g., a format according to the IEEE 754-2008 standard) having one sign bit, an 11-bit mantissa (or “significand”) (represented by 10 bits and one implicit lead bit or “hidden bit”), and a five-bit exponent. The 11-bit mantissa may be padded with one zero bit and split into three nibbles, a “high” (most significant) nibble, a “low” (least significant) nibble, and a “medium” nibble (of intermediate significance) (so that concatenating the high nibble, the medium nibble, and the low nibble, in order, results in the 12-bit (padded) mantissa). Floating-point multiplications may then be performed by the mixed processing circuit ofFIG.2Cby forming partial products of the high, medium, and low nibbles of the mantissa of each weight with the high, medium, and low nibbles of the mantissa of the activation, one pair of nibbles at a time (e.g., multiplying one nibble of the weight by one nibble of the activation), in each of the multipliers205. 
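The three-nibble mantissa decomposition used in floating-point mode may also be sketched. The position of the single zero pad bit is not spelled out above, so the low-end padding used below is an assumption made only for illustration.

# Splitting an FP16 significand into high/medium/low nibbles: 10 stored mantissa bits plus
# the implicit leading 1 give 11 bits, padded with one zero bit to 12 bits. The pad is
# assumed here to be appended at the least significant end (an illustrative choice).
def fp16_mantissa_nibbles(stored_10: int, is_normal: bool = True):
    assert 0 <= stored_10 < (1 << 10)
    significand_11 = ((1 if is_normal else 0) << 10) | stored_10
    padded_12 = significand_11 << 1
    high, medium, low = (padded_12 >> 8) & 0xF, (padded_12 >> 4) & 0xF, padded_12 & 0xF
    # Concatenating high:medium:low reproduces the 12-bit padded mantissa.
    assert (high << 8) | (medium << 4) | low == padded_12
    return high, medium, low

print(fp16_mantissa_nibbles(0b0110011010))   # e.g. (0xB, 0x3, 0x4)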
The (12 bit wide) output of each inverting-shifting circuit235may be fed to a respective variable shift unit260which, in floating-point mode, may shift the data it receives to the right by between 0 and N bits (where N may be 8, or a larger number, or smaller number, depending in part on the size of the mantissa used in the adder tree, which may be selected based on the accuracy to be achieved). Further selectable shifts are available by selecting one or the other input of the offset adder245, and by selecting the amount of shift applied in the inverting-shifting circuit235. As a result, the mixed processing circuit ofFIG.2Cis able to produce a suitably aligned output for each combination of significances of the four input nibbles to the two multipliers205feeding any one of the offset adders245during a given clock cycle (subject to the constraint that the significance of the two input values to the offset adder245—which shifts one input by four bits relative to the other—differs by four bits). FIG.3shows an example of preprocessing for an array of floating-point weights. In the floating-point representation, nibble sparsity (which may be relatively common for integer weights, with, e.g., a large fraction of the weights having a zero-valued most significant nibble) may be relatively rare, but a significant fraction of the weights may be equal to zero, with all three nibbles (low, medium, and high) being zero, as illustrated for the raw weight array305.FIG.3shows how the three nibbles of the mantissa of each nonzero weight (weights 0, 2, 4, 5, and 6) may be rearranged, first to form a first intermediate matrix310, then to form a second intermediate matrix315, and then to form the final matrix320, which may be suitable for storing in the weight buffer. In the final matrix, all of the nonzero elements are in the first two rows, and all of the products may be formed in two operations (e.g., in two clock cycles), whereas three operations would be used were the raw weight array305loaded into the weight buffer. Although some examples are presented herein for an embodiment with 8-bit weights, 8-bit activation values, a weight buffer that is four weights wide, and weights and activations that may be processed one nibble at a time, it will be understood that these parameters and other like parameters in the present disclosure are used only as a specific concrete example for ease of explanation, and that any of these parameters may be changed. As such, the size of a weight may be a “word”, for example, and the size of a portion of a weight may be a “sub-word”, with, in the embodiment ofFIG.2A, the size of the word being one byte and the size of a sub-word being one nibble. In other embodiments, a word may be 12 bits and a sub-word may be six bits, for example, or a word may be 16 bits, and a sub-word may be one byte. As used herein, “a portion of” something means “at least some of” the thing, and as such may mean less than all of, or all of, the thing. As such, “a portion of” a thing includes the entire thing as a special case, i.e., the entire thing is an example of a portion of the thing. As used herein, the term “or” should be interpreted as “and/or”, such that, for example, “A or B” means any one of “A” or “B” or “A and B”. Each of the terms “processing circuit” and “means for processing” is used herein to mean any combination of hardware, firmware, and software, employed to process data or digital signals. 
Processing circuit hardware may include, for example, application specific integrated circuits (ASICs), general purpose or special purpose central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), and programmable logic devices such as field programmable gate arrays (FPGAs). In a processing circuit, as used herein, each function is performed either by hardware configured, i.e., hard-wired, to perform that function, or by more general-purpose hardware, such as a CPU, configured to execute instructions stored in a non-transitory storage medium. A processing circuit may be fabricated on a single printed circuit board (PCB) or distributed over several interconnected PCBs. A processing circuit may contain other processing circuits; for example, a processing circuit may include two processing circuits, an FPGA and a CPU, interconnected on a PCB. As used herein, when a method (e.g., an adjustment) or a first quantity (e.g., a first variable) is referred to as being “based on” a second quantity (e.g., a second variable) it means that the second quantity is an input to the method or influences the first quantity, e.g., the second quantity may be an input (e.g., the only input, or one of several inputs) to a function that calculates the first quantity, or the first quantity may be equal to the second quantity, or the first quantity may be the same as (e.g., stored at the same location or locations in memory as) the second quantity. It will be understood that, although the terms “first”, “second”, “third”, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed herein could be termed a second element, component, region, layer or section, without departing from the spirit and scope of the inventive concept. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concept. As used herein, the terms “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art. As used herein, the singular forms “a” and “an” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Further, the use of “may” when describing embodiments of the inventive concept refers to “one or more embodiments of the present disclosure”. 
Also, the term “exemplary” is intended to refer to an example or illustration. As used herein, the terms “use,” “using,” and “used” may be considered synonymous with the terms “utilize,” “utilizing,” and “utilized,” respectively. It will be understood that when an element or layer is referred to as being “on”, “connected to”, “coupled to”, or “adjacent to” another element or layer, it may be directly on, connected to, coupled to, or adjacent to the other element or layer, or one or more intervening elements or layers may be present. In contrast, when an element or layer is referred to as being “directly on”, “directly connected to”, “directly coupled to”, or “immediately adjacent to” another element or layer, there are no intervening elements or layers present. Any numerical range recited herein is intended to include all sub-ranges of the same numerical precision subsumed within the recited range. For example, a range of “1.0 to 10.0” or “between 1.0 and 10.0” is intended to include all subranges between (and including) the recited minimum value of 1.0 and the recited maximum value of 10.0, that is, having a minimum value equal to or greater than 1.0 and a maximum value equal to or less than 10.0, such as, for example, 2.4 to 7.6. Any maximum numerical limitation recited herein is intended to include all lower numerical limitations subsumed therein and any minimum numerical limitation recited in this specification is intended to include all higher numerical limitations subsumed therein. Although exemplary embodiments of a processor for fine-grain sparse integer and floating-point operations have been specifically described and illustrated herein, many modifications and variations will be apparent to those skilled in the art. Accordingly, it is to be understood that a processor for fine-grain sparse integer and floating-point operations constructed according to principles of this disclosure may be embodied other than as specifically described herein. The invention is also defined in the following claims, and equivalents thereof.
11861328
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of a processor for fine-grain sparse integer and floating-point operations provided in accordance with the present disclosure and is not intended to represent the only forms in which the present disclosure may be constructed or utilized. The description sets forth the features of the present disclosure in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and structures may be accomplished by different embodiments that are also intended to be encompassed within the scope of the disclosure. As denoted elsewhere herein, like element numbers are intended to indicate like elements or features. A neural network (e.g., when performing inference) may perform voluminous calculations in which activations (or “activation values”) (the elements of an input feature map (IFM)) are multiplied by weights. The products of the activations and weights may form multi-dimensional arrays which may be summed along one or more axes to form an array, or “tensor”, that may be referred to as an output feature map (OFM). Referring toFIG.1, special-purpose hardware may be employed to perform such calculations. Activations may be stored in a static random access memory (SRAM)105and fed into a multiplier accumulator (MAC) array, which may include (i) a plurality of blocks (which may be referred to as “bricks”110), each of which may include a plurality of multipliers for multiplying activations and weights, (ii) one or more adder trees for adding together products generated by the bricks, and (iii) one or more accumulators for accumulating sums generated by the adder trees. Each activation value may be broadcast to a plurality of multipliers conceptually arranged in a row in the representation ofFIG.1. A plurality of adder trees115may be employed to form sums. In operation, it may be that the weights fall within a range of values, and that the distribution of the values of the weights is such that relatively small weights are significantly more common than relatively large weights. For example, if each weight is represented as an 8-bit number, it may be that many of the weights (e.g., a majority of the weights, or more than ¾ of the weights) have a value of less than 16 (i.e., the most significant nibble is zero); the weights with nonzero most significant nibbles may then be referred to as “outliers”. In some embodiments, suitably constructed hardware may achieve improved speed and power efficiency by taking advantage of these characteristics of the weights. FIG.2Ashows a portion of a mixed processing circuit (referred to as “mixed” because it is suitable both for integer and for floating point operations). Referring toFIG.2A, in some embodiments a plurality of multipliers205,210is used to multiply weights by activations, e.g., one nibble at a time. Each multiplier may be an 8×4 (i.e., an 8-bit by 4-bit) multiplier with a first input configured to receive an activation byte (which may be broadcast to all of the multipliers), and a second input configured to receive a respective weight nibble. The embodiment ofFIG.2Aincludes four multipliers (which may be referred to as standard multipliers205) and one reserve multiplier210(in some embodiments there are more or fewer standard multipliers205, or more reserve multipliers), two offset adders215, four shifters220, and four output multiplexers225. 
Each of the standard multipliers205and the reserve multiplier210may be an 8×4 (i.e., 8-bit by 4-bit) multiplier. The standard multipliers205and the reserve multiplier210may be identical circuits differing in how they are connected (as illustrated inFIG.2A) and in how they are used in operation (as discussed in further detail below). As such, each of these multipliers205,210may receive a first argument (e.g., the activation byte) and a second argument (e.g., the weight nibble, the first argument having a first argument size (e.g., 8 bits, the size of the activation byte) and the second argument having a second argument size (e.g., 4 bits, the size of a weight nibble). The embodiment ofFIG.2Amay be used to perform integer multiplication as follows. The weights are fed into the multipliers, one nibble at a time, from a weight buffer230(of which only the output row is shown) and an 8-bit activation value is broadcast to all of the multipliers205,210. Weights having nonzero most significant nibbles may be handled differently from weights having zero most significant nibbles (as used herein, a “zero nibble”, e.g., a “zero most significant nibble” is a nibble having a value of zero). In the example ofFIG.2A, the left-most multiplier forms the product of (i) an 8-bit activation value and (ii) a first weight, the first weight having a zero most significant nibble. The least significant nibble L0of the first weight is multiplied by the 8-bit activation value in the left-most standard multiplier205, producing a 12-bit product, which is converted to a 16-bit number (so that it may be added through the adder tree) by the left-most shifter220and fed to the left-most output multiplexer225. The second standard multiplier205from the left similarly forms the product of the 8-bit activation value and a second weight, having a zero most significant nibble and a nonzero least significant nibble L1. A third weight in the example ofFIG.2Ahas a nonzero most significant nibble M2, and a least significant nibble L2. This weight is multiplied by the 8-bit activation value in the third and fourth standard multipliers205from the left, with the third standard multiplier205from the left forming a first partial product by multiplying the 8-bit activation value by the least significant nibble L2, and the fourth standard multiplier205from the left forming a second partial product by multiplying the 8-bit activation value by the most significant nibble M2. An offset sum of the two partial products (the latter of which has a significance 4 bits greater than the former) is then formed in the offset adder215connected to the two multipliers, the offset of the offset adder215ensuring that the bits of the two partial products are properly aligned. As used herein, an “offset sum” of two values is the result of “offset addition”, which is the forming of the sum of (i) a first one of the two values and (ii) the second one of the two values, shifted to the left by a number of bits (e.g., by four bits), and an “offset adder” is an adder that performs the addition of two numbers with an offset between the positions of their least significant bits. As used herein, the “significance” of a nibble (or, more generally, of a sub-word (discussed in further detail below)) is the position it occupies in the word of which it is a part (e.g., whether a nibble is a most significant nibble or a least significant nibble of an 8-bit word). 
As such, the most significant nibble of an 8-bit word has a significance four bits greater than the least significant nibble. The product (i.e., the offset sum of the two partial products) is then produced, by the circuit ofFIG.2A, at the output of the third output multiplexer225from the left. If all four of the weights had zero most significant nibbles, then it would be possible to form the four products of (i) the activation value and (ii) the four least significant nibbles L0, L1, L2, and L3, in the four standard multipliers205, and to route the results to the outputs of the four output multiplexers225through the four shifters220. In the example ofFIG.2A, however, the fourth standard multiplier205is used to form the second partial product of the activation value with the third weight (consisting of L2and M2). In this example, the product of the fourth weight (which has the least significant nibble L3and a zero most significant nibble) is therefore formed in the reserve multiplier210. A similar configuration may be used if any one of the other three weights has a nonzero most significant nibble instead of the third weight, with one of the other weights (having a zero most significant nibble) being processed in the reserve multiplier210to free up a multiplier for forming the second partial product for the weight having a nonzero most significant nibble. As such, the circuit ofFIG.2Acan calculate the products of the activation value with any four weights, in one clock cycle, provided that at most one of the four weights has a nonzero most significant nibble. In another embodiment, similar toFIG.2Abut having two reserve multipliers210, it would be possible to calculate, in an analogous manner, the products of the activation value with any four weights, in one clock cycle, provided that at most two of the four weights had a nonzero most significant nibble. The arrangement of the weight nibbles in the weight buffer230may be the result of preprocessing, as illustrated inFIG.2B. The raw array of weights240may include a first row, of least significant nibbles (e.g., L0, L1, L2, and L3) and a second row, of most significant nibbles (containing, in the example ofFIG.2B, only one nonzero most significant nibble M2), as illustrated. The remaining most significant nibbles may be zero, as illustrated by the blank cells inFIG.2B. Preprocessing may rearrange these nibbles in populating the weight buffer (as indicated, for example, by the arrows inFIG.2B) so that the weight buffer contains a smaller proportion of zero-valued nibbles than the raw array of weights240. In the example ofFIG.2B, four weights (each consisting of a least significant nibble and a most significant nibble) are rearranged so that the zero-valued nibbles are discarded, and the non-zero nibbles are placed into five locations of one row of the weight buffer, so that this one row of the weight buffer may be processed at once (e.g., in one clock cycle) by the five multipliers205,210(FIG.2A). The preprocessing operation may also generate an array of control signals for controlling multiplexers (e.g., the output multiplexers225), in the mixed processing circuit, that perform routing of data, in accordance with the rearranging described above. 
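The scheduling property stated above, namely four weights per cycle provided at most one of them is an outlier, may be illustrated with a short sketch. The assignment policy below (which standard multiplier takes the outlier's high nibble, which weight is displaced to the reserve multiplier) is an assumption made for illustration; the description fixes only the counts.

# Illustrative schedule of four 8-bit weights onto four standard 8x4 multipliers plus one
# reserve multiplier, with at most one "outlier" weight (nonzero most significant nibble).
def schedule(weights):
    assert len(weights) == 4
    outliers = [i for i, w in enumerate(weights) if (w >> 4) != 0]
    if len(outliers) > 1:
        raise ValueError("more than one outlier needs an extra cycle or more reserve multipliers")
    plan, free = [], ["std0", "std1", "std2", "std3"]
    if outliers:
        k = outliers[0]
        plan += [(free.pop(0), k, 0), (free.pop(0), k, 4)]   # adjacent pair feeds an offset adder
    for i in (j for j in range(4) if not outliers or j != outliers[0]):
        plan.append((free.pop(0) if free else "reserve", i, 0))
    return plan

def evaluate(plan, weights, activation8):
    """Sum the partial products per weight to check the schedule reproduces the full products."""
    products = [0, 0, 0, 0]
    for _mult, idx, significance in plan:
        nibble = (weights[idx] >> significance) & 0xF
        products[idx] += (activation8 * nibble) << significance   # shifters / offset adders
    return products

weights = [0x07, 0x0C, 0x5A, 0x03]     # the third weight is the outlier (nonzero M2)
assert evaluate(schedule(weights), weights, 0xB7) == [w * 0xB7 for w in weights]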
FIGS. 3A and 3B illustrate how two copies of the circuit of FIG. 2A may be combined to form a first floating-point processing circuit 305 and a second floating-point processing circuit 310, each suitable for forming a floating-point product of an FP16 (half-precision floating-point) activation with an FP16 weight. Multipliers A and B of the first floating-point processing circuit 305 are the two left-most standard multipliers 205 of a first copy 315 of the circuit of FIG. 2A, multipliers C and D of the first floating-point processing circuit 305 are the two left-most standard multipliers 205 of a second copy 320 of the circuit of FIG. 2A, and multiplier E of the first floating-point processing circuit 305 is the reserve multiplier 210 of the first copy 315 of the circuit of FIG. 2A. Similarly, the multipliers A, B, C, D, E of the second floating-point processing circuit 310 include two standard multipliers 205 of the first copy 315 of the circuit of FIG. 2A, two standard multipliers 205 of the second copy 320 of the circuit of FIG. 2A, and the reserve multiplier 210 of the second copy 320 of the circuit of FIG. 2A. In FIG. 3B, each reserve multiplier 210 is shown in the middle of a set of standard multipliers 205 for ease of illustration (instead of being shown to the right, as in FIG. 2A). Each floating-point number may be an FP16 floating-point number (using, e.g., a format according to the IEEE 754-2008 standard) having one sign bit, an 11-bit mantissa (or “significand”) (represented by 10 bits and one implicit lead bit or “hidden bit”), and a five-bit exponent. The 11-bit mantissa may be padded with one zero bit and split into three nibbles: a “high” (most significant) nibble, a “low” (least significant) nibble, and a “medium” nibble (of intermediate significance) (so that concatenating the high nibble, the medium nibble, and the low nibble, in order, results in the 12-bit (padded) mantissa). In describing these nibbles in the present disclosure, the qualifier “mantissa” may be omitted for brevity. The nine cells of the 3×3 table of FIG. 3A show the mapping of the products of the three nibbles of the activation value (corresponding to the three rows, labeled H_A (for the high nibble of the activation value), M_A (for the medium nibble of the activation value), and L_A (for the low nibble of the activation value)) and the three nibbles of the weight (corresponding to the three columns, labeled H_W (for the high nibble of the weight), M_W (for the medium nibble of the weight), and L_W (for the low nibble of the weight)) to corresponding multipliers in each of the first floating-point processing circuit 305 and the second floating-point processing circuit 310. In the first floating-point processing circuit 305, the standard multiplier labeled A may multiply (i) the high nibble H_A of the activation value and the medium nibble M_A of the activation value, received at the first (8-bit) input of the standard multiplier A, by (ii) the high nibble H_W of the weight, as the corresponding rectangle, also labeled A, of FIG. 3A indicates. Because the standard multiplier A has an 8-bit wide input and a 4-bit wide (nibble-wide) input, it is capable of (i) multiplying the high nibble H_A of the activation value by the high nibble H_W of the weight and (ii) multiplying the medium nibble M_A of the activation value by the high nibble H_W of the weight, in one operation. In this manner, five corresponding partial products (which may be referred to as partial products A, B, C, D, and E) may be formed.
Partial product A has a significance four bits greater than the significance of partial product B, and these two partial products are added together in the offset adder215connected to standard multiplier A and standard multiplier B. Similarly, partial product D has a significance four bits greater than the significance of partial product C, and these two partial products are added together in the offset adder215connected to multiplier C and multiplier D. The reserve multiplier E may multiply the high nibble H_A of the activation by the low nibble L_W of the weight (the unused 4 bits of the first input of the reserve multiplier E may be set to zero). The sums produced by the two offset adders215, and the output of the reserve multiplier E may then be added together in the adder tree (which is connected to the outputs of the output multiplexer225). Although some examples are presented herein for an embodiment with 8-bit weights, 8-bit activation values, a weight buffer that is five weights wide, and weights and activations that may be processed one nibble at a time, it will be understood that these parameters and other like parameters in the present disclosure are used only as a specific concrete example for ease of explanation, and that any of these parameters may be changed. As such, the size of a weight may be a “word”, for example, and the size of a portion of a weight may be a “sub-word”, with, in the embodiment ofFIGS.2A and2B, the size of the word being one byte and the size of a sub-word being one nibble. In other embodiments, a word may be 12 bits and a sub-word may be six bits, for example, or a word may be 16 bits, and a sub-word may be one byte. As used herein, “a portion of” something means “at least some of” the thing, and as such may mean less than all of, or all of, the thing. As such, “a portion of” a thing includes the entire thing as a special case, i.e., the entire thing is an example of a portion of the thing. As used herein, the term “or” should be interpreted as “and/or”, such that, for example, “A or B” means any one of “A” or “B” or “A and B”. Each of the terms “processing circuit” and “means for processing” is used herein to mean any combination of hardware, firmware, and software, employed to process data or digital signals. Processing circuit hardware may include, for example, application specific integrated circuits (ASICs), general purpose or special purpose central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), and programmable logic devices such as field programmable gate arrays (FPGAs). In a processing circuit, as used herein, each function is performed either by hardware configured, i.e., hard-wired, to perform that function, or by more general-purpose hardware, such as a CPU, configured to execute instructions stored in a non-transitory storage medium. A processing circuit may be fabricated on a single printed circuit board (PCB) or distributed over several interconnected PCBs. A processing circuit may contain other processing circuits; for example, a processing circuit may include two processing circuits, an FPGA and a CPU, interconnected on a PCB. 
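Returning to the floating-point example of FIGS. 3A and 3B: because the figure itself is not reproduced here, the exact nibble pairs fed to multipliers B, C, and D are an assumption. The Python sketch below (sign and exponent handling omitted) uses one assignment that is consistent with what is stated above, namely that A and E are as described, that A is four bits more significant than B, and that D is four bits more significant than C, and it verifies that the five partial products reproduce the full product of two 12-bit padded mantissas.

def split_nibbles(m12: int) -> tuple[int, int, int]:
    """Return the (high, medium, low) nibbles of a 12-bit padded mantissa."""
    return (m12 >> 8) & 0xF, (m12 >> 4) & 0xF, m12 & 0xF

def mantissa_product(a12: int, w12: int) -> int:
    h_a, m_a, l_a = split_nibbles(a12)
    h_w, m_w, l_w = split_nibbles(w12)
    pp_a = ((h_a << 4) | m_a) * h_w      # multiplier A, as stated in the text
    pp_b = ((h_a << 4) | m_a) * m_w      # multiplier B (assumed assignment)
    pp_c = ((m_a << 4) | l_a) * l_w      # multiplier C (assumed assignment)
    pp_d = ((h_w << 4) | m_w) * l_a      # multiplier D (assumed assignment)
    pp_e = h_a * l_w                     # reserve multiplier E, as stated in the text
    ab = (pp_a << 4) + pp_b              # offset adder 215: A is 4 bits above B
    dc = (pp_d << 4) + pp_c              # offset adder 215: D is 4 bits above C
    return (ab << 8) + dc + (pp_e << 8)  # the adder tree combines the three results

import random
for _ in range(1000):
    a12, w12 = random.randrange(1 << 12), random.randrange(1 << 12)
    assert mantissa_product(a12, w12) == a12 * w12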
As used herein, when a method (e.g., an adjustment) or a first quantity (e.g., a first variable) is referred to as being “based on” a second quantity (e.g., a second variable) it means that the second quantity is an input to the method or influences the first quantity, e.g., the second quantity may be an input (e.g., the only input, or one of several inputs) to a function that calculates the first quantity, or the first quantity may be equal to the second quantity, or the first quantity may be the same as (e.g., stored at the same location or locations in memory as) the second quantity. It will be understood that, although the terms “first”, “second”, “third”, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed herein could be termed a second element, component, region, layer or section, without departing from the spirit and scope of the inventive concept. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concept. As used herein, the terms “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art. As used herein, the singular forms “a” and “an” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Further, the use of “may” when describing embodiments of the inventive concept refers to “one or more embodiments of the present disclosure”. Also, the term “exemplary” is intended to refer to an example or illustration. As used herein, the terms “use,” “using,” and “used” may be considered synonymous with the terms “utilize,” “utilizing,” and “utilized,” respectively. It will be understood that when an element or layer is referred to as being “on”, “connected to”, “coupled to”, or “adjacent to” another element or layer, it may be directly on, connected to, coupled to, or adjacent to the other element or layer, or one or more intervening elements or layers may be present. In contrast, when an element or layer is referred to as being “directly on”, “directly connected to”, “directly coupled to”, or “immediately adjacent to” another element or layer, there are no intervening elements or layers present. Any numerical range recited herein is intended to include all sub-ranges of the same numerical precision subsumed within the recited range. 
For example, a range of “1.0 to 10.0” or “between 1.0 and 10.0” is intended to include all subranges between (and including) the recited minimum value of 1.0 and the recited maximum value of 10.0, that is, having a minimum value equal to or greater than 1.0 and a maximum value equal to or less than 10.0, such as, for example, 2.4 to 7.6. Any maximum numerical limitation recited herein is intended to include all lower numerical limitations subsumed therein and any minimum numerical limitation recited in this specification is intended to include all higher numerical limitations subsumed therein. Although exemplary embodiments of a processor for fine-grain sparse integer and floating-point operations have been specifically described and illustrated herein, many modifications and variations will be apparent to those skilled in the art. Accordingly, it is to be understood that a processor for fine-grain sparse integer and floating-point operations constructed according to principles of this disclosure may be embodied other than as specifically described herein. The invention is also defined in the following claims, and equivalents thereof.
11861329
DETAILED DESCRIPTION Some aspects, features and implementations described herein may include machines such as computers, electronic components, optical components, and processes such as computer-implemented procedures and steps. It will be apparent to those of ordinary skill in the art that the computer-implemented procedures and steps may be stored as computer-executable instructions on a non-transitory tangible computer-readable medium. Furthermore, it will be understood by those of ordinary skill in the art that the computer-executable instructions may be executed on a variety of tangible processor devices, i.e., physical hardware. For ease of exposition, not every step, device or component that may be part of a computer or data storage system is described herein. Those of ordinary skill in the art will recognize such steps, devices and components in view of the teachings of the present disclosure and the knowledge generally available to those of ordinary skill in the art. The corresponding machines and processes are therefore enabled and within the scope of the disclosure. The terminology used in this disclosure is intended to be interpreted broadly within the limits of subject matter eligibility. The terms “logical” and “virtual” are used to refer to features that are abstractions of other features, e.g. and without limitation, abstractions of tangible features. The term “physical” is used to refer to tangible features, including but not limited to electronic hardware. For example, multiple virtual computing devices could operate simultaneously on one physical computing device. The term “logic” is used to refer to special purpose physical circuit elements, firmware, and/or software implemented by computer instructions that are stored on a non-transitory tangible computer-readable medium and implemented by multi-purpose tangible processors, and any combinations thereof. Storage systems are used to provide storage services for host applications. When a host application wants to have data stored on a given storage system, the necessary storage volumes are created on the storage system by interacting with a user interface to the storage system. Humans can interact with the storage system, and likewise other automated processes can interact with the storage system. Any interaction, whether it be between a human actor and a machine such as a storage system, or between two computer implemented systems, constitutes a “user experience” with a product. User experience design is the process of supporting user behavior through usability, usefulness, and desirability provided in the interaction with a product. Although an example system for codifying user experience designs and managing the codified user experience designs will occasionally be described in the context of codifying and managing user experience designs that are configured to enable users and storage systems to interact, it should be understood that embodiments may be used in many contexts, and are not limited to use in the context of codifying and managing user experience designs in the context of a storage system. An example of a user experience design might be, for example, a Graphical User Interface (GUI) component or set of screens that is configured to enable a user to access a particular feature on a storage system. User experiences are designed, for example using design systems100, to enable the graphical user interface to be used to achieve a particular objective. 
In the context of a GUI that is used to interface a software program, the term “user experience design”, as used herein, is used to refer to a set of graphic components and transitions between states that enable a user to navigate, through the GUI, to enable the user to access the intended feature of the software program. In the context of a CLI, the term “user experience design” is used to refer to a selected set of API calls that are arranged to enable the user to access the intended objective. Conventionally, user experience designs would be created by experience designers. For example, if a new feature is to be added to a software product, and the software product has a graphical user interface (GUI), often the GUI will need to be modified to enable the users to access the new feature of the software product. Stated differently, a new user experience will need to be created (designed) to enable the user to access the new feature of the software product. To create the new user experience, a software interaction design professional would create a version of how the GUI may be configured, to enable a person to access the new feature through the software product's GUI. The initial version of the changes to the GUI might be created by the design professional using a design tool such as Figma, Adobe XD, Sketch, or by manually diagramming the GUI experience. The user experience design would then be reviewed by the design professionals, the product managers responsible for implementing the new feature in the software product, and engineers responsible for actually implementing the GUI from the mockup provided by the design professional. After agreeing on the details of the user experience design, the engineers would implement the user experience design in software to add the user experience design to the software product GUI. The GUI would then be tested to ensure that the new feature of the product is actually accessible via the GUI. Often this process would iterate multiple times from any stage back to the original design phase, which can cause delays in implementing new features in the software product. Additionally, where the new feature is intended to be accessed using multiple different user experience designs, such as by a CLI as well as a GUI, each of the user experience design would need to go through this process. Moreover, the conventional process of creating user experience designs is a manual process that requires each participant to keep track of the latest version of the user experience design. In an environment where the user experience design is changing frequently, for example due to architecture changes, implementation approach changes, or due to market/customer requirement changes, this may be difficult to implement. For example, the design professionals and product development team may revise a user experience design, but the engineers tasked with implementing the user experience design may be working on an earlier version of the user experience design. According to some embodiments, a method and apparatus for codifying user experience designs and managing the codified user experience designs is provided. An example user Experience Design Codification and Management System (EDCMS) is shown inFIG.1. In some embodiments, designers create user experience designs170using external design systems100. 
The EDCMS195retrieves a user experience definition175based on the user experience design from the external design system100, and generates a comprehensive user experience specification180from the user experience definition175. Part of the comprehensive user experience specification180includes JavaScript Object Notation (JSON), eXtensible Markup Language (XML) or YAML code created based on the user experience definition175. The EDCMS195then packages and encodes the comprehensive user experience specification to create a codified user experience design185from the comprehensive user experience specification180. The codified user experience design185is then versioned and digitally signed, and the versioned and signed codified user experience190is stored in a user experience design repository145. By automatically generating a codified user experience design185from a user experience design170, it is possible to provide the engineers with a codified version of the intended user experience design170, which describes in JSON, XML, YAML, or another code format the user experience design170that is to be implemented. This eliminates communication errors that might occur between the design professionals and engineers, because the engineers are automatically provided with a packaged and encoded codified user experience design185, that is generated from the user experience design170. By signing and versioning the codified user experience specification190, and automatically entering the signed and versioned codified user experience190in a user experience design repository145where it can then be checked out/checked in, as necessary, it is possible to ensure that everyone is working to implement the correct version of user experience design170. This facilitates collaboration by preventing different members of the design team from working toward implementation of different versions of the user experience design170. FIG.1is a functional block diagram of an example Experience Design Codification and Management System (EDCMS)195, according to some embodiments. As shown inFIG.1, in some embodiments design professionals (people) use existing experience design tools100to create user experience designs170. Example existing design tools include design systems1001-100n, which might be for example an online design system tool such as Figma1001, Adobe XD1002, or a Sketch100n. Many external design systems100might be used, depending on the implementation. Additionally, in some embodiments, user experience designs may be created manually, i.e. without the help of tools such as Figma or Adobe XD, and then processed by a proxy105configured to perform image processing of the manually created design. It should be noted that the design systems1001-100n, are outside of the EDCMS195, as indicated by the vertical dashed line separating the external design systems100from the components of the EDCMS195. In some embodiments, the EDCMS195includes an experience design intake section configured to interact with the design systems1001-100n, to retrieve user definitions175based on the user experience designs170that have been created by the design professionals using these external systems100. For example, in some embodiments the EDCMS195includes a proxy1051-105nconfigured to interact with each respective design system1001-100n. 
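Summarizing the flow described above in a purely illustrative sketch: a user experience definition 175 is turned into a specification 180, packaged and encoded into a codified design 185, signed and versioned into a versioned and signed codified experience 190, and stored. Every name below is hypothetical (the EDCMS subsystems are not a Python library), and JSON with SHA-256 is used only because those are among the formats and algorithms mentioned in this disclosure.

import hashlib, json

def codify_user_experience(definition: dict, prior_versions: list[str]) -> dict:
    """Hypothetical sketch: definition 175 -> specification 180 -> codified 185 -> signed and versioned 190."""
    specification = {
        "metadata": definition.get("metadata", {}),   # persona, lifecycle, outcome, mode (350)
        "workflow": definition.get("workflow", {}),   # states, transitions, components (325)
    }
    package = json.dumps(specification, indent=2, sort_keys=True).encode("utf-8")
    signature = hashlib.sha256(package).hexdigest()   # digital signature (see FIG. 11)
    version = f"1.{len(prior_versions) + 1}"          # simplistic stand-in for the versioning scheme
    return {"version": version, "signature": signature, "package": package}

# Example: codify a trivial definition and keep it in an in-memory stand-in for repository 145.
repository: dict[str, dict] = {}
entry = codify_user_experience({"metadata": {"mode": "GUI"}}, prior_versions=[])
repository[entry["version"]] = entry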
As an example, if the Figma Service (design system1001) enables external access at a particular URL, the Figma proxy1051may be configured to access the external Figma Service URL, request a design created by a particular design professional or team of design professionals, and then download the requested user experience definition175. In some embodiments, each proxy operates in a stateless manner, and makes use of publicly available API interfaces for the experience design platforms100. AlthoughFIG.1shows a one-to-one correspondence between proxy105and design system100, it should be understood that in some embodiments a given proxy105may be configured to interact with more than one design system100, or that a single proxy105may be used to interact with all of the design systems100. According to some embodiments, the EDCMS195is configured to require the design professional to include experience metadata350(seeFIG.3) describing the intended environment of the software interaction experience. The experience metadata350, in some embodiments, includes information about who (the persona305) the software interaction experience is being designed for. Different types of software users (different personas) might be provided with different software interaction experiences. For example, a system administrator may be given a different set of experiences than a normal user. Other personas might be a data center manager, network manager, software engineer, or other similar title. Personas may also be specified by capturing what the roles do, such as server administrator, storage administrator, backup administrator, filesystem user, auditor, security administrator, etc. In addition to specifying the persona305, in some embodiments the experience metadata350also includes information about when, in the product lifecycle310, the person specified in the persona metadata305is expected to encounter the software interaction experience. In some embodiments, the experience metadata350includes information about the intended outcome of the user experience design170. An “outcome”, as that term is used herein, is used to refer to the objective of the software interaction experience. For example, if the software interaction experience has been created to enable a user to create a storage volume on a storage system, that would be the “outcome” that the design professional would specify in the outcome315aspect of the experience metadata350. Other outcomes might include initial configuration of a system, setting up sub-tenants on a leased storage system, creating and mapping Logical Unit Numbers (LUNS) to hosts, monitoring system behavior, creating custom dashboards, etc. Many possible outcomes exist, although it would also be expected that there would be many similar outcomes that design professionals would create for different software products. In some embodiments, the experience metadata350includes information about the particular mode of consumption320, i.e. how a user is to be presented with the software interaction experience. Example modes320might include a Graphical User Interface (GUI) such as on a browser or on a mobile application, an Application Program Interface (API), a Command Line Interface (CLI), or a Continuous Integration/Continuous Delivery (CI/CD) system, or another form or mode of consumption of a user experience. In some embodiments, the experience metadata350includes information about how the experience is achieved. This is the workflow325that is used to achieve the intended outcome. 
For a GUI based user experience design170, the workflow specifies the human interaction actions with screen states and transitions between states. FIG.3is a functional block diagram of example experience design metadata350, according to some embodiments. As shown inFIG.3, in some embodiments the user experience definition175metadata350includes the persona305, lifecycle310, outcome315, and mode of consumption320. In addition, the user experience definition175metadata350includes workflow metadata325specifying a series of states330, transitions between states, components340, and variability information (V). In the example workflow325shown inFIG.3, the workflow metadata325specifies a start state330and a subsequent transition to state3351. In the context of a GUI, the start state might be encountered when the user starts the software application and state3351might be displaying an initial screen on the GUI that includes component3401. The workflow metadata325specifies that, when the user interacts with component3401, that the experience should transition to state3352containing components3402and3403. The workflow metadata325further specifies state transitions that occur in connection with each of the components until an end state345is reached. In some embodiments, the end state345is associated with the outcome315. AlthoughFIG.3shows an example in which the workflow metadata325has one end state345, it should be understood that there may be more than one end state345, depending on the implementation. It should be understood that the example shown inFIG.3is merely one example of an experience design metadata, and it would be expected that different experience designs could vary considerably from the example shown inFIG.3. Additionally, whileFIG.3shows the example workflow in the form of a graph, the workflow may also be specified in a textual manner, for example in a plain text language file. FIG.2is a flow chart of an example process of creating a user experience design170that may be used by the example EDCMS195, according to some embodiments. Although the process of creating the user experience design170is implemented by design professionals using design systems1001-100n, conventionally the design professionals would not be required to include experience metadata350when creating a user experience design170. According to some embodiments, design professionals are prompted to include the experience metadata350when creating user experience designs170, to enable the EDCMS195to create a comprehensive user experience specifications180from the user experience designs170, which are then packaged and encoded, versioned and signed, and stored in a user experience design repository145. As shown inFIG.2, a design professional creates a user experience design170using one of the external design system1001-100n(block200). While creating the user experience design170, or in connection with entering the user experience design170into the EDCMS195, the design professional enters persona information305of the target audience in the user experience design170(block205). The design professional also enters lifecycle information310into the experience design170(block210). The lifecycle information, in some embodiments, identifies a product or service lifecycle stage associated with the user experience design170. In some embodiments, the design professional is prompted to enter the persona, lifecycle, and other similar metadata. 
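As a concrete, non-limiting illustration of the experience metadata 350 of FIG. 3, a user experience definition carrying this information might be represented as the following Python dictionary. The field names and values are invented for this sketch; the disclosure does not prescribe any particular serialization.

# Hypothetical shape of experience metadata 350 for a small GUI workflow (cf. FIG. 3).
experience_metadata = {
    "persona": "storage administrator",          # 305
    "lifecycle": "day-2 operations",             # 310
    "outcome": "create a storage volume",        # 315
    "mode": "GUI",                               # 320
    "workflow": {                                # 325
        "start": "start",
        "end": ["end"],
        "states": {
            "start":  {"components": [], "variability": None},
            "state1": {"components": ["component1"], "variability": "layout"},
            "state2": {"components": ["component2", "component3"], "variability": "color"},
            "end":    {"components": [], "variability": None},
        },
        "transitions": [
            {"from": "start",  "on": None,         "to": "state1"},
            {"from": "state1", "on": "component1", "to": "state2"},
            {"from": "state2", "on": "component3", "to": "end"},
        ],
    },
}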
In other embodiments, the design professional enters annotations made by the designer in the user experience design when interacting with the design system. For example, the designer might include standardized key=value formatted data, such as “mode=CLI” or “mode=GUI”. As another example, the standardized key=value formatted data might include “persona=admin”, “persona=storage admin”, “outcome=InitialDeploy”, “outcome=ConfigureTenant”, etc. Multiple ways of collecting the user experience design metadata might be used depending on the implementation. The design professional is also prompted to enter outcome information315into the user experience design170(block215). The outcome information315, in some embodiments, identifies a result achieved by the user experience design170. The designer is also prompted to enter the mode information320into the user experience design170(block220), which specifies whether the user experience design170is associated with a GUI, API, CLI, etc. The design professional also uses the design system100to enter workflow metadata325that enables the user experience design170to achieve the outcome (block225). If the mode320=GUI (block230), in some embodiments the workflow325includes the set of human interactions with screen states, state contents, state transitions, and state variability (block235). If the mode320=API (block240), in some embodiments the workflow325includes request/response interaction with API endpoints (block245). if the mode320=CLI (block250), in some embodiments the workflow325includes command-line interactions with CLI endpoints (block255). Once the user experience design170has been created, the EDCMS195accesses and obtains a copy of the user experience design170from the design system100. As used herein, the term “user experience definition175” is used to refer to a set of one or more files that are associated with the user experience design170, and which are retrieved by the EDCMS195from the external design system100after the user experience design170has been created on the external design system100. The particular format of the files which comprise the user experience definition175will depend on the syntax used by the external design system100to describe the user experience design170. In some embodiments, when the user experience definition175is retrieved by the EDCMS195, the EDCMS checks for the presence of the required experience metadata350and, if any user experience metadata350is missing, prompts are generated to request that the experience metadata350be added to the user experience definition. FIG.4is a flow chart of an example user experience definition175intake process implemented by the example EDCMS195, according to some embodiments. As shown inFIG.4, in some embodiments the intake process starts when the EDCMS195receives an instruction to process a user experience design170(block400). For example, the system195may have a user access system155such as an API or GUI that is configured to control execution of the EDCMS195, for example to enter instructions into the EDCMS195to cause the EDCMS195to process user experience designs170or to retrieve and interact with versioned and signed codified user experiences190maintained in the user experience design repository145. Accordingly, in some embodiments the EDCMS195receives an instruction to process a user experience design170(block400) via user access155. The EDCMS195determines which external design system100was used to create the user experience designs170(block405). 
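Returning to the standardized key=value annotations mentioned above (for example "mode=GUI" or "persona=storage admin"), such annotations lend themselves to a very small parser. The sketch below is one possible, purely illustrative reading and is not a required part of the EDCMS.

def parse_annotations(lines: list[str]) -> dict[str, list[str]]:
    """Collect designer annotations of the form key=value into a metadata dictionary."""
    metadata: dict[str, list[str]] = {}
    for line in lines:
        if "=" not in line:
            continue                              # ignore free-form notes
        key, _, value = line.partition("=")
        metadata.setdefault(key.strip().lower(), []).append(value.strip())
    return metadata

# Example using the annotations quoted above.
parsed = parse_annotations(["mode=GUI", "persona=storage admin", "outcome=InitialDeploy"])
assert parsed["persona"] == ["storage admin"]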
In some embodiments, the external design system100is specified through user access155. In embodiments such where the EDCMS195includes multiple proxies105, and each proxy105is configured to interact with one of the external design systems100, the intake process selects the proxy105that is configured to interact with the external design system that was used to create the user experience design (block410). It should be understood that, in some embodiments, a given proxy105might be configured to interact with multiple external design systems100or all commonly used external design systems100. Accordingly, in embodiments where the EDCMS195only includes one proxy105, the step shown in Block410might be omitted. The intake process then forwards a request for the user experience definition175to the external design system100, requesting that the external design system forward a copy of the one or more files associated with the user experience design170to the EDCMS195(block420). The proxy then waits to receive the user experience definition175. If the user experience definition175is not received, for example within a timeout period (a determination of NO at block420) the EDCMS195reports an error (block425) for example via the user access155, and the intake process ends. If the user experience definition175is received (a determination of YES at block420) the user experience definition175is forwarded to an implementation processing layer of the EDCMS195(block430). In some embodiments, the implementation layer processes the user experience definition175to create a comprehensive user experience specification180. The implementation layer, in some embodiments, includes a persona and outcome mapping and normalization subsystem110, a finite state machine (FSM) generation subsystem115, a consistency checking and annotation subsystem120, a component and style capture subsystem125, and a specification capture subsystem130. Each of these subsystems is described in greater detail below. AlthoughFIG.1shows the user experience definition175being input to the persona and outcome mapping and normalization subsystem110, it should be understood that the user experience definition175may be simultaneously input to each of the subsystem110,115,120,125,130at the same time. Likewise, althoughFIG.1shows arrows extending between the subsystem110,115,120,125,130from top to bottom, it should be understood that the subsystems may be used in any order, and that the subsystems may process the user experience definition175independently, depending on the implementation. FIG.5is a flow chart of an example persona and outcome mapping and normalization process implemented by the persona and outcome mapping and normalization subsystem110, according to some embodiments. In some embodiments, the outcome mapping and normalization subsystem110captures the target persona from persona metadata305and the target outcome from outcome metadata315and translates the persona305and outcome315into a standard taxonomy of personas and outcomes. For example, if the target persona specified in persona metadata305of the user experience definition175was “sys admin”, and the standard taxonomy included “system administrator” as one of the standard personas, the outcome mapping and normalization subsystem110would change the experience metadata350such that the experience metadata350in the comprehensive user experience specification180referred to the intended persona using the term “system administrator”. 
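Returning to the intake process of FIG. 4, the proxy selection, request, and timeout handling can be sketched as follows. The proxy registry is modeled here as a dictionary of callables and the timeout value is arbitrary; both are assumptions made purely for illustration.

import concurrent.futures

def intake(design_system_name: str, design_id: str, proxies: dict, timeout_s: float = 30.0):
    """Illustrative intake (cf. FIG. 4): select a proxy, request the definition 175, report errors."""
    fetch = proxies.get(design_system_name)        # block 410: one proxy per external design system
    if fetch is None:
        raise RuntimeError(f"no proxy registered for {design_system_name!r}")
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch, design_id)          # request the user experience definition 175
    try:
        return future.result(timeout=timeout_s)     # block 420: received, forward to the implementation layer
    except concurrent.futures.TimeoutError:
        raise RuntimeError("user experience definition not received") from None   # block 425: report an error
    finally:
        pool.shutdown(wait=False)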
In some embodiments, the persona and outcome mapping and normalization subsystem110uses data and textual analytic techniques to implement the mapping and normalization of persona metadata305and outcome metadata315. As shown inFIG.5, in some embodiments the persona and outcome mapping and normalization subsystem110extracts persona information from the persona metadata305of the user experience definition175(block500). In some embodiments, if the persona metadata305or other experience metadata350was not included in the user experience definition175that was retrieved from the design system100, the design professional may be prompted by the persona and outcome mapping and normalization subsystem110to enter the persona metadata305or other experience metadata350via the user access155. The persona and outcome mapping and normalization subsystem110then compares the extracted persona information with a taxonomy of known personas (block505) to determine if the extracted persona is similar to any known personas (block515). If the persona information extracted from the persona metadata305is similar to a known persona (a determination of YES at block515) the persona information is normalized using the known persona in the persona taxonomy (block520). In some embodiments, if the persona entered by the designer is normalized, a change notification is optionally provided to the designer indicating the change that was made to the persona via the user access155. If the persona information extracted from the persona metadata305is not similar to a known persona (a determination of NO at block515), the persona information may be added to the persona taxonomy (block520). Optionally, addition of the persona to the persona taxonomy may require confirmation of the addition via the user access155. The persona and outcome mapping and normalization subsystem110extracts outcome information from the outcome metadata315of the user experience definition175(block525) and compares the extracted outcome information with a taxonomy of known outcomes (block530) to determine if the extracted outcome is similar to any known outcomes (block535). If the outcome information extracted from the outcome metadata315is similar to a known outcome (a determination of YES at block535) the outcome information is normalized using the known outcome in the outcome taxonomy (block540). In some embodiments, if the outcome entered by the designer is normalized, a change notification is optionally provided to the designer indicating the change that was made to the outcome via the user access155. If the outcome information extracted from the outcome metadata315is not similar to a known outcome (a determination of NO at block535), the outcome information may be added to the outcome taxonomy (block545). Optionally, addition of the outcome to the outcome taxonomy may require confirmation of the addition via the user access155. The mapped and normalized persona and outcome are then added to the experience metadata350of the comprehensive user experience specification180(block550). AlthoughFIG.5shows the persona and outcome mapping and normalization subsystem110first processing the persona metadata305and then processing the outcome metadata315, it should be understood that the persona and outcome mapping and normalization subsystem110may process these forms of metadata350in either order or simultaneously, depending on the implementation. 
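The "data and textual analytic techniques" referred to above are not specified in this disclosure; purely for illustration, the sketch below substitutes a simple string-similarity ratio from the Python standard library, with an arbitrary threshold, to normalize a persona (or an outcome) against a taxonomy of known values.

from difflib import SequenceMatcher

def normalize(term: str, taxonomy: list[str], threshold: float = 0.6) -> tuple[str, bool]:
    """Normalize a persona or outcome against a taxonomy of known values (cf. FIG. 5)."""
    term = term.strip().lower()
    best = max(taxonomy, key=lambda known: SequenceMatcher(None, term, known).ratio(), default=None)
    if best is not None and SequenceMatcher(None, term, best).ratio() >= threshold:
        return best, True            # similar to a known entry: use the known taxonomy term
    taxonomy.append(term)            # otherwise the new term is added to the taxonomy
    return term, False

personas = ["system administrator", "storage administrator", "backup administrator"]
assert normalize("sys admin", personas) == ("system administrator", True)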
In some embodiments, the EDCMS195includes a finite state machine generation subsystem115configured to create a finite state machine from the workflow metadata325of the user experience definition175. In some embodiments, the finite state machine generation subsystem115uses the knowledge of the start state330, incrementally captures state transition events and actions, incrementally captures the contents of each state, and incrementally captures the variable/invariable nature of each state. In some embodiments, the finite state machine generation subsystem115uses the workflow metadata325to build a Mealy machine, in which state transitions depend on the current state plus inputs, or a Moore machine, in which state transitions do not depend on the inputs, but only depend on the current state, and produces a formal, intermediate representation of a finite-state machine. In some embodiments, the finite state machine generation subsystem115also runs one or more sanity checks on the finite state machine, to ensure that the finite state machine meets a set of pre-requisite properties for experience designs. Example sanity checks might include a set of Boolean rules, such as “before a page loads, x event is required to happen.” FIGS.6A and6Bare a flow chart of an example finite state machine generation process implemented by the example EDCMS ofFIG.1, according to some embodiments. As shown inFIG.6A, in some embodiments the finite state machine generation subsystem115extracts the workflow metadata325from the user experience definition175(block600). Starting with the start state, the finite state machine generation subsystem115selects the workflow state (block605) and builds a finite state machine from the extracted workflow state (block610). The process of building each state of the finite state machine is shown inFIG.6B, which is discussed below. The finite state machine generation subsystem115incrementally captures state transition events and actions (block615) incrementally captures the content of each state (block620), incrementally captures the variability or invariability of each state (block625), and performs sanity checks on the finite state machine (block630). Once the finite state machine has been built, the finite state machine is added to the comprehensive user experience specification180(block635). As shown inFIG.6B, for each state330,335,345, in the workflow325(block650), the finite state machine generation subsystem115determines whether there are any components in the current state (block655). If there are any components in the current state (a determination of YES at block655) the finite state machine generation subsystem115captures the components of the current state (block660). The finite state machine generation subsystem115also determines whether there are any styles in the current state (block665). If there are any styles in the current state (a determination of YES at block665) the finite state machine generation module115captures the styles of the current state (block670). The finite state machine generation subsystem115also determines whether there are any state transitions from the current state (block665). If there are any state transitions from the current state (a determination of YES at block665) the finite state machine generation subsystem115captures the state transitions from the current state (block670). These processes are iterated for each state in the workflow325. 
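A purely illustrative sketch of the finite state machine generation of FIGS. 6A and 6B, together with one example of the Boolean sanity rules mentioned above, is given below. It reuses the hypothetical experience_metadata dictionary shown earlier, and the dictionary layout of the resulting state machine is an assumption.

def build_fsm(workflow: dict) -> dict:
    """Build a simple finite state machine representation from workflow metadata 325 (cf. FIGS. 6A and 6B)."""
    fsm = {"start": workflow["start"], "end": set(workflow["end"]), "states": {}, "transitions": {}}
    for name, state in workflow["states"].items():
        fsm["states"][name] = {
            "components": list(state.get("components", [])),      # captured state contents
            "styles": list(state.get("styles", [])),
            "variability": state.get("variability"),               # variable/invariable nature of the state
        }
    for t in workflow["transitions"]:                              # captured state transition events
        fsm["transitions"].setdefault(t["from"], []).append((t.get("on"), t["to"]))
    return fsm

def sanity_check(fsm: dict) -> None:
    """Example Boolean rule: every non-end state must have at least one outgoing transition."""
    for name in fsm["states"]:
        if name not in fsm["end"] and not fsm["transitions"].get(name):
            raise ValueError(f"state {name!r} has no outgoing transition")

fsm = build_fsm(experience_metadata["workflow"])   # experience_metadata is the earlier hypothetical example
sanity_check(fsm)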
AlthoughFIG.6Bshows the finite state machine generation subsystem115operating on components, styles, and state transitions in sequential order, it should be understood that the finite state machine generation subsystem115may process these aspects of the workflow states in a different order or simultaneously, depending on the implementation. In some embodiments, the EDCMS195includes a consistency checking and annotation subsystem120. The consistency checking and annotation subsystem120, in some embodiments, determines which elements of the user experience definition175are variable, and which are absolutely required, and annotates the comprehensive user experience specification180to indicate which elements are able to be varied by the engineers when implementing the comprehensive user experience specification180. For example, inFIG.3, each state has a variability V specified, which indicates whether and to what extent the particular state is variable or whether any of the components of the state are variable. An example variability measure may be to specify that the particular color that was selected is variable, such that the component or the state may be adjusted to automatically use one of a standard set of colors. Another example variability measure may be to allow for some latitude as to the particular placement of the component on the interface design. The consistency checking and annotation subsystem120, in some embodiments, uses this variability information as well as heuristics, to annotate which aspects of the design are variable and by what percentage or other quantity the aspect may be varied in the final user experience. In some embodiments, the consistency checking and annotation subsystem120also checks aspects of the user experience definition175, such as components, with a set of known components. For example, if a “cancel transaction” component is always red, and the user experience definition175specifies that the “cancel transaction” component should be bright orange, the comprehensive user experience specification180may be annotated to indicate that the component is indicated as being bright orange, that the user experience definition175indicated that the color was variable, and that the normal color for this component is red. In that manner, when implementing the comprehensive user experience specification180, an engineer can immediately determine both that the color is changeable and know that the normal color for the component is red. FIG.7is a flow chart of an example consistency checking and annotation process implemented by the example EDCMS ofFIG.1, according to some embodiments. As shown inFIG.7, in some embodiments, the consistency checking and annotation subsystem120selects a first component of the user experience definition175(block700). Variability information about the component is then obtained from the experience metadata350of the user experience definition175. A determination is then made as to whether the component is indicated to be variable (block710). If the component is specified as being not variable (a determination of NO at block710), the component is annotated in the comprehensive user experience specification180as be not variable (block715). If the component is specified as being variable (a determination of YES at block710), an artifact is created by annotating the component with the variability information in the comprehensive user experience specification180as being variable (block720). 
The variability information may specify the type of variance that may be implemented, the percentage variability, or other ways that the particular component may be varied while staying within the design parameters of the original user experience definition175. The consistency checking and annotation subsystem120then determines if there are any additional components (block725). If there are additional components (a determination of YES at block725) another component is selected, and the process returns to block700. The process continues until all components of the user experience definition175have been processed (a determination of NO at block725). The component variability information determined by the consistency checking and annotation subsystem120is added to the comprehensive user experience specification180, either at the end of the process as shown inFIG.7or incrementally as each component is processed, depending on the implementation. In some embodiments, the consistency checking and annotation subsystem120uses the same process to also check the styles of each of the states, to determine whether the styles used in each of the states are variable. Style consistency and annotation can be implemented for each state using the same process shown inFIG.7, and can be implemented at the same time as the components are processed by the consistency checking and annotation subsystem120or can be processed separately from the components. In some embodiments, the EDCMS195includes a component and style capture subsystem125configured to capture, by value or by reference, all component instance definitions and related artifacts of the finite state machine. The component and style capture subsystem125, in some embodiments, conducts either a depth-first or breadth-first walk of the finite state machine graph, marking visited states along the way, to identify all components of the finite state machine. The component and style capture subsystem125compares the components used in the finite state machine with a store of known components in database150and, if a new component is detected that is not contained in the store of known components, adds the new component to the store of known components. In this manner, a store of known components can be incrementally built over time by the EDCMS195. In some embodiments, the data store of known components is used by the consistency checking and annotation subsystem120(described above) when checking components of a user experience definition175for consistency with previous versions of the same components. Components in the data store of known components may be indexed, within the namespace of the experience at hand, as well as by its version, signature, and other unique fields, depending on the implementation. In some embodiments, if a component or style in the experience definition matches a known component or style in the data store of known components or styles, the correspondence is noted in the comprehensive user experience specification. FIG.8is a flow chart of an example component capture process implemented by the example EDCMS ofFIG.1, according to some embodiments. AlthoughFIG.8describes some embodiments of the component and style capture subsystem125implementing a depth-first walk of the finite state machine, it should be understood that the component and style capture subsystem125could similarly implement a breadth-first walk of the finite state machine. 
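Returning to the consistency checking and annotation of FIG. 7, the creation of variability artifacts and the comparison against known components can be sketched as follows. The artifact fields and the known-component store are hypothetical, and the "cancel transaction" example simply restates the scenario described above.

KNOWN_COMPONENTS = {"cancel transaction": {"color": "red"}}   # hypothetical store of known components

def annotate_components(components: list[dict]) -> list[dict]:
    """Annotate each component with its variability and any deviation from a known component (cf. FIG. 7)."""
    artifacts = []
    for comp in components:
        artifact = {"name": comp["name"], "variable": bool(comp.get("variability"))}
        if artifact["variable"]:
            artifact["variability"] = comp["variability"]      # e.g. the type and amount of allowed variance
        known = KNOWN_COMPONENTS.get(comp["name"])
        if known and any(comp.get(k) != v for k, v in known.items()):
            artifact["deviates_from_known"] = known             # e.g. the normal color for this component
        artifacts.append(artifact)
    return artifacts

annotated = annotate_components(
    [{"name": "cancel transaction", "color": "bright orange", "variability": {"color": "variable"}}]
)
assert annotated[0]["deviates_from_known"] == {"color": "red"}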
Any process of visiting each state of the finite state machine may be used to capture all components used by the finite state machine, depending on the implementation. Similarly, althoughFIG.8shows the component and style capture subsystem125implementing a walk through the completed finite state machine, it should be understood that the states of the finite state machine may be individually forwarded to the component and style capture subsystem125for processing as they are added to the finite state machine by the finite state machine generation subsystem115, depending on the implementation. As shown inFIG.8, in some embodiments the component and style capture subsystem125retrieves the finite state machine, represented by a finite state machine graph, that has been added to the comprehensive user experience specification180(block800). The component and style capture subsystem125then conducts a depth-first walk of the finite state machine graph marking visited states along the way (block805). For each state on the finite state machine graph, the component and style capture subsystem125determines if the state has any components (block810). If the state has any components, the component and style capture subsystem125captures a first of the components by capturing the component instance definition and related artifacts (block815). A determination is then made as to whether the captured component is a known component (block820). If the captured component is not known (a determination of NO at block820), the component and style capture subsystem125adds the component definition and related artifacts to the data store of known components maintained by database150. Artifacts, in this context, may include variability information associated with the component added to the component by the consistency checking and annotation subsystem120. If the component is known (a determination of YES at block820) the component definition is already in the data store of known components that is maintained in database150(block830) and the component definition does not need to be added to the data store of known components. In some embodiments, an entry may be added to the component entry in the data store of known components to indicate that the component has been used in the comprehensive user experience specification180. The component and style capture subsystem125continues processing the current state by determining if the current state has any additional components (block835). If there are additional components (a determination of YES at block835), the component and style capture subsystem125selects a subsequent component and the process returns to block815. The process iterates until there are no additional components of the current state (a determination of NO at block835). A determination is then made as to whether there are additional states of the finite state machine to be processed (block840). If there are additional states to be processed (a determination of YES at block840) a subsequent state is selected, and the process returns to block810. The process ends when there are no additional states to process (a determination of NO at block840). Optionally, the comprehensive user experience specification180may be annotated to indicate which components are used by which state. 
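The depth-first walk of FIG. 8 can be sketched as follows, again reusing the fsm dictionary built in the earlier sketch; the contents stored for a newly captured component are placeholders rather than a real component instance definition.

def capture_components(fsm: dict, known_store: dict[str, dict]) -> None:
    """Depth-first walk of the finite state machine graph, adding unknown components to the store (cf. FIG. 8)."""
    visited: set[str] = set()
    stack = [fsm["start"]]
    while stack:
        state = stack.pop()
        if state in visited:
            continue
        visited.add(state)                                       # mark visited states along the way
        for component in fsm["states"][state]["components"]:
            if component not in known_store:                     # is the captured component already known?
                known_store[component] = {"first_seen_in": state}   # placeholder for definition and artifacts
        for _event, target in fsm["transitions"].get(state, []):
            stack.append(target)

known_components: dict[str, dict] = {}
capture_components(fsm, known_components)    # fsm is the state machine built in the earlier sketch
assert "component1" in known_components and "component3" in known_components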
Such an annotation may be useful, for example, in instances where components have previously been coded by engineers, to enable the engineers to select previously implemented code for the particular components when implementing the comprehensive user experience specification 180. In some embodiments, in addition to comparing components referenced by states of the finite state machine to known components, the component and style capture subsystem 125 also uses the same process shown in FIG. 8 to review each state visited for style definitions, and compares the style definitions with a data store of known style definitions. Thus, although FIG. 8 shows the process used by the component and style capture subsystem 125 to capture components, it should be understood that the process used in FIG. 8 is also used to capture style definitions of the states of the finite state machine. In some embodiments, the EDCMS 195 includes a specification capture subsystem 130. In some embodiments, this subsystem is configured to convert all parts of the comprehensive user experience specification 180, from the persona and outcome mapping and normalization subsystem 110, the finite state machine generation subsystem 115, the consistency checking and annotation subsystem 120, and the component and style capture subsystem 125, into a standard, versioned, stylized, codified specification. The specification, in some embodiments, is expressed in a human-readable and machine-readable language such as JSON, XML, or YAML. FIG. 9 is a flow chart of an example specification capture process implemented by the example EDCMS of FIG. 1, according to some embodiments. As shown in FIG. 9, in some embodiments the specification capture subsystem 130 retrieves persona and outcome information created by the persona and outcome mapping and normalization subsystem 110 (block 900); retrieves the finite state machine created by the finite state machine generation subsystem 115 (block 905); retrieves annotations from the consistency checking and annotation subsystem 120 (block 910); and retrieves components and style information from the component and style capture subsystem 125 (block 915). The specification capture subsystem 130 creates the comprehensive user experience specification 180 (block 920) in JSON, XML, YAML, or another machine-readable and human-readable language. The finite state machine defines states and transitions between states, which are able to be converted to JSON, XML, or YAML to be output in code form as a comprehensive user experience specification 180 for use by engineers to implement the user experience design 170. Annotations may be added to the JSON, XML, or YAML code as comments, to thereby enable all aspects of the user experience definition 175 to be specified in the JSON, XML, or YAML that is used to implement the comprehensive user experience specification 180. In some embodiments, the JSON, XML, or YAML elements of the comprehensive user experience specification 180 are compared with data format schemas in database 150 (block 925) to ensure that the elements meet the data format schemas needed to implement the user experience definition 175 and to capture new schemas as they are created. Accordingly, in some embodiments a determination is made as to whether a schema of the comprehensive user experience specification 180 is a new schema (block 930). If the schema is a new schema (a determination of YES at block 930), the schema is added to the schema datastore. If the schema is not a new schema (a determination of NO at block 930), the schema is not required to be added to the schema datastore.
In either instance, once the comprehensive user experience specification180has been created in JSON, XML, or YAML, it is forwarded to a management system of the EDCMS195. In some embodiments, the management system has a package generation and encoding subsystem135configured to receive the comprehensive user experience specification180and create a codified user experience design185. In some embodiment, the package generation and encoding subsystem135encodes the comprehensive user experience specification180as well as artifacts received from each of the implementation subsystems. In some embodiments, the package generation and encoding subsystem135operates in a request/response manner with each of the subsystems110,115,120,125,130, to capture partial results and store the partial results in database150. The package generation and encoding subsystem135also packages the comprehensive user experience specification180to enable all aspects of the comprehensive user experience specification180to be included in the codified user experience design185. FIGS.10A-10Eare flow charts of example package generation and encoding processes implemented by the example EDCMS ofFIG.1, according to some embodiments. As shown inFIG.10A, in some embodiments the package generation and encoding subsystem135transmits a request for artifacts to the persona and outcome mapping and normalization subsystem110(block1000). When the persona and outcome mapping and normalization subsystem110receives the artifact request (block1002) the persona and outcome mapping and normalization subsystem110determines whether there are any new artifacts (block1004). If there are no new artifacts (a determination of NO at block1004), the persona and outcome mapping and normalization subsystem110messages that there are no new artifacts. If there are new artifacts (a determination of YES at block1004), the persona and outcome mapping and normalization subsystem110transmits the artifacts to the package generation and encoding subsystem135(block1006). When the package generation and encoding subsystem135receives the artifact, the package generation and encoding subsystem135packages and encodes the artifact and adds the artifact to the codified user experience design185(block1008). The process then iterates until the codified user experience design185has been fully built and packaged. The package generation and encoding subsystem135uses a similar process to interact with the FSM generation subsystem115(seeFIG.10B), the consistency checking and annotation subsystem120(seeFIG.10C), the component and style capture subsystem (SeeFIG.10D) and the specification capture subsystem (SeeFIG.10E). As shown inFIG.10B, in some embodiments the package generation and encoding subsystem135transmits a request for artifacts to the FSM generation subsystem115(block1010). When the FSM generation subsystem115receives the artifact request (block1012) the FSM generation subsystem115determines whether there are any new artifacts (block1014). If there are no new artifacts (a determination of NO at block1014), the FSM generation subsystem115messages that there are no new artifacts. If there are new artifacts (a determination of YES at block1014), the FSM generation subsystem115transmits the artifacts to the package generation and encoding subsystem135(block1016). 
When the package generation and encoding subsystem135receives the artifact, the package generation and encoding subsystem135packages and encodes the artifact and adds the artifact to the codified user experience design185(block1018). The process then iterates until the codified user experience design185has been fully built and packaged. As shown inFIG.10C, in some embodiments the package generation and encoding subsystem135transmits a request for artifacts to the consistency checking and annotation subsystem120(block1020). When the consistency checking and annotation subsystem120receives the artifact request (block1022) the consistency checking and annotation subsystem120determines whether there are any new artifacts (block1024). If there are no new artifacts (a determination of NO at block1024), the consistency checking and annotation subsystem120messages that there are no new artifacts. If there are new artifacts (a determination of YES at block1024), the consistency checking and annotation subsystem120transmits the artifacts to the package generation and encoding subsystem135(block1026). When the package generation and encoding subsystem135receives the artifact, the package generation and encoding subsystem135packages and encodes the artifact and adds the artifact to the codified user experience design185(block1028). The process then iterates until the codified user experience design185has been fully built and packaged. As shown inFIG.10D, in some embodiments the package generation and encoding subsystem135transmits a request for artifacts to the component and style capture subsystem125(block1030). When the component and style capture subsystem125receives the artifact request (block1032) the component and style capture subsystem125determines whether there are any new artifacts (block1034). If there are no new artifacts (a determination of NO at block1034), the component and style capture subsystem125messages that there are no new artifacts. If there are new artifacts (a determination of YES at block1034), the component and style capture subsystem125transmits the artifacts to the package generation and encoding subsystem135(block1036). When the package generation and encoding subsystem135receives the artifact, the package generation and encoding subsystem135packages and encodes the artifact and adds the artifact to the codified user experience design185(block1038). The process then iterates until the codified user experience design185has been fully built and packaged. As shown inFIG.10E, in some embodiments the package generation and encoding subsystem135transmits a request for artifacts to the specification capture subsystem130(block1040). When the specification capture subsystem130receives the artifact request (block1042) the specification capture subsystem130determines whether there are any new artifacts (block1044). If there are no new artifacts (a determination of NO at block1044), the specification capture subsystem130messages that there are no new artifacts. If there are new artifacts (a determination of YES at block1044), the specification capture subsystem130transmits the artifacts to the package generation and encoding subsystem135(block1046). When the package generation and encoding subsystem135receives the artifact, the package generation and encoding subsystem135packages and encodes the artifact and adds the artifact to the codified user experience design185(block1048). The process then iterates until the codified user experience design185has been fully built and packaged. 
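The request/response collection pattern repeated inFIGS.10A-10Ecan be summarized in a short sketch. This is an illustrative reading of the described flow, not the disclosed implementation; the get_new_artifacts and encode names are assumptions, and a real system would make each call a network exchange and persist partial results to database150between iterations.

# Illustrative sketch of the FIGS. 10A-10E request/response pattern; the
# subsystem interface (get_new_artifacts) and the encode step are assumed names.
def build_codified_design(subsystems, encode):
    """Poll each subsystem for new artifacts until all report none, packaging each."""
    codified_design = []
    while True:
        received_any = False
        for subsystem in subsystems:
            artifacts = subsystem.get_new_artifacts()   # one request/response exchange
            if artifacts:
                received_any = True
                codified_design.extend(encode(a) for a in artifacts)
        if not received_any:          # every subsystem messaged "no new artifacts"
            return codified_design

class _DemoSubsystem:
    """Stand-in for an implementation subsystem that hands out queued artifacts once."""
    def __init__(self, artifacts):
        self._queue = list(artifacts)
    def get_new_artifacts(self):
        batch, self._queue = self._queue, []
        return batch

print(build_codified_design(
    [_DemoSubsystem(["persona-map"]), _DemoSubsystem(["finite-state-machine"])],
    encode=lambda a: {"artifact": a}))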
FIG.11is a flow chart of an example signature marking and versioning process implemented by the example EDCMS ofFIG.1, according to some embodiments. As shown inFIG.11, in some embodiments a signature marking and versioning subsystem140receives the comprehensive user experience specification180and signs and versions the comprehensive user experience specification to create a versioned and signed codified experience specification. In some embodiments, the signature is implemented using a hash to create a digital signature that is virtually guaranteed to be universally unique. The hash might be implemented, for example, using a Secure Hash Algorithm such as SHA-256, which creates a 32-byte hash signature. Other hash algorithms may similarly be used, depending on the implementation. In some embodiments the versioning process assigns a version number to the versioned and signed codified experience specification190to enable each version of a given user experience design to be specifically identified. Example version numbers might be 1.1, 1.2, 1.2.1, etc., depending on the implementation. In some embodiments, the user is prompted to provide input as to how the user experience design should be versioned. The package, its signature, and its version identifier constitute a unique artifact for a particular experience design. Any future change to the design will result in a new signature and a new version number, to enable all versions of the user experience design to be uniquely identified within the user experience design repository145. FIG.12is a functional block diagram of an example data structure configured to implement a user experience design repository145of the EDCMS ofFIG.1, according to some embodiments. As shown inFIG.12, in some embodiments the user experience design repository145includes a data structure1200having entries containing versioned and signed codified experience specifications190. Each entry has a version number1205that uniquely identifies the experience specification190and a signature1210that is able to be used to verify the content of the experience specification190. The user experience design repository145can be used as a single source for all codified versioned instances of all experience designs, and can be used in a CI/CD pipeline manner, kicking off events and operations when new packages are added or existing packages are changed. As described above, in some embodiments the EDCMS is configured to interface with design systems to retrieve a user experience definition based on a user experience design, and generate a full, versioned pattern implementation in a web framework such as Angular, React, Vue, or micro frontend. This enables complete CSS, HTML, and JavaScript to be created for an entire prototype, which is then usable by the engineers to create a user interface based on the user experience design. The methods described herein may be implemented as software configured to be executed in control logic such as contained in a CPU (Central Processing Unit) or GPU (Graphics Processing Unit) of an electronic device such as a computer. In particular, the functions described herein may be implemented as sets of program instructions stored on a non-transitory tangible computer readable storage medium. The program instructions may be implemented utilizing programming techniques known to those of ordinary skill in the art. Program instructions may be stored in a computer readable memory within the computer or loaded onto the computer and executed on the computer's microprocessor.
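As a concrete illustration of the signing and versioning step described above with respect toFIG.11andFIG.12, the following is a minimal sketch assuming the packaged specification is available as bytes and that a simple major.minor versioning policy is used. The disclosure leaves the exact versioning policy to the implementation (including prompting the user), and the repository layout shown is an assumption.

# Minimal sketch, assuming a bytes package and a simple major.minor policy;
# the disclosed system may version differently (e.g., based on user input).
import hashlib

def sign_and_version(package, prior_version=None):
    signature = hashlib.sha256(package).hexdigest()     # 32-byte (256-bit) digest
    if prior_version is None:
        version = "1.0"
    else:
        major, minor = prior_version.split(".")[:2]
        version = f"{major}.{int(minor) + 1}"           # e.g. "1.1" -> "1.2"
    return {"version": version, "signature": signature, "package": package}

repository = {}                                          # stands in for repository145
entry = sign_and_version(b'{"states": []}', prior_version="1.1")
repository[entry["version"]] = entry                     # keyed by version, as in FIG.12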
However, it will be apparent to a skilled artisan that all logic described herein can be embodied using discrete components, integrated circuitry, programmable logic used in conjunction with a programmable logic device such as an FPGA (Field Programmable Gate Array) or microprocessor, or any other device including any combination thereof. Programmable logic can be fixed temporarily or permanently in a tangible non-transitory computer readable medium such as random-access memory, a computer memory, a disk drive, or other storage medium. All such embodiments are intended to fall within the scope of the present invention. Throughout the entirety of the present disclosure, use of the articles “a” or “an” to modify a noun may be understood to be used for convenience and to include one, or more than one of the modified noun, unless otherwise specifically stated. Elements, components, subsystems, and/or parts thereof that are described and/or otherwise portrayed through the figures to communicate with, be associated with, and/or be based on, something else, may be understood to so communicate, be associated with, and/or be based on, in a direct and/or indirect manner, unless otherwise stipulated herein. Various changes and modifications of the embodiments shown in the drawings and described in the specification may be made within the spirit and scope of the present invention. Accordingly, it is intended that all matter contained in the above description and shown in the accompanying drawings be interpreted in an illustrative and not in a limiting sense. The invention is limited only as defined in the following claims and the equivalents thereto.
57,105
11861330
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present disclosure. Embodiments are disclosed in sections according to the following outline:
1.0 General Overview
2.0 Structural & Functional Overview
3.0 Machine Mediated Requirement Management in a Software Trial Management System
4.0 Implementation Example—Hardware Overview
1.0 General Overview In one embodiment, the disclosure provides computer-implemented techniques for machine mediated requirement management in a software trial management system. In one embodiment, the disclosure provides a programmed online distributed computer system or platform implemented via client-server Software as a Service (SaaS) techniques that executes, among other processes, computer-implemented database management and file management techniques, in particular requirement management techniques relating to product evaluations. Embodiments are programmed to provide centralized, normalized storage of requirements definitions associated with a software trial, where two-party commit operations are required when a requirement is added after the launch of the trial. Embodiments can also be programmed to automatically update visual progress bars in one or more of a trial management display, as seen by a prospect, target, or customer, and a consolidated display of multiple trial management records, as seen by a sales engineer or other staff of a seller or vendor who is managing multiple customers or trials, and to support workspaces and requirements tagging or labeling. In one embodiment, the disclosure provides methods and distributed computer systems that are programmed to generate and manage a digital electronic workspace that facilitates defining, viewing, approving, and managing product requirements, and allows software vendors to guide prospects through an evaluation process. In particular embodiments, the disclosure provides a trust-free systematic online framework that allows both vendors and their pre-sales teams, as well as prospects or buyers, to be held accountable. Traditional PreSales processes may involve a PreSales team and a prospect each keeping track of software requirements on separate spreadsheets which can become misaligned, outdated, or filled with errors. Thus, one technical advantage of the disclosure over state-of-the-art techniques is that centralized storage and management of requirements produces more accurate data that can be more efficiently used or transmitted to other processes. While transparency and alignment are important to enable prospects to efficiently manage data associated with an evaluation, they are also essential for members of the selling organization. Hence, the disclosure provides computer-based systems and methods for improving data processing efficiency of enterprise organizations such as Sales, PreSales, and Customer Success. For example, in an embodiment, the disclosure provides an electronic historical record that documents the changes that are made to the digital electronic workspace.
Queries can be transmitted to a database of historical records to allow customer service team members to efficiently determine which account made a particular change to a particular requirement, thereby saving computing resources and bandwidth that might otherwise need to be spent searching through gigabytes of emails or spreadsheets to effectively service a customer account, and representing a distinct technical advantage. In another example, the disclosure provides computer-based systems and methods for leveraging the institutional knowledge and experience of veteran pre-sales leaders. For example, in particular embodiments, the disclosure provides a library of requirements and associated tags that can be pulled from by new pre-sales team members to form initial requirements data for a new project or product. Thus, the disclosed library of particular embodiments allows a pre-sales team to build out requirements while saving computing resources and bandwidth that might otherwise need to be spent manually or algorithmically searching through digital records of previous evaluations, thereby improving the functioning of one or more computing devices. In one embodiment, the disclosure provides a computer-implemented or programmed method, comprising: generating and digitally storing, in computer memory, a unique identifier associated with a digital electronic workspace, the digital electronic workspace being associated with a project, a first account, and a second account; receiving, from the first account, a first input formatted to grant a plurality of permissions to the digital electronic workspace to a second account; generating and digitally storing, in the computer memory, initial requirements data comprising a plurality of digital requirement objects, each digital requirement object comprising a digital electronic representation of a natural language text summary, each natural language text summary describing a potential feature of the project; receiving, from the first account or a third account associated with the first account, a second input indicating a first set of one or more of the digital requirement objects; responsive to receiving the second input, associating the first set of one or more of the digital requirement objects with the unique identifier; receiving, from the second account, a third input to generate and digitally store an additional digital requirement object in the computer memory; associating the additional digital requirement object with the unique identifier; receiving, from the first account and the second account respectively, a fourth input and a fifth input, each of the fourth input and the fifth input indicating a consensus that the project should possess each potential feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier; responsive to receiving the fourth input and the fifth input, changing a state value of a variable associated with the unique identifier from a prior state to a new state, the new state indicating that new digital requirement objects can only be associated with the unique identifier responsive to receiving a digital input indicating assent from each of the first account and the second account; and displaying, in a graphical user interface (GUI), an indication that the state value of the variable has been changed to the new state. 
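Before turning to the additional embodiments below, a brief sketch may help visualize the data shapes the claimed method manipulates. This is a hypothetical representation: the disclosure does not prescribe a concrete schema, so the class and field names here are assumptions that simply mirror the terms used in the claim.

# Hypothetical data shapes mirroring the claim language; not a disclosed schema.
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class DigitalRequirementObject:
    summary: str                       # natural language text summary of a potential feature
    status: str = "AwaitingReview"     # one of the requirement statuses described later

@dataclass
class DigitalElectronicWorkspace:
    unique_identifier: str             # generated and digitally stored for the workspace
    first_account: str                 # e.g., the vendor-side owner account
    second_account: str                # e.g., the prospect-side champion account
    launch_status: str = "AwaitingLaunch"   # changes on two-party consensus, as claimed
    requirements: List[DigitalRequirementObject] = field(default_factory=list)

workspace = DigitalElectronicWorkspace(
    unique_identifier=uuid.uuid4().hex,
    first_account="vendor.owner@example.com",
    second_account="prospect.champion@example.com")
workspace.requirements.append(DigitalRequirementObject(summary="Supports single sign-on"))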
In one embodiment, the unique identifier is further associated with workspace data stored in computer memory, the workspace data being formatted to represent in the computer memory a workspace name, a description, a start date, and a due date, and the workspace name and the description each comprising natural language text. One embodiment includes: storing, in a database, a set of digital requirement objects comprising the plurality of digital requirement objects comprised by the initial requirements data, certain ones of the digital requirement objects of the set of digital requirement objects being associated with one or more tags, respectively; receiving, from the first account, a sixth input selecting one or more particular tags; and causing to be displayed, in the GUI, one or more particular digital requirement objects that are associated with one or more of the one or more particular tags. One embodiment includes: receiving, from the second account or a fourth account associated with the second account, a seventh input indicating that the project possesses a specific feature described in a specific natural language text summary digitally represented by a specific digital requirement object associated with the unique identifier; and responsive to receiving the seventh input, displaying, in the GUI, an indication that the second account or the fourth account has determined that the project possesses the specific feature. One embodiment includes: digitally storing, in a database, an electronic historical record documenting a set of changes made to the digital electronic workspace, the set of changes documented by the electronic historical record including changes made responsive to the third input, the fourth input, and the fifth input and comprising a representation of a particular account that initiated each change of the set of changes, respectively; and displaying, in the GUI, a representation of the electronic historical record. One embodiment includes: receiving, from each of the first account and the second account, an eighth input and a ninth input, respectively, each of the eighth input and the ninth input indicating that the project possesses each feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier; and responsive to receiving the eighth input and the ninth input, displaying, in the GUI, an indication that the first account and the second account each have determined that the product possesses each feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier. In one embodiment, the digital electronic workspace associated with the unique identifier is a first digital electronic workspace of a set of one or more digital electronic workspaces. One embodiment includes: displaying, in a panel of the GUI, one or more links to the one or more digital electronic workspaces of the set of digital electronic workspaces, respectively; and displaying, in the panel of the GUI, one or more progress bars corresponding to the one or more links, the one or more progress bars indicating a completion level of the corresponding one of the one or more digital electronic workspaces of the set of digital electronic workspaces.
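The electronic historical record described in the preceding embodiments can be illustrated with a short sketch of an append-only change log that records which account initiated each change and supports the kind of per-account queries mentioned earlier. The storage shape and field names are assumptions made for illustration; an in-memory list stands in for the database table.

# Hedged sketch of an electronic historical record; all field names are illustrative.
from datetime import datetime, timezone

history = []

def record_change(workspace_id, account, description):
    """Append one change entry attributed to the initiating account."""
    history.append({
        "workspace_id": workspace_id,
        "account": account,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "description": description,
    })

def changes_by_account(workspace_id, account):
    """Answer 'which account made a particular change' style queries."""
    return [entry for entry in history
            if entry["workspace_id"] == workspace_id and entry["account"] == account]

record_change("ws-123", "brian.cooke", "created requirement 'SSO support'")
record_change("ws-123", "celia.hernandez", "marked requirement 'SSO support' Satisfied")
print(changes_by_account("ws-123", "celia.hernandez"))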
In one embodiment, the disclosure provides one or more computer-readable non-transitory storage media storing instructions operable when executed by one or more processors to cause performance of the computer-implemented methods described herein with greater specificity. In one embodiment, the disclosure provides a system comprising: one or more processors; and one or more computer-readable non-transitory storage media coupled to one or more of the processors and storing instructions operable when executed by one or more of the processors to cause the system to perform operations comprising the computer-implemented methods described herein with greater specificity. All references to “accounts” or “users” in this disclosure, refer to manipulation of human-computer interfaces to provide data to a computer system, and/or programmatic action by user accounts or user computers interoperating with a system, and not to human action in the abstract. User accounts may be programmatically granted permissions to a digital electronic workspace through an invitation process such as by an “owner” of a digital electronic workspace sending an email invitation and link to an email address associated with a particular user account. Depending on the particular permissions granted to the account, the user of that account may be able to transmit input to the platform formatted to cause responses as described further herein with more specificity, those responses including, but not limited to, the creation of one or more digital requirement objects and association of those requirement objects with the digital electronic workspace or its unique identifier, changing a state value of a variable that indicates a status of the workspace, assigning tags to a requirement, initializing a task associated with a requirement, uploading an attachment to a workspace, or inviting additional users to a workspace. Furthermore, in some embodiments, tasks can be associated with any of a requirement, workspace, or organization. 2.0 Structural & Functional Overview FIG.1illustrates a distributed computer system showing the context of use and principal functional elements with which one embodiment could be implemented.FIG.1, and the other drawing figures and all the descriptions and claims in this disclosure, are intended to present, disclose, and claim a wholly technical system with wholly technical elements that implement technical methods. In the disclosure, specially programmed computers, using a special-purpose distributed computer system design, execute functions that have not been available before in a new manner using instructions ordered in a new way, to provide a practical application of computing technology to the technical problem of machine mediated requirement management in a software trial management system. Every step or operation that is functionally described in the disclosure is intended for implementation using programmed instructions that are executed by a computer. In this manner, the disclosure presents a technical solution to a technical problem, and any interpretation of the disclosure or claims to cover any judicial exception to patent eligibility, such as an abstract idea, mental process, method of organizing human activity, or mathematical algorithm, has no support in this disclosure and is erroneous. 
Certain aspects of the disclosure may involve or relate to certain non-technical contexts such as PreSales support, and a description of those contexts is appropriate to orient the reader to the technical improvements of the disclosure, but the specification and claims are not directed to those concepts in the abstract and are intended to be interpreted as directed to only the technical processes that are recited. In one embodiment, a distributed computer system comprises a server computer110that is communicatively coupled to client computing device120over network100. Network100broadly represents any combination of one or more data communication networks including local area networks, wide area networks, internetworks, or internets, using any of wireline or wireless links, including terrestrial or satellite links. The network(s) may be implemented by any medium or mechanism that provides for the exchange of data between the various elements ofFIG.1. The various elements ofFIG.1may also have direct (wired or wireless) communications links. The server computer110, the client computing device120, and other elements of the system may each comprise an interface compatible with the network100and may be programmed or configured to use standardized protocols for communication across the networks such as TCP/IP, Bluetooth, or higher-layer protocols such as HTTP, TLS, and the like. In one embodiment, client computing device120may be a computer that includes hardware capable of communicatively coupling the device to one or more server computers, such as server computer110, over one or more service providers. For example, the client computing device120may include a network card that communicates with server computer110through a home or office wireless router (not illustrated inFIG.1) that is communicatively coupled to an internet service provider. The client computing device120may be a smartphone, personal computer, tablet computing device, PDA, laptop, or any other computing device capable of transmitting and receiving information and performing the functions described herein. In one embodiment, the client computing device120may comprise device memory128, operating system122, application program124, and application extension126. In one embodiment, client computing device120hosts and executes the application program124, which the client computing device120may download and install from server computer110, an application store, or another repository. The application program124is compatible with server computer110and may communicate with the server computer110using an app-specific protocol, parameterized HTTP POST and GET requests, and/or other programmatic calls. In some embodiments, application program124comprises a conventional internet browser application that is capable of communicating over network100to other functional elements via HTTP and is capable of rendering dynamic or static HTML, XML, or other markup languages, including displaying text, images, accessing video windows and players, and so forth. In embodiments, server computer110may provide an application extension126for application program124through which the aforementioned communication and other functionality may be implemented. In some embodiments, a device display180, such as a screen, may be coupled to the client computing device120. In one embodiment, device memory128may digitally store one or more items depicted as being stored in memory111.
The server computer110may be implemented using a server-class computer or other computer having one or more processor cores, co-processors, or other computers. The server computer110may be a physical server computer and/or virtual server instance stored in a data center, such as through cloud computing. In one embodiment, server computer110may be implemented using two or more processor cores, clusters, or instances of physical machines or virtual machines, configured in a discrete location, or co-located with other elements in a datacenter, shared computing facility, or cloud computing facility. In some embodiments, client computing device120is only one of a number of client computing devices interconnected with server computer110. There may be potentially many more client computing devices employed in executing the systems and methods described herein. On the other hand, some embodiments may not use Client-Server architecture and may instead implement the disclosed programmed processes on-device; thus, the disclosed architecture is exemplary. Referring again toFIG.1, in one embodiment, server computer110may comprise data processing instructions104coupled to both presentation instructions102and memory111. The memory111may represent any memory accessible by the server computer110including a relational database, a data lake, cloud data storage, local hard drives, computer main memory, or any other form of electronic memory. In various embodiments, server computer110may store and execute sequences of programmed instructions of various types to cause execution of various methods. In example only, server computer110may execute the data processing instructions104and the presentation instructions102in various programmed methods, but server computer110may also execute other types of programmed instructions in particular embodiments. The data processing instructions104may be executed by the server computer110to process or transform data, such as by executing a programmed machine learning model, or to cause data stored in memory111to be transmitted to client computing device120over the network100. In various embodiments, presentation instructions102may be executed by server computer110to cause presentation in a display of a computing device communicating with server computer110over network100(such as client computing device120) or to cause the transmission of display instructions to such a computing device, the display instructions formatted to cause such presentation upon execution. Rather than comprising a general-purpose computer, the server computer110is specially configured or programmed with the functional elements shown inFIG.1. In one embodiment, server computer110digitally stores digital electronic workspace data130. In one embodiment, each row of a database storing the digital electronic workspace data130represents a single workspace of a plurality of digital electronic workspaces. Each workspace may be digitally stored in memory111with a reference to a unique identifier132associated with that workspace. In particular embodiments, the workspace data130can be formatted to represent in the memory111a workspace name, a description, a start date, and a due date, and the workspace name and the description may each comprise natural language text. For example, the workspace start date may be “Nov. 1, 2021,” and the workspace end date may be “Nov. 
30, 2021.” For further example, the workspace name may be “1-Month Leverik Trial,” and the description may be “Evaluating the Contactial product suite.” Each workspace may be further associated in memory111with a launch status136. In particular embodiments, the launch statuses may comprise: “AwaitingLaunch,” “PendingLaunch,” “Launched,” “PendingCompletion,” “Completed,” and “Archived.” The launch status of a workspace associated with a unique identifier132may be digitally updated in memory111by changing a state value of a variable to a value that indicates a corresponding launch status136for that workspace. In particular embodiments, server computer110stores a plurality of digital requirement objects138in memory111. Each digital requirement object138may comprise a digital electronic representation of a natural language text summary, each natural language text summary describing a potential feature of a project associated with the digital electronic workspace. For example, the project could be a transaction involving a sale of a software product between a selling entity and a buying entity. In particular embodiments, server computer110stores task data140in memory111. Elements of the task data140may comprise references to a unique identifier132of a particular digital electronic workspace, thereby indicating that those elements are associated with the particular digital electronic workspace. In particular embodiments, task data140comprises a digital electronic representation of tasks associated with a digital electronic workspace which are to be performed by a particular user account or set of user accounts associated with the digital electronic workspace or its unique identifier132. Particular tasks represented in the task data140may or may not be associated with certain ones of the digital requirement objects138. In some embodiments, task data140can be associated with a workspace or organization. In particular embodiments, server computer110stores attachments data142in memory111. Elements of the attachments data142may comprise references to a unique identifier132of a particular digital electronic workspace, thereby indicating that those elements are associated with the particular digital electronic workspace. In example, attachments data may include image files, text files, video files, slideshow presentation files, spreadsheet files, or other types of files which may be relevant to the project associated with the digital electronic workspace. In particular embodiments, server computer110stores an electronic historical record144in memory111. The electronic historical record144may document a set of changes made to the digital electronic workspace, the set of changes documented by the electronic historical record including changes made responsive to inputs received from one or more user accounts and comprising a representation of a particular account that initiated each change of the set of changes, respectively. 3.0 Machine Mediated Requirement Management in a Software Trial Management System FIG.2illustrates an example computer-implemented or programmed process for machine mediated requirement management in a software trial management system.FIG.2and each other flow diagram herein is intended as an illustration at the functional level at which skilled persons, in the art to which this disclosure pertains, communicate with one another to describe and implement algorithms using programming.
The flow diagrams are not intended to illustrate every instruction, method object or sub-step that would be needed to program every aspect of a working program, but are provided at the same functional level of illustration that is normally used at the high level of skill in this art to communicate the basis of developing working programs. Referring now toFIG.2, in one embodiment, a process200is programmed to start execution at step205by generating and digitally storing, in computer memory, a unique identifier associated with a digital electronic workspace, the digital electronic workspace being associated with a project, a first account, and a second account. The unique identifier may be an alphanumeric string, a hashed, encoded, or encrypted value, or another type of digital data which is stored in computer memory and represents the digital electronic workspace. In the context of this disclosure, a project can be understood as a joint effort being proposed or undertaken by a first organization or entity associated with the first account and a second organization or entity associated with the second account. For example, the project could be a product, such as a software product that the first organization or entity is developing for the second organization or entity. In further example, the first account may be a first user account, for the digital electronic workspace, of a pre-sales leader or manager employed by or contracted by the first organization or entity. On the other hand, the second account may be a second user account, for the digital electronic workspace, of a manager or responsible party employed by or contracted by the second organization or entity. Thus, the second organization or entity may be a prospective buyer (a “prospect”) considering purchasing the software product that the first organization or entity is developing for the second organization or entity. In particular embodiments, a first account or a second account being associated with the digital electronic workspace means that the program code defining the digital electronic workspace in computer memory contains references identifying each of the first account and second account. In particular embodiments, the unique identifier of the digital electronic workspace is stored in the computer memory with information defining each of the first account and the second account, thereby creating associations between each of the accounts and the digital electronic workspace. FIG.3Cillustrates a prompt for receiving input to configure a new digital electronic workspace, in one embodiment.FIG.3Cdepicts a workspace creation panel375displayed in a graphical user interface (GUI)390which may be rendered by a web browser. In particular embodiments, an input field such as text entry box376can be used by the first account to name the workspace, while an input field such as text entry box377can be used by the first account to provide a description for the workspace. A date input field378can be used to enter a start date for the workspace. Moreover, in particular embodiments, a date input field379can be used as an optional field to enter a due date for the workspace. In one embodiment, process200is programmed to execute step215by receiving, from the first account, a first input formatted to grant a plurality of permissions to the digital electronic workspace to a second account.
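As a rough illustration of steps205and215, the sketch below generates an alphanumeric unique identifier for a new workspace and records a permission level for the invited second account as a stored variable. The permission level names and the storage layout are assumptions, since the disclosure does not fix them.

# Rough sketch of steps 205 and 215; permission level names and the storage
# layout are assumptions for illustration only.
import uuid

workspaces = {}      # stands in for digital electronic workspace data 130
permissions = {}     # maps (workspace_id, account) -> permission level variable

def create_workspace(name, description, start_date, due_date, first_account, second_account):
    unique_identifier = uuid.uuid4().hex          # an alphanumeric unique identifier
    workspaces[unique_identifier] = {
        "name": name, "description": description,
        "start_date": start_date, "due_date": due_date,
        "first_account": first_account, "second_account": second_account,
    }
    permissions[(unique_identifier, first_account)] = "owner"
    return unique_identifier

def grant_permissions(unique_identifier, account, level):
    """First input from the first account: grant a permission level to another account."""
    permissions[(unique_identifier, account)] = level

ws_id = create_workspace("1-Month Leverik Trial", "Evaluating the Contactial product suite.",
                         "2021-11-01", "2021-11-30",
                         "brian.cooke", "celia.hernandez")
grant_permissions(ws_id, "celia.hernandez", "champion")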
In particular embodiments, permissions are understood as authorizations to use and change the digital electronic workspace as described further herein with more specificity. A level of permissions granted to a particular account can be stored as a variable in the computer memory associated with the account. Depending on the value of the aforementioned variable, the server computer110can execute coded instructions to grant the particular account access to and the ability to change the digital electronic workspace in particular ways as specified by the program code defining the digital electronic workspace. FIG.3Aillustrates an overview tab of a digital electronic workspace associated with a project, a first account, and a second account, the overview tab featuring a plurality of comments, in one embodiment.FIG.3Ashows example output displayed in a GUI on a device display of a client computing device related to a digital electronic workspace named “1-MONTH LEVERIK TRIAL.” The workspace named “1-MONTH LEVERIK TRIAL” is one of several digital electronic workspaces accessible to a first account via a workspace nav300that shows progress bars302for a plurality of digital electronic workspaces. A workspace information frame304displays information related to the workspace, including certain types of information that may have been uploaded using the prompt depicted inFIG.3C. The depicted workspace information frame shows that the workspace “owner” is Brian Cooke, a first user account associated with the workspace, while the “Champion” is Celia Hernandez, a second user account associated with the workspace. For example, Brian Cooke may be the name of an individual on a pre-sales team of a software vendor and Celia Hernandez may be the name of an individual managing software evaluation and potential purchasing related to a one-month long trial of a software called “Leverik.” FIG.3Adepicts a plurality of buttons which may be used to access various tabs related to the digital electronic workspace: overview button310(for accessing an overview tab like the one depicted inFIG.3A), requirements button320(for accessing a requirements tab), tasks button330(for accessing a tasks tab), attachments button340(for accessing an attachments tab), and members button350(for accessing a members tab). Responsive to receiving input indicating a user account selecting a particular one of these buttons, server computer110can execute programmed instructions formatted to cause displaying the appropriate corresponding tab of the digital electronic workspace. In particular embodiments, the overview tab also includes one or more side controls301which can access other aspects of the digital electronic workspace as explained further herein with greater specificity. And in particular embodiments, a presentation button309can be used to hide certain visual elements of the overview tab or other tabs that would otherwise be rendered in the GUI390, as explained further herein with greater specificity. FIG.4Aillustrates a “members” tab of a digital electronic workspace, in one embodiment. In particular embodiments, the first input of step215is transmitted to the platform by the first account using the “members” tab depicted inFIG.4A. For example, a user account may select the invite button391, triggering server computer110to display a prompt in the GUI390. 
The user account may then submit information in the prompt such as a name, email address, and user type (permissions level) of a user to invite to the workspace, triggering server computer110to grant permissions to the digital electronic workspace to the invited user. In particular embodiments, the “members” tab includes information about the user accounts associated with the workspace in a name column351, a user type (permissions) column352, an activity status column353, and an email column354displayed in the GUI390. Referring again toFIG.3A, in particular embodiments, the overview tab includes an attachment frame308that allows a user account to upload files to the digital electronic workspace. Various types of files may be uploaded including image files, text files, video files, slideshow presentation files, spreadsheet files, or other types of files.FIG.3Adepicts representations of several pdf files and a .pptx that were uploaded to the digital electronic workspace, potentially by selecting the depicted “+new” button and specifying a file path to the desired files. In particular embodiments, the overview tab includes a progress bar302that shows progress made in the workspace on requirements (represented in computer requirement objects). In particular embodiments, each requirement object may take on one of a variety of different requirement statuses such as “AwaitingReview,” “UnderReview,” “Satisfied,” “PartiallySatisfied,” “NotSatisfied,” “Roadmap,” “Exempt,” “Withdrawn,” or “PendingAgreement.” In the present example, each status value can be defined as:
AwaitingReview—the requirement has been entered but is not yet approved as a valid requirement, and the first reviewer in a mandatory approval workflow has not yet viewed the requirement.
UnderReview—the requirement has been entered but is not yet approved as a valid requirement, and one or more reviewers in the approval workflow have viewed the requirement.
Satisfied—the prospect has completed evaluation of the requirement and has agreed that the product satisfies the requirement.
PartiallySatisfied—the prospect has completed evaluation of the requirement and has agreed that the product satisfies a portion of the requirement.
NotSatisfied—the prospect has completed evaluation of the requirement and has signaled that the product does not satisfy the requirement.
Roadmap—the product does not satisfy the requirement, but the requirement has been added to digitally recorded plans for modification of the product.
Exempt—the prospect has completed evaluation of the requirement and has determined that the product does not need to satisfy the requirement, usually because the requirement does not apply to a particular situation or use case.
Withdrawn—the prospect or the vendor entered the requirement but later determined not to consider the requirement in the evaluation.
PendingAgreement—the requirement has been entered and approved by either the prospect or the vendor but not both, and the counterparty needs to manifest agreement to the requirement.
The particular values for status are not critical and other embodiments can use other status values. The status of a requirement may be digitally updated in the computer memory by changing a state value of a variable to a value that indicates a corresponding requirement status for that requirement, for example, by input from an authorized user to select a change in state.
Said values may indicate whether designated accounts, such as the first account and the second account associated with the workspace, agree that the project or product possesses the feature described in the natural language text summary digitally represented by the requirement object with the particular status. As shown, the progress bar may be displayed along with text that describes how many of the requirement objects of the digital electronic workspace have each of the aforementioned statuses. In particular embodiments, after the number of the requirement objects of the digital electronic workspace with the “Satisfied” status surpasses a threshold number digitally stored in the computer memory, the progress bar can turn green. Similarly, the progress bar can be yellow when a moderate number of the requirement objects are satisfied, and it can be red when few of the requirement objects have been satisfied. Color-coding in this manner is optional and other embodiments can use means other than color to visually communicate a state or change in state. In particular embodiments, the overview tab includes a public feed that displays at least one of comments or a representation of an electronic historical record. In particular embodiments, comments comprise text or images submitted by user accounts associated with the workspace. In particular embodiments, user accounts associated with the workspace may be alerted whenever they are mentioned in a comment, either through a visual indicator displayed in the GUI390or by server computer110executing programmed instructions formatted to send an email to the mentioned user. For example, users may be considered mentioned when their name or account name is uploaded in a comment preceded by the “@” symbol, for example, “@Brian Cooke.” In particular embodiments, users might not receive an email to a registered email address every time they are mentioned in a comment but may instead receive a periodic email digest. In particular embodiments, server computer110may be programmed to execute notification/subscription logic within the APOLLO framework. In particular embodiments, when a user clicks a comments button312, the platform may execute programmed instructions formatted to display one or more comments313as text rendered in the GUI390. Similarly, in particular embodiments, when a user clicks a history button314, the platform may execute programmed instructions formatted to display a representation of an electronic historical record as text rendered in the GUI390.FIG.3Adepicts one example of an overview tab displaying comments313, for example, after an account has selected the comments button312. FIG.3Billustrates an overview tab of a digital electronic workspace associated with a project, a first account, and a second account, the overview tab featuring a representation of an electronic historical record, in one embodiment. For example,FIG.3Bdepicts output that might be displayed in the GUI390after a user account selects the history button314on the overview tab illustrated inFIG.3A. In particular embodiments, one or more update notes318may be displayed in the public feed311, the collection of update notes318representing the electronic historical record. In particular embodiments, each of the update notes318chronicles a change made to the digital electronic workspace as a result of an input received by server computer110from a user account associated with the workspace.
The update notes318may memorialize information such as the name of the user account that made the change—for example, “Brian Cooke”—a date, a timestamp, and a natural language text description of the change. For example, update notes318displayed in the GUI390may relate to creating a new digital requirement object corresponding to a new requirement for the project or product, changing the status of a requirement, launching the workspace, and the like. In one embodiment, process200is programmed to execute step225by generating and digitally storing, in the computer memory, initial requirements data comprising a plurality of digital requirement objects, each digital requirement object comprising a digital electronic representation of a natural language text summary, each natural language text summary describing a potential feature of the project. FIG.4Dillustrates a “requirements” tab of a digital electronic workspace with a pop-up analytics element displayed proximal to a workspace nav, in one embodiment. In particular embodiments, various digital requirement objects are represented in the “requirements” tab of the digital electronic workspace as requirement rows306, each having a plurality of fields comprising a “name” field, a “description” field, a “status” field, an “owner” field, a “tags” field, and an “importance” field. The “name” and “description” field may comprise representations of natural language text describing the feature of the project or product represented by the particular requirement object. Each requirement object may take on one of a variety of different requirement statuses such as “AwaitingReview,” “UnderReview,” “Satisfied,” “PartiallySatisfied,” “NotSatisfied,” “Roadmap,” “Exempt,” “Withdrawn,” or “PendingAgreement.” The status of a requirement may be digitally updated in the computer memory by changing a state value of a variable to a value that indicates a corresponding requirement status for that requirement. Said values may indicate whether designated accounts, such as the first account and the second account associated with the workspace agree that the project or product possesses the feature described in the natural language text summary digitally represented by the requirement object with the particular status. For example,FIG.4Dshows that the owner of the requirements is “Brian Cooke,” thus the user account of Brian Cooke may be one such designated account capable of updating the status of those requirements, as described further herein with greater specificity. As depicted inFIG.4D, in particular embodiments, certain ones of the digital requirement objects of the set of digital requirement objects are associated with one or more tags, respectively. One or more tags for each of the certain ones of the digital requirement objects may be displayed in the “tags” field. In particular embodiments, the “importance” field displays a level of importance set for the requirement object of the requirement row which is determined by digital input received from a user account associated with the digital electronic workspace. In particular embodiments, a pop-up analytics element326can be displayed on the workspace nav300when a user hovers over a progress bar302for a digital electronic workspace using an input device. In particular embodiments, the pop-up analytics element326comprises data indicating a level of completion of the requirements of the hovered workspace. For example, in the depiction ofFIG.4D,
the pop-up analytics element326indicates that for the “Badtronic Test Drive” digital electronic workspace302, 14 requirements have the status “Not Satisfied,” 2 requirements have the status “Roadmap,” 1 requirement has the status “Partially Satisfied,” and 3 requirements have the status “Satisfied.” These statuses can also be reflected in the progress bar of the “Badtronic Test Drive” as described previously herein, such that requirements that are not satisfied show up in the color red, while requirements that are satisfied show up in the color green, and the like. Notably, as depicted inFIG.4D, the pop-up analytics element326does not necessarily correspond to the statuses of the digital requirement objects of the digital electronic workspace (LEVERIK TRIAL PILOT 7890) depicted in the displayed requirements tab, but rather corresponds to the hovered digital electronic workspace. In one embodiment, process200is programmed to execute step235by receiving, from the first account or a third account associated with the first account, a second input indicating a first set of one or more of the digital requirement objects. In one embodiment, process200is programmed to execute step245responsive to receiving the second input, associating the first set of one or more of the digital requirement objects with the unique identifier. For example, a reference to the unique identifier can be stored with the digital data of the set of the one or more digital requirement objects in the database. FIG.4Billustrates a “library” page of requirements of the platform, in one embodiment. In particular embodiments, the “library” page features requirement rows306representing digital requirement objects that can be added to a desired digital electronic workspace by an associated user account with the requisite permissions. In particular embodiments, a user account may select a particular requirement with an input device, triggering the platform to launch a prompt that allows the user account to select a particular digital electronic workspace to add the requirement to. Moreover, in particular embodiments, the “library” page comprises a search bar and/or filter controls that allow a user account to drill down to find a desired set of digital requirement objects to add to a desired digital electronic workspace. For example, a user account can search within the library of digital requirement objects to find the digital requirement objects that have been labeled with a particular tag. In particular embodiments, the filtered set of desired digital requirement objects can be added to the desired digital electronic workspace together in a batch. In particular embodiments, the “library” page includes control buttons307that allow for the addition of new digital requirement objects or the importation/exportation of one or more digital requirement objects of the library. Thus, in particular embodiments, the first set of one or more of the digital requirement objects could be a set of digital requirement objects added to the digital electronic workspace using the “library” page depicted inFIG.4B. For example, the first account or the third account could use the filter and/or search functionality of the “library” page to find and select a plurality of digital requirement objects having particular tags to add to a desired digital electronic workspace.
Then, the user account could transmit an input formatted to add those selected digital requirement objects to the desired digital electronic workspace. Responsive to receiving that input, server computer110could then cause the selected digital requirement objects to be associated with the unique identifier of the desired digital electronic workspace. FIG.3Hillustrates a “task” tab of a digital electronic workspace, in one embodiment. In particular embodiments, each task of the “task” tab can be represented in the GUI as a task row369of a task panel366that comprises one or more of the name of the task, the owner of the task, the status of the task, the priority of the task, the name of the account assigned to the task, and a due date for the task. Tasks (represented by task rows369) can be associated with a specific digital requirement object or they can exist without an association to a specific digital requirement object. The tasks may also be grouped using a grouping button368. In one embodiment, process200is programmed to execute step255by receiving, from the second account, a third input to generate and digitally store an additional digital requirement object in the computer memory. In particular embodiments, control buttons307(FIG.4D) can be used to receive input from a user account to create a new digital requirement object as described further herein with greater specificity. In one embodiment, process200is programmed to execute step265by associating the additional digital requirement object with the unique identifier. For example, a reference to the unique identifier can be stored with the digital data of the additional digital requirement object in the database. FIG.3Eillustrates changing the status of a digital requirement object stored in computer memory, in one embodiment. In particular embodiments, responsive to receiving input from an account on a graphical element of a requirement row306, the platform may cause a requirement status panel322to populate in GUI390. The requirement status panel may comprise feedback elements324that indicate whether accounts associated with the digital electronic workspace have indicated that the project or product possesses the feature described by the natural language text summary represented by the requirement object corresponding to the selected requirement row306. The requirement status panel322may also have a graphical element such as drop-down menu327that can be used by an authorized account associated with the digital electronic workspace to change the status of the digital requirement object associated with the selected requirement row306. For example, if the user account believes that the project or product possesses the feature, then the account can select the “Satisfied” status from the drop-down menu327. When server computer110receives such input, server computer110can execute programmed instructions to update the status of the requirement object in the database. In particular embodiments, the requirement status panel comprises one or more additional graphical elements displayed in GUI390. In particular embodiments, the requirement status panel may comprise an importance element328which can be used to update the importance of the digital requirement object similarly to the updating of the status as described above. In particular embodiments, the options for importance digitally stored in the database may comprise “High,” “Medium,” and “Low”; however, other values are also possible.
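A compact sketch of the status update path described above might look like the following; the authorization check and the persistence step are simplified assumptions standing in for the permission model and database writes of the disclosed system.

# Simplified sketch of a status update; authorization and persistence are reduced
# to in-memory checks and assignments for illustration only.
VALID_STATUSES = {"AwaitingReview", "UnderReview", "Satisfied", "PartiallySatisfied",
                  "NotSatisfied", "Roadmap", "Exempt", "Withdrawn", "PendingAgreement"}

def update_requirement_status(requirement, new_status, account, authorized_accounts):
    if account not in authorized_accounts:
        raise PermissionError(f"{account} is not permitted to change this requirement")
    if new_status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {new_status}")
    requirement["status"] = new_status     # a real system would update the database row
    return requirement

requirement = {"name": "SSO support", "status": "UnderReview"}
update_requirement_status(requirement, "Satisfied", "celia.hernandez",
                          {"brian.cooke", "celia.hernandez"})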
In particular embodiments, date stamp element325records the date that the digital requirement object was created. In particular embodiments, a tag element329can be used to change which tags are associated with the digital requirement object corresponding to the selected requirement row306. FIG.4Cillustrates a prompt for adding a tag to a digital requirement object, in one embodiment. In particular embodiments, the platform can cause the depicted tag prompt370to be displayed in GUI390when the tag element329(FIG.3E) is selected. In particular embodiments, the tag prompt370can be used to add, change, or remove tags of a digital requirement object. In particular embodiments, each tag can comprise one or more of a name which can be input using a name drop-down371or other input element or a category which can be input using a category drop-down372or other input element. The tags may also each be associated with privacy settings selectable via a privacy checkbox373which can determine whether a particular tag is viewable by accounts associated with a second account associated with an entity or organization evaluating a project or product or only by accounts associated with a first account providing or selling the project or product which is the subject of the digital electronic workspace. A save element374can be used to transmit input to server computer110, causing server computer110to update the tag of the digital requirement associated with the requirement row306in the database. Referring again toFIG.2, in one embodiment, process200is programmed to execute step275by receiving, from the first account and the second account respectively, a fourth input and a fifth input, each of the fourth input and the fifth input indicating a consensus that the project should possess each potential feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier. FIG.3Dillustrates a prompt for changing the state of a digital electronic workspace to a “launched” state such that new digital requirement objects can only be associated with the workspace responsive to receiving input indicating two-party assent, in one embodiment. In particular embodiments, a launch panel362can be populated in the GUI390responsive to receiving input indicating that a user selected a launch workspace button360. In particular embodiments, the launch panel362can then be used to send input to server computer110to launch the workspace. In particular embodiments, a confirmation field365can be used to confirm the choice to “launch” the workspace. In particular embodiments, the launch panel362can also be used to select a user account to be the “Champion” of the workspace, which means that account will become permissioned by the platform (along with the first account/workspace owner) to subsequently transmit input to server computer110indicating that the project should possess each feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier.
In one embodiment, process200is programmed to execute step285responsive to receiving the fourth input and the fifth input, changing a state value of a variable associated with the unique identifier from a prior state to a new state, the new state indicating that new digital requirement objects can only be associated with the unique identifier responsive to receiving a digital input indicating assent from each of the first account and the second account. In one embodiment, process200is programmed to execute step295by displaying, in a graphical user interface (GUI)390, an indication that the state value of the variable has been changed to the new state. For example, the word “launched” might be populated in frame304of the overview tab of the digital electronic workspace (FIG.3A&FIG.3B) indicating the new state of the digital electronic workspace. FIG.3Gillustrates a prompt for an account to provide assent to a new digital requirement object being added to a digital electronic workspace “post-launch,” in one embodiment. When server computer110receives input from a user account attempting to add a new requirement after the digital electronic workspace is in the “launched” state, it may cause a post-launch requirement panel380to be populated in GUI390. In particular embodiments, an agreement field382indicates whether the properly permissioned user accounts have agreed that a new digital requirement object should be created for a new requirement for the project or product. In particular embodiments, the properly permissioned account can use an agree button381to transmit input to server computer110assenting to the creation of the new digital requirement object. In particular embodiments, a post-launch status field397indicates whether the first and second account have assented to the creation of the additional digital requirement object and the association of the additional digital requirement object with the digital electronic workspace and/or its unique identifier. In particular embodiments, description fields321may comprise natural language text summaries of the feature of the additional digital requirement object.FIG.3Fillustrates a panel indicating received digital feedback related to whether a project possesses a specific feature corresponding to a specific digital requirement object, in one embodiment. In particular embodiments, the panel ofFIG.3Fmight be displayed in GUI390for a digital electronic workspace that has already been launched. In particular embodiments, server computer110may receive, from each of the first account and the second account, inputs indicating that the project possesses each feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier. Responsive to receiving these inputs, server computer110may cause to be displayed in the GUI, an indication that the first account and the second account each have determined that the product possesses each feature described in each natural language text summary digitally represented by each digital requirement object associated with the unique identifier. For example, in the display ofFIG.3A, the Progress bar306could indicate that no requirements are not Satisfied and that all requirements have the status of Satisfied or a neutral state such as Withdrawn. Furthermore, the Completed date in frame304could be filled in and the Status value in that panel could be Complete. 
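The launch and post-launch behavior described above amounts to a small state check: before launch, either party may add digital requirement objects, while after the state value changes to “launched” a new digital requirement object is associated with the workspace only once both the first account and the second account have assented. The Python sketch below is a schematic illustration under assumed names; the Workspace class, its methods, and the assent bookkeeping are not the disclosed implementation.

class Workspace:
    def __init__(self, unique_identifier):
        self.unique_identifier = unique_identifier
        self.state = "draft"            # becomes "launched" after the fourth and fifth inputs
        self.requirements = []
        self.pending_assents = {}       # requirement_id -> set of accounts that have assented

    def launch(self):
        self.state = "launched"

    def add_requirement(self, requirement_id, assenting_account):
        if self.state != "launched":
            # Before launch, requirement objects can be added without two-party assent.
            self.requirements.append(requirement_id)
            return "added"
        # After launch, both the first account and the second account must assent.
        agreed = self.pending_assents.setdefault(requirement_id, set())
        agreed.add(assenting_account)
        if {"first_account", "second_account"} <= agreed:
            self.requirements.append(requirement_id)
            return "added post-launch with two-party assent"
        return "pending assent"

ws = Workspace("WS-1")
ws.launch()
print(ws.add_requirement("REQ-new", "first_account"))   # pending assent
print(ws.add_requirement("REQ-new", "second_account"))  # added post-launch with two-party assent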
4.0 Implementation Example—Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.5is a block diagram that illustrates a computer system500upon which one embodiment may be implemented. Computer system500includes a bus502or other communication mechanism for communicating information, and a hardware processor504coupled with bus502for processing information. Hardware processor504may be, for example, a general-purpose microprocessor. Computer system500also includes a main memory506, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus502for storing information and instructions to be executed by processor504. Main memory506also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor504. Such instructions, when stored in non-transitory storage media accessible to processor504, render computer system500into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system500further includes a read only memory (ROM)508or other static storage device coupled to bus502for storing static information and instructions for processor504. A storage device510, such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus502for storing information and instructions. Computer system500may be coupled via bus502to a display512, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device514, including alphanumeric and other keys, is coupled to bus502for communicating information and command selections to processor504. Another type of user input device is cursor control516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor504and for controlling cursor movement on display512. This input device typically has two degrees of freedom in two axes, a first axis (for example, x) and a second axis (for example, y), that allows the device to specify positions in a plane. Computer system500may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system500to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system500in response to processor504executing one or more sequences of one or more instructions contained in main memory506. 
Such instructions may be read into main memory506from another storage medium, such as storage device510. Execution of the sequences of instructions contained in main memory506causes processor504to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as storage device510. Volatile media includes dynamic memory, such as main memory506. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus502. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor504for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system500can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus502. Bus502carries the data to main memory506, from which processor504retrieves and executes the instructions. The instructions received by main memory506may optionally be stored on storage device510either before or after execution by processor504. Computer system500also includes a communication interface518coupled to bus502. Communication interface518provides a two-way data communication coupling to a network link520that is connected to a local network522. For example, communication interface518may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface518may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface518sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. Network link520typically provides data communication through one or more networks to other data devices. 
For example, network link520may provide a connection through local network522to a host computer524or to data equipment operated by an Internet Service Provider (ISP)526. ISP526in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet”528. Local network522and Internet528both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link520and through communication interface518, which carry the digital data to and from computer system500, are example forms of transmission media. Computer system500can send messages and receive data, including program code, through the network(s), network link520and communication interface518. In the Internet example, a server530might transmit a requested code for an application program through Internet528, ISP526, local network522and communication interface518. The received code may be executed by processor504as it is received, and/or stored in storage device510, or other non-volatile storage for later execution. The term “cloud computing” is generally used herein to describe a computing model which enables on-demand access to a shared pool of computing resources, such as computer networks, servers, software applications, and services, and which allows for rapid provisioning and release of resources with minimal management effort or service provider interaction. A cloud computing environment (sometimes referred to as a cloud environment, or a cloud) can be implemented in a variety of different ways to best suit different requirements. For example, in a public cloud environment, the underlying computing infrastructure is owned by an organization that makes its cloud services available to other organizations or to the general public. In contrast, a private cloud environment is generally intended solely for use by, or within, a single organization. A community cloud is intended to be shared by several organizations within a community; while a hybrid cloud comprises two or more types of cloud (for example, private, community, or public) that are bound together by data and application portability. Generally, a cloud computing model enables some of those responsibilities which previously may have been provided by an organization's own information technology department, to instead be delivered as service layers within a cloud environment, for use by consumers (either within or external to the organization, according to the cloud's public/private nature). Depending on the particular implementation, the precise definition of components or features provided by or within each cloud service layer can vary, but common examples include: Software as a Service (SaaS), in which consumers use software applications that are running upon a cloud infrastructure, while a SaaS provider manages or controls the underlying cloud infrastructure and applications. Platform as a Service (PaaS), in which consumers can use software programming languages and development tools supported by a PaaS provider to develop, deploy, and otherwise control their own applications, while the PaaS provider manages or controls other aspects of the cloud environment (i.e., everything below the run-time execution environment). 
Infrastructure as a Service (IaaS), in which consumers can deploy and run arbitrary software applications, and/or provision processing, storage, networks, and other fundamental computing resources, while an IaaS provider manages or controls the underlying physical cloud infrastructure (i.e., everything below the operating system layer). Database as a Service (DBaaS) in which consumers use a database server or Database Management System that is running upon a cloud infrastructure, while a DbaaS provider manages or controls the underlying cloud infrastructure, applications, and servers, including one or more database servers. In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the disclosure, and what is intended by the applicants to be the scope of the disclosure, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
66,078
11861331
DETAILED DESCRIPTION In general, the techniques described in this document can be used to provide standard statistical programming language, for example, R, operations, which include large-scale data processing and large scale data-parallel pipeline functionality, to an end user without the end user having to learn a new programming model or change their existing code syntax. Aspects of the inventive concepts provide implementations of R functions using data wrappers to abstract implementation details of large-scale data processing and data-parallel pipelines from an end user such as a data analyst or statistician. For ease of understanding the R programming language is used as an example, but the techniques described in this document are applicable to any high-level statistical programming language. Large-scale processing may be performed in a distributed data processing system, such as a datacenter or a network of datacenters. For example, large-scale Internet services and the massively parallel computing infrastructure that supports such services may employ warehouse-sized computing systems, made up of thousands or tens of thousands of computing nodes. FIG.1is a block diagram illustrating an example of a datacenter (100). The datacenter (100) is used to store data, perform computational tasks, and transmit data to other systems outside of the datacenter using, for example, a network connected to the datacenter. In particular, the datacenter (100) may perform large-scale data processing on massive amounts of data. The datacenter (100) includes multiple racks (102). While only two racks are shown, the datacenter (100) may have many more racks. Each rack (102) can include a frame or cabinet into which components, such as processing modules (104), are mounted. In general, each processing module (104) can include a circuit board, such as a motherboard, on which a variety of computer-related components are mounted to perform data processing. The processing modules (104) within each rack (102) are interconnected to one another through, for example, a rack switch, and the racks (102) within each datacenter (100) are also interconnected through, for example, a datacenter switch. In some implementations, the processing modules (104) may each take on a role as a master or slave. The master modules control scheduling and data distribution tasks among themselves and the slaves. A rack can include storage (e.g., one or more network attached disks) that is shared by the one or more processing modules (104) and/or each processing module (104) may include its own storage. Additionally, or alternatively, there may be remote storage connected to the racks through a network. The datacenter (100) may include dedicated optical links or other dedicated communication channels, as well as supporting hardware, such as modems, bridges, routers, switches, wireless antennas and towers. The datacenter (100) may include one or more wide area networks (WANs) as well as multiple local area networks (LANs). FIG.2is a block diagram illustrating an example computing device (200) that may be used for one or more of the processing modules (104). In a very basic configuration (201), the computing device (200) typically includes one or more processors (210) and system memory (220). A memory bus (230) can be used for communicating between the processor (210) and the system memory (220). 
Depending on the desired configuration, the processor (210) can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor (210) can include one or more levels of caching, such as a level one cache (211) and a level two cache (212), a processor core (213), and registers (214). The processor core (213) can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. A memory controller (216) can also be used with the processor (210), or in some implementations the memory controller (215) can be an internal part of the processor (210). Depending on the desired configuration, the system memory (220) can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. System memory (220) typically includes an operating system (221), one or more applications (222), and program data (224). The application (222) performs large-scale data processing using statistical programming language syntax which is familiar to data analysts and statisticians. Program Data (224) includes a library for large-scale data processing such as MapReduce or Pregel (202b), a pipeline library such as Flume (202c), and a high-level data wrapper package (202d) for translating between a high-level statistical programming language and lower-level libraries. The operating system (221) generally includes procedures for handling various basic system services and for performing hardware dependent tasks. In some embodiments, the application (222) can be arranged to operate on an operating system (221). The computing device (200) can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration (201) and any required devices and interfaces. System memory (220) is an example of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device (200). Any such computer storage media can be part of the device (200). The libraries (202b,202c) and the high-level data wrapper package (202d) provide functions and classes that may be employed by the application software (222) to, using a statistical programming language, perform large-scale data processing and implement data-parallel pipelines in such large-scale data processing. The library for large-scale data processing may support the MapReduce programming model for processing massive amounts of data in parallel. The MapReduce model generally involves breaking computations down into a mapreduce operation, which includes one or more map operations and may include a reduce operation. The process includes receiving a dataset as input, dividing the dataset into data blocks, parsing the data blocks into key/value pairs, sending key/value pairs through a user-defined map function to create a set of intermediate key/value pairs, and reducing the key/value pairs by combining values associated with the same key to produce a final value for each key. 
Implicit in this model is a shuffle operation, which involves grouping all of the values with the same key. A mapreduce library may implement a map phase, a shuffle phase, and a reduce phase to support computations formulated according to the MapReduce model. In some implementations, to use the mapreduce library, a user program (or another library, such as a pipeline library) calls the mapreduce library, specifying information such as: the input file(s); the output files to receive the output data; and application-specific data processing operators for mapping and reducing. The large-scale data processing library may also support a graph-based programming model such as the Pregel programming model. The Pregel model is used for large-scale graph processing and takes input that is a directed graph in which each vertex is uniquely identified by a string vertex identifier. Each vertex is associated with a modifiable, user defined value. The directed edges are associated with their source vertices, and each edge consists of a modifiable, user defined value and a target vertex identifier. The Pregel model generally involves expressing graphs as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. A pregel library may provide users with a natural API for programming graph algorithms while managing the details of distribution invisibly, including messaging and fault tolerance. It is similar in concept to MapReduce. Although libraries for large-scale data processing such as MapReduce and Pregel make the task of writing data-parallel code significantly easier for software developers, many computations may require data pipelines of distributed data processing operations. A data pipeline is a chain of processing elements arranged so that the output of each element is the input of the next. Programming and managing such pipelines can be difficult. Therefore, software developers may use a library for building scalable data processing pipelines such as Flume. The pipeline library (202c) provides functions and classes that support data-parallel pipelines and, in particular, pipelines that include chains or directed graphs of large scale data processing operations such as those from MapReduce or Pregel. In general, many real-world computations require a chain of large-scale data processing operations. While some logical computations can be expressed as a single data processing operation, other computations require a sequence or a graph of the operations. FIG.3is a block diagram illustrating an example of a pipeline library (300) that may be used to implement the pipeline library as shown in the computer device ofFIG.2(202c). Although the pipeline library is shown within the computing device ofFIG.2, it may also be stored remotely. The pipeline library (300) includes one or more parallel data collection classes (302), one or more parallel operations (304), an evaluator (306), an optimizer (308), and an executor (310). In general, the parallel data collection classes (302) are used to instantiate parallel data objects that hold a collection of data, and the parallel operations (304) are used to perform parallel operations on the data held by the parallel data objects. 
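Before turning to how the pipeline library composes such operations, the map, shuffle, and reduce phases described above can be made concrete with a minimal single-process Python sketch. It illustrates the MapReduce programming model only, not the distributed mapreduce library itself, and the word-count example and function names are illustrative assumptions.

from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    # Map phase: each input record is expanded into intermediate key/value pairs.
    intermediate = []
    for record in records:
        intermediate.extend(map_fn(record))
    # Shuffle phase: group all values that share the same key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: combine the values for each key into a final value.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Example: word counting, a classic mapreduce computation.
lines = ["the quick brown fox", "the lazy dog"]
counts = map_reduce(
    lines,
    map_fn=lambda line: [(word, 1) for word in line.split()],
    reduce_fn=lambda word, ones: sum(ones),
)
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}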
The parallel operations (304) may be composed to implement data-parallel computations and an entire pipeline, or even multiple pipelines, can be implemented using the parallel collection classes (302) and parallel operations (304). Parallel data collection classes (302) and operations (304) present a simple, high-level, uniform abstraction over many different data representations and over different execution strategies. The parallel data collection classes (302) abstract away the details of how data is represented, including whether the data is represented as an in-memory data structure, as one or more files, or as an external storage service. Similarly, parallel operations (304) abstract away their implementation strategy, such as whether an operation is implemented as a local, sequential loop, as a remote parallel invocation of a large-scale data processing library, as a query on a database, or as a streaming computation. A pipeline library may implement parallel operations using deferred evaluation. The evaluator (306) may construct an internal execution plan dataflow graph that contains the operations and their arguments. Once the execution plan dataflow graph for the whole logical computation is constructed, the optimizer (308) revises the execution plan, for example, by applying graph transformations that fuse or combine chains of parallel operations together into a smaller number of combined operations. The revised execution plan may include a generalized mapreduce operation, for example, that includes multiple, parallel map operations and multiple, parallel reduce operations, but which can be translated into a single mapreduce operation with a single map function to implement multiple map operations and a single reduce function to implement the multiple reduce operations. The executor executes the revised operations using underlying primitives. When running the execution plan, the executor may choose which strategy to use to implement each operation based in part on the size of the data being processed. The executor may also place remote computations near the data on which they operate, and may perform independent operations in parallel. The pipeline library may be implemented in any of a number of programming languages. The following describes examples of aspects of an implementation. A pipeline library provides a parallel data collection class referred to as a PTable<K,V>, which represents an immutable multi-map. This class is an unordered set of key/value pairs with keys of type K and values of type V. Keys and values may be one of several resource types including: vectors, lists, dataframes, environments, and NULL. There may also be multiple entries with the same key. Additionally, the pipeline library may include a container for a single object of type T, which may be called PObject<T>. A PObject<T>'s associated methods are designed to operate on a single element. A pipeline library may also include several methods that perform operations such as map-like operations. A map-like operation may transform a key/value pair into some number of other key/value pairs. These operations include: mapping; grouping key/value pairs by key; combining values; reducing values; sorting values; and flattening values. As described above, the pipeline library executes parallel operations lazily, using deferred evaluation. 
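The PTable abstraction and the map-like operations just listed can be pictured with the following Python analogy. It is an illustrative sketch rather than the pipeline library's classes, it evaluates eagerly (the deferred evaluation mentioned above is discussed next), and the method names are assumptions.

class PTable:
    """Toy analogue of an immutable multi-map of key/value pairs with map-like operations."""
    def __init__(self, pairs):
        self._pairs = tuple(pairs)  # immutable: every operation returns a new PTable

    def items(self):
        return list(self._pairs)

    def parallel_do(self, fn):
        # Map-like operation: each key/value pair may produce any number of new pairs.
        return PTable(out for key, value in self._pairs for out in fn(key, value))

    def group_by_key(self):
        # Group all values sharing the same key (the analogue of a shuffle).
        groups = {}
        for key, value in self._pairs:
            groups.setdefault(key, []).append(value)
        return PTable(groups.items())

    def combine_values(self, combiner):
        return PTable((key, combiner(values)) for key, values in self._pairs)

table = PTable([("a", 1), ("b", 2), ("a", 3)])
result = table.parallel_do(lambda k, v: [(k, v * 10)]).group_by_key().combine_values(sum)
print(result.items())  # [('a', 40), ('b', 20)]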
The evaluator defers the evaluation of parallel operations, and instead constructs an internal execution plan data flow graph that contains the operations and arguments of the operations. Each parallel data object is represented internally either in deferred, not yet computed, or materialized, computed, state. A deferred parallel data object, for example, holds a pointer to the deferred operation that computes the parallel data object. A deferred operation, in turn, may hold references to the parallel data objects that are the arguments of the deferred operation and the deferred parallel data objects that are the results of the operation. As the data parallel pipeline is executed, the evaluator converts the parallel data objects and parallel operations into a directed graph of deferred, unevaluated objects and operations. This graph may be referred to as the execution plan or execution plan dataflow graph. The optimizer fuses chains or subgraphs of parallel operations in the dataflow graph together into a smaller number of operations, some of which may be combined operations. The executor can then execute these operations using an underlying primitive or other logic. While a large-scale data processing library combined with a library for building scalable data processing pipelines may be scalable to extremely large data sizes and may be an easier programming model for software developers than a parallel data processing library alone, this combination is not sufficient for data analysts and statisticians to efficiently analyze large-scale datasets because data analysts and statisticians need to learn a new programming model in order to use the pipeline library. In an exemplary embodiment, a programming environment according to aspects of the inventive concepts includes a high-level data wrapper package as shown inFIG.2. This data wrapper package wraps a pipeline library (202c). A data wrapper is a data structure or software that contains other data or software to enable the contained elements to exist in a different programming environment and to abstract the implementation details of the contained elements from the user of the data wrapper. The exemplary wrapper package wraps the functionality from the pipeline library into distributed data objects. These distributed data objects may include implementations of functions and operations from a statistical programming language such as R. Statistical functions and operations include any number of functions and/or operations that can be used to analyze data. The data objects from the wrapper package may enable efficient analysis of large-scale datasets while providing normal statistical programming language syntax which is familiar to data analysts and statisticians. Data may be stored as PTables containing a named collection of data elements. This collection may contain a chunk of all the objects that are related to each other by Map-like operations. In some embodiments, as illustrated inFIG.4, a method for data analysts or statisticians to analyze large-scale datasets using a statistical programming language begins with receiving one or more high-level statistical operations written in a statistical programming language such as R (401). The operations may explicitly involve reading or writing data to several different data repositories and may include data transformations. After a high-level operation is received, the operation is dynamically translated into a graph of low-level data operations (403). 
The graph may be a directed graph in which the nodes are the operations to perform and the edges are the data dependencies among the operations. Then, the low-level operations are run (405). These low-level operations may be run either locally or on a distributed backend system. When an optimizer such as the one in the pipeline library depicted inFIG.3(308) runs, the optimizer may sum up the size of all the operations that the optimizer needs to process. If the total size of all the operations is less than a given threshold, for example 16 MB, the low-level operations may be run locally. Otherwise, the operations may be run on a distributed system. In some embodiments, a user can explicitly force local or distributed execution. Local execution can be useful for testing whereas distributed execution may be useful for computationally intensive jobs. Running the operations may involve several sub-steps including multiple parallel data operations or local operations, which may be automatically scheduled. The results of the data operations may be written to a data repository. Once the results are put in a data repository, a data analyst or statistician may enter additional sequences of operations to be performed. Alternatively, if the result set or a subset of the results are small enough to fit into memory, the data or a subset of the data can be loaded into local memory for inspection. In other embodiments, once all the high-level operations and transformations have been received, the graph of operations can be automatically transformed into an efficient graph.FIG.5illustrates the process for generating an efficient graph of operations. In the process, unnecessary operations are removed (503). Similar or related operations can be fused, or chained, together (505). Operations can also be grouped into distributed data processing operations (507). This removal and optimization process may result in fewer operations being performed. The process may also result in an efficient use of computation time and resources. An exemplary embodiment may be used to impute missing data into a statistical dataset. Although this operation is somewhat simplistic, it is not an uncommon way of imputing missing data into statistical datasets. First, a high-level statistical operation may be received which specifies that any missing values in a field called “count” of a table labeled “data” should be replaced with the mean of all the non-missing values of “count” in the “data” table. The received statistical operation may be similar to the following R code: data$count[is.na(data$count)] <- mean(data$count, na.rm=TRUE) An exemplary process may dynamically translate the high-level statistical operations into the graph of operations to be run on a distributed data system using a high-level data wrapper package. The translation process translates the high-level statistical operations into operations understood by a pipeline library. The pipeline library in turn calls large-scale data processing operations using a data processing library to perform the large-scale data processing. The translation process first finds the “data” table using the pipeline library. Then, the field “count” in the “data” table is found. The translation process, using the pipeline and the large-scale data processing libraries, determines which entries in the “data” table are missing values for the “count” field. The non-missing values of “count” are added together across the entire dataset. 
The sum of the non-missing values is divided by the number of “data” table entries which are not missing values for “count” to calculate the arithmetic mean of “count.” The “data” table entries with missing values for count are then updated with the calculated mean. In the cases where a large-scale data processing library such as MapReduce is used to impute missing data into a statistical dataset, the single high-level statistical operation, data$count[is.na(data$count)] <- mean(data$count, na.rm=TRUE), implies at least three operations without aspects of the inventive concepts. One operation would find the missing values, the second would calculate the mean, and the third would replace the missing values. However, the exemplary process requires only two MapReduce operations, one operation to calculate the mean and one to replace the missing values because the operation to find the missing values can be fused into the same steps as calculating the mean and replacing the missing values. These operations may be merged with other operations when the graph of operations is optimized. Given the example statistical operation above, if a subsequent step was added to calculate the logarithm of the data in the data table, y <- log(data), a customary process would require a distinct MapReduce operation to perform the computation of the logarithm. However, the exemplary process can fuse this operation into the mapper of the second MapReduce operation, the operation which replaces the missing values, without adding any more MapReduce steps (a brief illustrative sketch of this fusion appears at the end of this description). The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms; and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of non-transitory signal bearing medium used to actually carry out the distribution. 
Examples of a non-transitory signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.). With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
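Returning to the fused imputation example above, the following plain-Python sketch shows why two passes suffice: the first pass computes the mean of the non-missing values (locating the missing values is folded into the same pass), and the second pass both replaces the missing values and applies the fused logarithm step. It is an illustration only, with made-up data values, and is not the code that the wrapper package or the mapreduce library would actually generate.

import math

data_count = [3.0, None, 5.0, None, 10.0]  # the "count" field; None marks missing values

# Pass 1 (first mapreduce operation): sum and count the non-missing values to get the mean.
total = sum(v for v in data_count if v is not None)
non_missing = sum(1 for v in data_count if v is not None)
mean_count = total / non_missing  # 6.0

# Pass 2 (second mapreduce operation): the mapper both imputes the mean for missing
# values and applies the fused log step, so no additional pass is required.
imputed_and_logged = [math.log(v if v is not None else mean_count) for v in data_count]
print(mean_count)
print(imputed_and_logged)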
25,430
11861332
DETAILED DESCRIPTION OF EMBODIMENTS Various embodiments of the present disclosure relate generally to methods and systems for dynamic component visualization. In general, the present disclosure is directed to improving technology used to build an application (e.g., a program, a software, a build, etc.). Techniques and systems disclosed herein are directed to receiving string requests for localization. The string requests may be for any applicable part of an application such as a text string, a button, an experience, a graphic, a theme, or the like. The localization may be for one or more locales that may be identified by a user, may be automatically generated (e.g., by machine learning component), may be determined based on historic use, or the like. As a simplified example, the string request may include string content to be localized and that string content may be the word “enter.” A pull request may be created based on the string request. An output of the pull request may be to approve or reject the string request. As part of generating the pull request and/or implementing the pull request, a temporary string bundle with machine localized string content may be generated. The temporary string bundle may be a test string bundle that is used to perform a system validation check by applying the temporary string bundle to a system environment. The test may determine whether or not the temporary string bundle complies with the system environment requirements. If the test results are favorable (e.g., a binary 1 instead of 0, an approved result, a numerical value corresponding to a compatibility amount, etc.), then the string request may be approved and designated to be an approved phase. A string request in an approved phase may be transmitted to a localization component. The localization component may be a component that provides a contextual localized string bundle comprising original string content to be localized as well as one or more context localized string content. The context localized string content may correspond to string content that is localized based on context (e.g., as provided in a string request). The contextual localized string bundle may be provided to a library for access by an original editor (e.g., software development platform) as well as one or more other editors that can access the same contextual localized string bundle from the library. Techniques and systems disclosed herein result in a faster and more efficient use of string bundles by an editor. The contextual localized string bundles disclosed herein provide a faster retrieval of localized strings from a central repository. For example, different editors may retrieve the same contextual localized string bundle from a central repository, which may reduce the number of localization string requests, reduce correction of mismatched strings, and also reduces the storage space required to store multiple mismatched strings. Additionally, techniques disclosed herein mitigate system environment failures post contextual localization by performing a system environment check prior to generating a more resource intensive contextual localization. Such a system environment check mitigates costly resource use and reduces time spent in expending such resources. As applied herein, localization refers to a process, technique, or to the implementation of adapting, modifying, translating, and/or adjusting content, product(s), and/or services to a local market. 
The local market may be determined based on geographical factors, linguistic factors, social factors, socioeconomic factors, demographics, or the like. For example, localization may be used to modify the word “on” for use in a plurality of locales, based on its context. In this example, the word “on” may refer to “turning on” (e.g., turning a mobile device on) in the United States for ages 25-80. The color associated with the word “on” for this locale may be green. A localization for the word “on” in a Spanish locale for ages 15-20 may be “encender” which refers to turning on. The color associated with this locale may be a teal color which indicates turning something on. As applied herein, a string request may include one or more of a string identifier (e.g., a numerical value or a short hand for identifying a given string content), a string context (e.g., context that conveys “turning on”), and a string content (e.g., the word “on”). The string request may be originated by a user or may be, in whole or in part, originated by a software component configured to scrape an application for localization. The software component configured to scrape an application for localization may be a machine learning component. The software component configured to scrape an application for localization may review code and/or an interface to identify string content based on its training (e.g., training to generate a machine learning model that identifies content to be localized), based on flags or indicators associated with the content, or the like. As applied herein, a pull request may be a request to localize string content. A pull request may be submitted after or in parallel with generating a string request. The pull request may be evaluated by a system level component that is configured to determine whether a given string request meets system criteria. The system criteria may include whether the given string request is received after a code freeze, whether a previous string request matches a new string request, whether the string request is compatible with a given application (e.g., via a system validation check, as discussed herein), or the like. The pull request may be automatically triggered and may be supplemented by a request feature branch, a commit message, a pull request title, and/or the like. As applied herein, a temporary string bundle may be a test string bundle that includes machine generated localizations. The machine generated localizations may be non-contextual localizations that are generated using a machine component. The machine component may receive a string content as an input and may output machine localizations for one or more locales. The machine component may use any applicable technique to provide the one or more localizations such as, but not limited to, lookup tables, word associations, web results, a machine learning model, or the like. The temporary string bundle may include the string content and the one or more machine localizations. As applied herein, a contextual localized string bundle may include a string content and one or more contextual localized strings. The contextual localized strings may be context based localizations that are provided based on context based machine learning, user input, and/or a combination of the same. For example, a string content and string context may be provided as inputs to a localization machine learning model. 
The localization machine learning model may select a subset of available layers within the localization machine learning model based on the string content. The localization machine learning model may provide a contextual localized string output based on localizing the string content by applying the selected subset of available layers to the string content. The contextual localized string bundle may be provided in any applicable format such as a JavaScript Object Notation (JSON), a comma-separated values (CSV) format, text format, or the like. The format may be determined by a given editor or may be predetermined. FIG.1depicts an exemplary environment100in which systems, methods and other aspects of the present disclosure may be implemented. Environment100may include a development platform102that may include a plurality of editors such as editor104and editor106. Each editor may have respective branches such as branches104A and104B for editor104and branches106A and106B for editor106. As applied herein, development platform102may be a software platform configured to develop a given application and associated with a server125. The application may be generated using a plurality of editors (e.g., editor104and editor106), such that each editor may contribute to a component of the application. Accordingly, development platform102, editors104and106, and branches104A,104B,106A, and106B may all be or include software and/or firmware components configured to develop a given application. First editor104may include branches104A and104B such that a given branch may correspond to a version of code associated with first editor104. Similarly, second editor106may include branches106A and106B such that a given branch may correspond to a version of code associated with second editor106. The development process for first editor104may be independent or semi-dependent on the development process for second editor106. One or more attributes of first editor104may overlap with second editor106. For example, as further disclosed herein, a string request may be initiated at first editor104and a corresponding contextual localized string bundle may be called by both first editor104and second editor106. Development platform102(e.g., server125associated with development platform102) may connect to a network120. Server125may be implemented using multiple computers that cooperate to perform the functions discussed below, which may be located remotely from each other. Server125may be a local server, a remote server, a cloud server, or the like. Network120may be any suitable network or combination of networks and may support any appropriate protocol suitable for the communication of data between various components in environment100. Network120may include a public network (e.g., the Internet), a private network (e.g., a network within an organization), or a combination of public and/or private networks. Environment100may include one or more computer systems configured to gather, process, transmit, and/or receive data. In general, whenever environment100or components thereof is described as performing an operation of gathering, processing, transmitting, or receiving data, it is understood that such operation may be performed by a computer system thereof. Development platform102may be connected to a localization component130either directly and/or via network120. The localization component130may be configured to localize string content based on string context generated at development platform102. 
The localization component130may include one or more machine learning models trained to receive string content and string context, and output contextual localized string bundles based on the same. According to an implementation, the localization component130may include user input mechanisms to receive user input. Development platform102may be connected to a repository110either directly and/or via network120. The repository110may store one or more contextual localized string bundles such that one or more editors (e.g., editor104, editor106, etc.) may access the stored contextual localized string bundles. The repository110may be a binary repository configured to store binary versions of the contextual localized string bundles. The repository110may catalogue the stored binary versions of the contextual localized string bundles such that they can be retrieved by one or more editors (e.g., editor104, editor106, etc.). FIG.2depicts a flowchart200for providing contextual localized string bundles. At202of flowchart200, a string request including a string identifier, a string context, and a string content may be received. The string request may be generated at an editor (e.g., editor104, editor106, etc.) of development platform102. The editor may generate the string request automatically or based on user input. A recognition component at an editor may determine one or more strings that require localization. The recognition component may be an optical character recognition (OCR) component that scans a compiled version of code at the editor or the code itself to identify one or more string content for localization. The recognition component may be a scraper such as a code scraper or a compiled code scraper that may identify one or more string content for localization. The string identifier may be automatically generated or may be received via user input. An automatically generated string identifier may be assigned based on chronological order, based on a sequence, or may be generated based on the string content. Development platform102may track one or more string identifiers upon generation of the string identifier and may associate the string identifier with a temporary string bundle, with a contextual localized string bundle, and/or a repository storage of the contextual localized string bundle. Repository110may identify a binary contextual localized string bundle based on its associated string identifier. The string context may be auto generated or may be received via user input. An automatically generated string context may be identified by a recognition component as disclosed herein. For example, an OCR component or scraping component may identify one or more strings or other content (e.g., images, themes, colors, graphics, videos, sounds, etc.) from a code or an executed version of the code associated with an editor or overall program. The identification may be based on tagged strings or content. For example, one or more text strings within a code may be tagged (e.g., via metadata or other applicable tag) as corresponding to context cues. In this example, the tagged strings may be dates, labels, titles, sub-titles, etc. The recognition component may be configured to identify string context that is proximate to string content. For example, the recognition component may limit the string context by a number of characters (e.g., two hundred characters) such that text or content that exceeds the character threshold is not applied as string context for the given string content. 
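The character-limited context window mentioned above might be implemented along the following lines; the two-hundred-character figure comes from the example in the text, while the function name, the windowing strategy, and the sample snippet are hypothetical illustrations of limiting string context to text near the string content.

def extract_string_context(source_text, string_content, max_context_chars=200):
    """Return up to max_context_chars of text surrounding the first occurrence of string_content."""
    index = source_text.find(string_content)
    if index == -1:
        return ""  # content not found; no context to report
    half_window = max_context_chars // 2
    start = max(0, index - half_window)
    end = min(len(source_text), index + len(string_content) + half_window)
    return source_text[start:end]

code_snippet = 'button_label = "on"  # shown on the power screen when the device can be turned on'
print(extract_string_context(code_snippet, '"on"', max_context_chars=40))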
At204ofFIG.2, a pull request for the string request may be generated. The pull request may be automatically generated and/or may be user input. According to an implementation, the pull request may be supplemented by user input such that an automatically generated pull request is completed based on additional information provided by a user. The pull request may be a request to approve the string request before resources are expended to localize the string content and to ensure that an applicable contextual localized string bundle is not already available. Accordingly, the pull request may reduce system resource use by automatically determining whether the string request is in a valid format (e.g., ensuring that the string request will not cause time intensive errors). Additionally, the pull request may reduce processing time by comparing the string request to available contextual localized string bundle (e.g., at repository110). The pull request at204may place the string request in a pending phase. The pull request may include a request feature branch, a commit message, and a pull request title. The request feature branch may be a designation of a branch (e.g., branch104A, branch104B, branch106A, branch106B, etc.) and may identify which branch in an editor's code process for which the pull request corresponds. The pull request may use the request feature branch to determine whether the given string request is applicable to the identified request feature branch. For example, a given string request may be approved, via the pull request, for a first branch but might not be approved for a second branch. The determination of whether to approve or deny for a given branch may depend on an attribute of the branch (e.g., a code freeze, a code condition, a pending code, etc.). A commit message may be a message associated with a pull request that may be stored with the pull request and/or a corresponding contextual localized string bundle. For example, the commit message may be stored in a non-substantive portion of a contextual localized string bundle (e.g., a header, a comment, etc.). A pull request title may be automatically generated or may be assigned by a user. The pull request title may differentiate a given pull request from another pull request and may be used during a review of pull requests. At206, a temporary string bundle may be generated with one or more machine localized string content. The temporary string bundle may be generated as part of the pull request or may be generated based on the pull request (e.g., may be triggered as part of the pull request). The temporary string bundle may be a test string bundle that includes machine generated localizations. The machine generated localizations may be non-contextual localizations that are generated using a machine component. The machine component may receive a string content as an input and may output machine localizations for one or more locales. A string context corresponding to the string content might not be provided to the machine component. The machine component may use any applicable technique to provide the one or more localizations such as, but not limited to, lookup tables, word associations, web results, a machine learning model, or the like. The temporary string bundle may include the string content and the one or more machine localizations. The temporary string bundle may mimic the contextual localized string bundle such that it may be in a format similar to the contextual localized string bundle. 
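For orientation, the string request and the temporary string bundle discussed above might be represented with structures along these lines; the field names, locales, and the machine_localize helper are illustrative assumptions rather than the disclosed format.

from dataclasses import dataclass, field

@dataclass
class StringRequest:
    string_identifier: str   # e.g., "S-0042"
    string_context: str      # e.g., "label on a button that turns the device on"
    string_content: str      # e.g., "on"

def machine_localize(content, locale):
    # Stand-in for a non-contextual machine localization (lookup table, web result, model, etc.).
    fake_table = {("on", "es-ES"): "encender", ("on", "fr-FR"): "activer"}
    return fake_table.get((content, locale), content)

@dataclass
class TemporaryStringBundle:
    string_content: str
    machine_localizations: dict = field(default_factory=dict)  # locale -> localized string

request = StringRequest("S-0042", "turning a mobile device on", "on")
bundle = TemporaryStringBundle(
    request.string_content,
    {loc: machine_localize(request.string_content, loc) for loc in ("es-ES", "fr-FR")},
)
print(bundle)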
The temporary string bundle may be a test string bundle that is used to perform a system validation check, at208, by applying the temporary string bundle to a system environment. The system environment may be an editor environment or a development platform102environment. An editor environment may be an environment associated with the portion of an application with which a given editor (e.g., editor104, editor106, etc.) is associated. For example, an editor environment may be specific to a given feature of an application. Accordingly, the temporary string bundle may be tested against that given feature to determine whether the given feature is implementable as part of the application after insertion of the temporary string bundle. A development platform102environment may extend beyond the editor environment such that it may incorporate a plurality of editor environments or may incorporate an entire application. For example, the development platform102environment may incorporate an entire application and, accordingly, the temporary string bundle may be tested against the entire application to determine whether the entire application is implementable after insertion of the temporary string bundle. Accordingly, a temporary string bundle based test may determine whether or not the temporary string bundle complies with the system environment requirements. If the test results are favorable (e.g., a binary 1 instead of 0, an approved result, a numerical value corresponding to a compatibility amount, etc.), then the string request may be approved and designated to be in an approved phase, at210.

According to an implementation, a temporary string bundle test may be used to determine a level of compatibility of the temporary string bundle. As disclosed herein, an editor environment or a development platform102environment may be used to perform a system validation check for the temporary string bundle. Accordingly, the editor environment or the development platform102environment may be compiled with the temporary string bundle and the resulting compiled temporary application may be evaluated for one or more locales. A system validation check may be performed to identify errors using, for example, an invalid character check, a score calculation, or the like. A point value may be associated with deviations from an ideal score, as a result of the system validation check. The deviations may be determined based on compiling with the temporary string bundle and may include a line break (e.g., a text string outside predetermined parameters), an error message that prevents the code from compiling in part or full, a restricted visual effect (e.g., an "L" on a line), an invalid character, or the like. A compatibility score may be generated based on either the temporary string bundle resulting in no deviations (e.g., an ideal score such as 100) or based on one or more deviations (e.g., a score less than the ideal score). The temporary string bundle test may have an approved result at210if the compatibility score is at or above a threshold and may have a failed result if the compatibility score is below the threshold.
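For illustration, a minimal sketch of such a deviation-based compatibility score follows. The penalty values and the threshold of 90 are assumptions chosen for the example; the disclosure only requires that deviations reduce an ideal score (e.g., 100) and that the result is compared to a threshold.

```python
# Minimal sketch of a compatibility score derived from deviations found during
# a system validation check. Penalty amounts and the threshold are assumptions.
IDEAL_SCORE = 100
PENALTIES = {
    "line_break": 5,        # text string outside predetermined parameters
    "compile_error": 50,    # error preventing the code from compiling
    "restricted_visual": 10,
    "invalid_character": 15,
}

def compatibility_score(deviations: list[str]) -> int:
    return max(0, IDEAL_SCORE - sum(PENALTIES.get(d, 0) for d in deviations))

def validation_result(deviations: list[str], threshold: int = 90) -> str:
    return "approved" if compatibility_score(deviations) >= threshold else "failed"

print(validation_result([]))                 # approved (ideal score, no deviations)
print(validation_result(["compile_error"]))  # failed (score falls below threshold)
```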
At212ofFIG.2, the string request may be transmitted to localization component130based on the approval at210. Localization component130may include a localization machine learning model and/or may receive user input. The localization component130may output one or more contextual localized strings (e.g., for one or more locales). A contextual localized string bundle comprising original string content as well as one or more context localized string content may be generated based on the outputs of the localization component130. According to an implementation, transmitting the string request at212may also include transmitting one or more additional string requests from one or more additional feature branches and/or editors such that multiple string requests can be transmitted (e.g., to a localization machine learning model) at the same time.

The localization machine learning model may include a plurality of layers and/or a plurality of weights that are configured based on the localization machine learning model's training. The model may be configured to receive the string content and the string context as inputs. It may be trained to select a subset of available layers within the model based on the string context. The localization machine learning model may receive the string context and may categorize the string context based on analyzing the string content. The categorization may be done based on historical categorization provided to the localization machine learning model during its training, as further disclosed herein. The localization machine learning model may determine a contextual localized string output based on localizing the string content by applying the selected subset of available layers to the string content. Accordingly, the localization may be specific to the layers selected based on the string context.

Alternatively or additionally, the localization machine learning model may be trained to adjust one or more weights based on the string context. For example, the localization machine learning model may receive the string context and may categorize the string context based on analyzing the string content. The categorization may be done based on historical categorization provided to the localization machine learning model during its training, as further disclosed herein. The localization machine learning model may determine a contextual localized string output based on localizing the string content by applying the selected weights to the string content. Accordingly, the localization may be specific to the weights selected based on the string context.

A contextual localized string bundle may be generated based on one or more outputs of the localization machine learning model. Localization component130, development platform102, and/or another component may compile the string content and one or more context localized string content output by the localization component130to generate a contextual localized string bundle.

At214ofFIG.2, the contextual localized string bundle may be received. The contextual localized string bundle may be transmitted over network120or may be provided to development platform102directly. According to an implementation, the contextual localized string bundle may be formatted at the development platform102or at another component. The formatting may be, for example, converting the contextual localized string bundle to a storable format in repository110. According to an example, the storable format may be a binary format such that the contextual localized string bundle is converted to binary code.
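For illustration, the following minimal sketch assembles a contextual localized string bundle from localization outputs and converts it to a binary storable form. The JSON/UTF-8 encoding, field names, and example translations are assumptions; the disclosure states only that the bundle may be converted to a storable (e.g., binary) format for repository110.

```python
# Minimal sketch of assembling a contextual localized string bundle and
# converting it to a binary representation suitable for repository storage.
import json

def build_contextual_bundle(string_id: str, content: str,
                            localized: dict[str, str]) -> dict:
    """Combine the original string content with context-localized outputs."""
    return {"id": string_id, "source": content, "localizations": localized}

def to_storable_binary(bundle: dict) -> bytes:
    """Convert the bundle to binary code for storage in the repository."""
    return json.dumps(bundle, sort_keys=True).encode("utf-8")

bundle = build_contextual_bundle(
    "string-0001", "Checkout",
    {"es-ES": "Finalizar compra", "fr-FR": "Passer la commande"},
)
binary_bundle = to_storable_binary(bundle)
```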
At216, the contextual localized string bundle formatted in accordance with the storable format may be stored in a library, such as repository110, such that it can be accessed by a plurality of editors (e.g., editor104, editor106, etc.). One or more editors may call the contextual localized string bundle to extract the contextual localized strings contained therein. The one or more editors may call the contextual localized string bundle using a localized identifier such as any applicable pointer such as a pointer based on the string identifier, pull request title, or the like. The localized identifier may be an updated version of the string identifier, pull request title, or the like based on the contextual localized string bundle. The one or more editors may include a requesting editor (i.e., the editor that submitted the string request) and one or more additional editors. The one or more editors may access a contextual localized string bundle by providing a corresponding localized identifier.

According to an implementation, a subsequent string request may be generated for a subsequent string content, after the contextual localized string bundle is provided to the library at216. A determination may be made that the subsequent string request matches the string request received at202. The determination may be made based on comparing the string content to the subsequent string content, based on comparing the string context to the subsequent string context, comparing the pull request generated at204to a subsequent pull request corresponding to the subsequent string request, or the like. Based on determining that the subsequent string request matches the string request received at202, the subsequent string request may be denied and a pointer to the contextual localized string bundle provided to the library at216may be provided. Accordingly, use of system resources and time may be mitigated by preventing the generation of a new contextual localized string bundle when an existing contextual localized string bundle applies. Additionally, storage space at the library is also reduced by preventing duplication of contextual localized string bundles.
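For illustration, a minimal sketch of deduplicating string requests against a library of stored bundles follows. The in-memory dictionary, the matching key (string content plus string context), and the pointer format are simplifying assumptions; as noted above, the comparison could also involve pull requests or other attributes.

```python
# Minimal sketch of denying duplicate string requests and returning a pointer
# to an existing contextual localized string bundle instead.
class BundleLibrary:
    def __init__(self):
        self._by_key: dict[tuple[str, str], str] = {}   # (content, context) -> pointer

    def store(self, pointer: str, content: str, context: str) -> None:
        self._by_key[(content, context)] = pointer

    def handle_request(self, content: str, context: str) -> tuple[bool, str | None]:
        """Return (denied, pointer): deny the request if a matching bundle exists."""
        pointer = self._by_key.get((content, context))
        if pointer is not None:
            return True, pointer     # reuse the existing bundle; skip new localization
        return False, None

library = BundleLibrary()
library.store("repo://bundles/string-0001", "Checkout", "shopping cart page")
denied, ptr = library.handle_request("Checkout", "shopping cart page")  # (True, pointer)
```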
According to an implementation, a secondary system validation check may be performed before an editor implements a contextual localized string bundle. The secondary system validation check may determine whether or not the contextual localized string bundle complies with an editor's or development platform102's system environment requirements. If the test results are favorable (e.g., a binary 1 instead of 0, an approved result, a numerical value corresponding to a compatibility amount, etc.), then the contextual localized string bundle may be approved for use by one or more editors. For example, if approved, the contextual localized string bundle may be provided to a library.

FIG.3Adepicts an application and gateway flow diagram. As shown, an input application component302may include a login page304for a user to log into an editor or development platform102. A landing page306may be provided to interact with the editor and/or to select a string request, pull request, or access a library (e.g., repository110). An import page308may be used for providing or augmenting a string request. For example,FIG.3Bdepicts an example string request input screen351that may be completed automatically or may be completed in whole or in part by a user. String request input screen351may be completed by a machine learning model, as disclosed herein, and may be provided to a user for verification. As shown, string request input screen351may include a symbolic name field350(e.g., a string identifier), a context field352(e.g., a string context), and a string language field354(e.g., a string content) which may designate the language for the string content. The information included in string request input screen351may be committed using a button356and may be provided to an export page310ofFIG.3A. The export page310may include an inline edit page312that allows for modification of one or more attributes of a version of the string request input screen351.

As also shown inFIG.3A, a server gateway page320may include a server login page322, a branch retrieval page324, a database retrieval page326, an export page328, and an import page330. Server login page322may grant access to a pull request (e.g., to view, approve, reject, etc. a given pull request). Branch retrieval page324may provide access to data related to a given branch (e.g., a branch associated with a pull request). Database retrieval page326may provide access to database data that may include historical contextual localized string bundles and associated information. Export page328and import page330may facilitate exporting string requests after a pull request has been approved and/or receiving contextual localized string bundles to facilitate modification and/or storage of the contextual localized string bundle. For example, server gateway320may format a received contextual localized string bundle (e.g., conversion to binary) for storage at repository340.

FIG.3Cshows an example pull request input screen361that may be completed automatically or may be completed in whole or in part by a user. Pull request input screen361may be completed by a machine learning model, as disclosed herein, and may be provided to a user for verification. As shown, pull request input screen361may include a feature branch field360(e.g., to identify the feature branch associated with a corresponding string request), a commit message362(e.g., for information about the string request), a pull request title field364(e.g., a pull request identifier), and/or a pull request description field366(e.g., for information about the pull request). The information included in pull request input screen361may be committed using a button368and may be provided to an export page328ofFIG.3A.

FIG.4shows a system flow diagram400for the system and techniques disclosed herein. As shown, an initiate string bundle request401step may include inserting a string on a local branch at402or updating a string on a local branch at404. As disclosed herein, the string may be automatically generated or may be provided by a user. Updating a string may be identifying a contextual localized string bundle and updating a string content to trigger the process outlined inFIG.2. At406, local source files and/or binary versions of the same may be generated. A machine translation of the string generated at402or404may be converted to a temporary string bundle and at408changes to a local environment may be tested. If the test fails, an update to the respective string may be made at404and the process may repeat. If the test passes, a pull request may be committed at410. The string request and/or pull request may be provided to a global storage426. A localization workflow step403may include receiving new (e.g., from402) or updated (e.g., from404) string or pull requests at412.
The new or updated string or pull requests may be retrieved from global storage426. At414, a package may be prepared for localization. For example, the package may include the string content and string context presented for localization. At416the prepared package may be transmitted to a localization component (e.g., a machine learning model) and at418the output results from the localization component may be received. A local storage may be updated with new localizations at420. The update may include components to generate a contextual localized string bundle such as one or more of an original string content and one or more localized string content. At422a pull request submission may be made and may include the components to generate a contextual localized string bundle.

At string bundle generation step405, the pull request committed at410and the submission at422may be received to make changes to a global storage at424. The changes may include storing pointers and/or other information related to a pull request. At428, source files and/or binaries may be generated based on the pull request submission including the components to generate a contextual localized string bundle, as submitted at422. As disclosed herein, the contextual localized string bundle may include an original string content and one or more contextual localized strings. At430, source files and binaries may be stored in respective repositories for access by a plurality of editors.

At editor build workflow step407, a trigger to build on a global branch may be received at432. At434, a respective repository may be accessed to copy binaries stored at430, from the respective repository. At436, the editor build may be completed using the binaries copied at434and at438, the build may be packaged for download or downstream use (e.g., use of an application).

As disclosed, one or more implementations described herein include a machine learning model. A machine learning model disclosed herein may be trained using the data flow500ofFIG.5. As shown inFIG.5, training data512may include one or more of stage inputs514and known outcomes518related to a machine learning model to be trained. The stage inputs514may be from any applicable source including historical localizations, context categories, context attributes, etc. (e.g., one or more outputs from a step from flowchart200ofFIG.2). The known outcomes518may be included for machine learning models generated based on supervised or semi-supervised training. An unsupervised machine learning model might not be trained using known outcomes518. Known outcomes518may include known or desired outputs for future inputs similar to or in the same category as stage inputs514that do not have corresponding known outputs. The training data512and a training algorithm520may be provided to a training component530that may apply the training data512to the training algorithm520to generate a machine learning model. According to an implementation, the training component530may be provided comparison results516that compare a previous output of the corresponding machine learning model to a desired output, so that the previous result can be applied to re-train the machine learning model. The comparison results516may be used by the training component530to update the corresponding machine learning model.
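For illustration, a minimal sketch of such a training component follows. The example-memorizing "algorithm" is a deliberately simple stand-in assumption so the sketch stays self-contained; the disclosure permits the networks and models listed below (DNNs, CNNs, Bayesian networks, decision forests, and the like). Each comparison result here is assumed to be an (input, previous output, desired output) triple.

```python
# Minimal sketch of a training component that trains on stage inputs and known
# outcomes, and re-trains from comparison results of previous model outputs.
class TrainingComponent:
    def __init__(self):
        self.examples: dict[str, str] = {}   # learned mapping of input -> outcome

    def train(self, stage_inputs: list[str], known_outcomes: list[str]) -> None:
        self.examples.update(zip(stage_inputs, known_outcomes))

    def retrain(self, comparison_results: list[tuple[str, str, str]]) -> None:
        # Each comparison result compares a previous output to a desired output.
        for stage_input, _previous_output, desired_output in comparison_results:
            self.examples[stage_input] = desired_output

    def predict(self, stage_input: str) -> str | None:
        return self.examples.get(stage_input)

component = TrainingComponent()
component.train(["greeting/en"], ["greeting/fr=Bonjour"])
component.retrain([("greeting/en", "greeting/fr=Salut", "greeting/fr=Bonjour")])
```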
The training algorithm520may utilize machine learning networks and/or models including, but not limited to a deep learning network such as Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Fully Convolutional Networks (FCN) and Recurrent Neural Networks (RNN), probabilistic models such as Bayesian Networks and Graphical Models, and/or discriminative models such as Decision Forests and maximum margin methods, or the like.

In general, any process or operation discussed in this disclosure that is understood to be computer-implementable, such as the process illustrated inFIG.2, may be performed by one or more processors of a computer system, such as any of the systems or devices in the environment ofFIG.1as described above. A process or process step performed by one or more processors may also be referred to as an operation. The one or more processors may be configured to perform such processes by having access to instructions (e.g., software or computer-readable code) that, when executed by the one or more processors, cause the one or more processors to perform the processes. The instructions may be stored in a memory of the computer system. A processor may be a central processing unit (CPU), a graphics processing unit (GPU), or any suitable type of processing unit.

FIG.6depicts an example system600that may execute techniques presented herein.FIG.6is a simplified functional block diagram of a computer that may be configured to execute techniques described herein, according to exemplary embodiments of the present disclosure. Specifically, the computer (or "platform" as it may not be a single physical computer infrastructure) may include a data communication interface660for packet data communication. The platform may also include a central processing unit ("CPU")620, in the form of one or more processors, for executing program instructions. The platform may include an internal communication bus610, and the platform may also include a program storage and/or a data storage for various data files to be processed and/or communicated by the platform such as ROM630and RAM640, although the system600may receive programming and data via network communications. The system600also may include input and output ports650to connect with input and output devices such as keyboards, mice, touchscreens, monitors, displays, etc. Of course, the various system functions may be implemented in a distributed fashion on a number of similar platforms, to distribute the processing load. Alternatively, the systems may be implemented by appropriate programming of one computer hardware platform.

The general discussion of this disclosure provides a brief, general description of a suitable computing environment in which the present disclosure may be implemented. In one embodiment, any of the disclosed systems, methods, and/or graphical user interfaces may be executed by or implemented by a computing system consistent with or similar to that depicted and/or explained in this disclosure. Although not required, aspects of the present disclosure are described in the context of computer-executable instructions, such as routines executed by a data processing device, e.g., a server computer, wireless device, and/or personal computer.
Those skilled in the relevant art will appreciate that aspects of the present disclosure can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, hand-held devices (including personal digital assistants (“PDAs”)), wearable computers, all manner of cellular or mobile phones (including Voice over IP (“VoIP”) phones), dumb terminals, media players, gaming devices, virtual reality devices, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, and the like. Indeed, the terms “computer,” “server,” and the like, are generally used interchangeably herein, and refer to any of the above devices and systems, as well as any data processor. Aspects of the present disclosure may be embodied in a special purpose computer and/or data processor that is specifically programmed, configured, and/or constructed to perform one or more of the computer-executable instructions explained in detail herein. While aspects of the present disclosure, such as certain functions, are described as being performed exclusively on a single device, the present disclosure may also be practiced in distributed environments where functions or modules are shared among disparate processing devices, which are linked through a communications network, such as a Local Area Network (“LAN”), Wide Area Network (“WAN”), and/or the Internet. Similarly, techniques presented herein as involving multiple devices may be implemented in a single device. In a distributed computing environment, program modules may be located in both local and/or remote memory storage devices. Aspects of the present disclosure may be stored and/or distributed on non-transitory computer-readable media, including magnetically or optically readable computer discs, hard-wired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media. Alternatively, computer implemented instructions, data structures, screen displays, and other data under aspects of the present disclosure may be distributed over the Internet and/or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, and/or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme). Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. “Storage” type media include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer of the mobile communication network into the computer platform of a server and/or from a server to the mobile device. 
Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible "storage" media, terms such as computer or machine "readable medium" refer to any medium that participates in providing instructions to a processor for execution.

The terminology used above may be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the present disclosure. Indeed, certain terms may even be emphasized above; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. Both the foregoing general description and the detailed description are exemplary and explanatory only and are not restrictive of the features, as claimed.

As used herein, the terms "comprises," "comprising," "having," "including," or other variations thereof, are intended to cover a non-exclusive inclusion such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. In this disclosure, relative terms, such as, for example, "about," "substantially," "generally," and "approximately" are used to indicate a possible variation of ±10% in a stated value. The term "exemplary" is used in the sense of "example" rather than "ideal." As used herein, the singular forms "a," "an," and "the" include plural reference unless the context dictates otherwise.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
43,690
11861333
Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

Computer applications for code development can provide suggestions for code completion or code synthesis based on a received user input. For example, if a user begins to type in a function call, the computer application may suggest one or more ways to complete the line of code. Further, if a user begins typing code for a loop, such as a "for loop" or a "do while" loop, the computer program can provide one or more suggestions for a script (i.e., multiple lines of code) for completing the loop. These suggestions, known as autofill or auto-complete suggestions, can be generated using machine learning models trained on a large amount of previously received code. However, there are a number of issues that arise when using large-scale machine learning models to generate autofill suggestions for code completion. For one, although machine learning models may be regularly fine-tuned or updated to improve accuracy, the suggestions provided are typically based on an out-of-date code base and consequently less relevant. Further, the code completion suggestions produced using machine learning models are text-based, and are not generated with consideration of the content of the code and/or the suggestions. In other words, the suggestions may be unsuitable because they are not semantically correct or may have incorrect syntax.

Implementations herein include a rule-based semantic checker to check or verify that autofill suggestions are semantically and syntactically correct prior to presentation. The rule-based semantic checker may be communicatively connected to the user's development environment and code base such that the autofill suggestions can be tested against current code associated with the user.

FIG.1illustrates a code suggestion system100including a user device16having a user interface14. The user device16may correspond to any computing device, such as a desktop workstation, a laptop workstation, or a mobile device (i.e., a smart phone). The user device16includes computing resources17(e.g., data processing hardware) and/or storage resources18(e.g., memory hardware). The user device16may be configured to host a development environment112and a programming code base114. The development environment112, also known as an integrated development environment (IDE), is a software application that facilitates software development and creating/editing/compiling/debugging of programming code. The development environment can be specific to a particular language or can be configured to process multiple programming languages simultaneously. In some implementations, the development environment112executes locally on the user device16. The programming code base114may include a data store of source code. For example, the code base114includes multiple files, libraries, modules, etc. that each support or implement one or more programs. In some implementations, the programming code base114includes code generated by one or more users12. Like the development environment112, the programming code base114may be stored locally (e.g., on the memory hardware18) on the user device16or may be stored remotely on a server or in a cloud computing environment150.

In some implementations, the user device16is in communication with a remote system150(also referred to herein as a cloud computing environment) via a network140.
The remote system150may be a single computer, multiple computers, or a distributed system (e.g., a cloud environment) having scalable/elastic resources152including computing resources154(e.g., data processing hardware) and/or storage resources156(e.g., memory hardware). A data store158(i.e., a remote storage device) may be overlain on the storage resources156to allow scalable use of the storage resources156by one or more user devices16or the computing resources154. The remote system150may execute both a machine learning model550and a rule-based semantic checker250(i.e., the machine learning model550and the rule-based semantic checker250may be co-located in the cloud computing environment150). The development environment112may execute locally on the user device16(e.g., on the data processing hardware17) or remotely (e.g., at the remote system150). Likewise, the code base114may be stored locally at the user device16or stored at the remote system150(e.g., at the data store158).

In some implementations, a user12of the user device16enters a user input120representing source code at the user device16via user interface14. For example, the user12types, using a keyboard, "myFunction(" to begin a function call. The user device16may then transmit the user input120to the machine learning model550in real-time (i.e., as or shortly after the user provides the user input120). In response to receiving the user input120, the machine learning model550may be configured to generate one or more autofill suggestions125for code completion based on the user input120(i.e., a continuation or completion of the source code represented by the user input120). The one or more suggestions125generated by the machine learning model550, in some examples, are text-based and may not be indicative of the actual substance of the code (e.g., have correct syntax). Further, the machine learning model550may not be trained on the latest code in the programming code base114. For example, one or more users12may have added code to the code base114since the last time the machine learning model550was updated.

In order to provide relevant suggestions125to the user12, the one or more suggestions125are verified as being semantically and/or syntactically correct by the rule-based semantic checker250prior to being presented to the user12via the user interface14of the user device16. The rule-based semantic checker250may verify each of the one or more suggestions125by performing a number of pre-defined checks for each of the one or more suggestions125. For example, the rule-based semantic checker250may check, for each of the one or more suggestions125, the resolution (e.g., does the referred object exist), the invocation (e.g., the correct number of arguments are passed to the method), the assignability (e.g., is an object of the correct type passed as a parameter), etc. The rule-based semantic checker250may perform the number of pre-defined checks in a pre-defined order, with the most important checks (i.e., the checks that most commonly discover errors) first.
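For illustration only, the following minimal sketch runs resolution, invocation, and assignability checks in a fixed order against a simplified index of the code base. The index, function names, and type labels are assumptions standing in for the structural representation300discussed below.

```python
# Minimal sketch of pre-defined semantic checks run in a pre-defined order
# (most error-prone checks first). A failing check short-circuits the rest.
CODE_BASE_INDEX = {
    "myFunction": ["int", "str"],   # known function -> expected parameter types
}

def resolution_check(name: str) -> bool:
    """Does the referred object exist in the code base?"""
    return name in CODE_BASE_INDEX

def invocation_check(name: str, args: list[str]) -> bool:
    """Is the correct number of arguments passed to the method?"""
    return len(args) == len(CODE_BASE_INDEX.get(name, []))

def assignability_check(name: str, arg_types: list[str]) -> bool:
    """Are the passed arguments of the expected types?"""
    return arg_types == CODE_BASE_INDEX.get(name, [])

def verify(name: str, args: list[str], arg_types: list[str]) -> bool:
    for check in (lambda: resolution_check(name),
                  lambda: invocation_check(name, args),
                  lambda: assignability_check(name, arg_types)):
        if not check():
            return False
    return True

print(verify("myFunction", ["a", "b"], ["int", "str"]))  # True: passes all checks
print(verify("otherFn", ["a"], ["int"]))                 # False: fails resolution
```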
In some implementations, the rule-based semantic checker250performs some or all of the number of checks for each of the one or more suggestions125using a structural representation300of the programming code base114. Optionally, when one or more suggestions125are not correct (e.g., have incorrect syntax), the rule-based semantic checker250provides one or more constraints225to the machine learning model550. In turn, the machine learning model550may generate one or more new suggestions125based on the constraints225and the user input120. In some implementations, the machine learning model550again provides the newly generated suggestions125to the rule-based semantic checker250to verify that the suggestions125are semantically and/or syntactically correct. In this way, the rule-based semantic checker250may iteratively provide the machine learning model550with the constraints225to improve a quality and/or accuracy of the suggestions125. In other implementations, the cloud environment150transmits the new suggestions125generated by the machine learning model550directly to the user device16for display in the user interface14without requiring the rule-based semantic checker250to verify the newly generated suggestions125.

In some implementations, an integration model260facilitates communication (e.g., via an application programming interface (API) or the like) between the machine learning model550and the rule-based semantic checker250. The integration model260may be integrated with the rule-based semantic checker250, as illustrated. In some implementations, the integration model260may be a stand-alone application also co-located within the cloud computing environment150. The rule-based semantic checker250may be programming language specific. That is, the rule-based semantic checker250may be configured to check semantics of suggestions125for a specific programming language. In some implementations, the integration model260is configured to determine or select an appropriate rule-based semantic checker250based on the language of the one or more suggestions125. For example, the integration model260may receive one or more suggestions125, and determine (e.g., via the suggestions125, the user input120, and/or configuration settings) that the suggestions125correspond to a particular programming language (e.g., C++). The integration model260may then transmit the suggestions125to a specific rule-based semantic checker250configured for that particular programming language (such as C++).

The rule-based semantic checker250may perform the number of checks for each of the suggestions125using a structural representation300of the programming code base114. The structural representation300may be an abstract syntax tree, as described in greater detail below with respect toFIG.3. In some implementations, the structural representation300is stored in a memory cache215. By storing the structural representation300in a memory cache215at or near the semantic checker250(e.g., at the remote system150), the rule-based semantic checker250can more quickly retrieve the proper structural representation300based on the code base114, user input120, suggestions125, etc. Because latency is of prime concern in an autofill suggestion system (i.e., suggestions must appear quickly for the user12in order to be useful), storing the structural representation300in a high-speed cache in communication with the semantic checker250allows the semantic checker250to minimize latency. Further, the structural representation300can be updated in the memory cache215so that the semantic checker250determines proper semantics of the suggestions125based on the current or most recent code in the programming code base114. When the suggestions125are incorrect or inappropriate, the rule-based semantic checker250can return one or more constraints225to the machine learning model550to be used in generating new suggestions125.
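As a sketch of the routing and feedback just described, the following example selects a language-specific checker and returns constraints for rejected suggestions. The checker classes, the toy per-language rules, and the "avoid: ..." constraint format are assumptions made purely for illustration.

```python
# Minimal sketch of an integration layer that routes suggestions to a
# language-specific checker and returns constraints for rejected suggestions.
class CppChecker:
    language = "cpp"
    def check(self, suggestion: str) -> bool:
        return suggestion.endswith(";")          # toy stand-in for real rules

class PythonChecker:
    language = "python"
    def check(self, suggestion: str) -> bool:
        return not suggestion.endswith(";")      # toy stand-in for real rules

CHECKERS = {c.language: c() for c in (CppChecker, PythonChecker)}

def route_and_check(language: str, suggestions: list[str]):
    checker = CHECKERS[language]                 # select checker for the language
    approved = [s for s in suggestions if checker.check(s)]
    rejected = [s for s in suggestions if not checker.check(s)]
    # Constraints fed back to the model to narrow its next round of predictions.
    constraints = [f"avoid: {s}" for s in rejected]
    return approved, constraints

approved, constraints = route_and_check("cpp", ["foo(a, b);", "foo(a, b)"])
```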
The system ofFIG.1is presented for illustrative purposes only and is not intended to be limiting. For example, although only a single example of each component is illustrated, any number of components16,112,114,150,550, and250may be communicatively coupled to the system100. Further, although some components are illustrated as being located in a cloud computing environment150, in some implementations those components may be hosted locally on the user device16. Alternatively, although some components are illustrated as being hosted on the user device16, in some implementations those components can be hosted in a cloud computing environment150. Further, in various implementations, some or all of the components112,114,550,250,260, and300are hosted locally on user device16, remotely (such as in cloud computing environment150), or some combination thereof.

FIG.2is an exemplary schematic view200where the machine learning model550and the rule-based semantic checker250are co-located in the cloud computing environment150ofFIG.1. Here, the machine learning model550provides the suggestions125to the rule-based semantic checker250. The rule-based semantic checker250then accesses the memory cache215to determine whether the appropriate structural representation300is available in the cache215. The appropriate structural representation300may be based on the code base114. When the structural representation is available, the semantic checker250retrieves the structural representation300of the programming code base114. As discussed in more detail below, the rule-based semantic checker250determines whether the one or more suggestions125are semantically and/or syntactically correct. In some implementations, when the suggestions125are not semantically and/or syntactically correct, the rule-based semantic checker250returns a number of constraints225that the machine learning model550uses to generate new suggestions125. In other implementations, when at least one suggestion is semantically and/or syntactically correct, the rule-based semantic checker250returns a confirmation that the correct suggestions125can be displayed to the user. In still further implementations, the rule-based semantic checker250returns an indication that one or more suggestions125can be displayed to a user in addition to a number of constraints225for the machine learning model550to use in determining new suggestions125. The integration model260may facilitate the communications between the machine learning model550and the rule-based semantic checker250.

FIG.3is a schematic view of an example structural representation300of the programming code base114. In this example, the structural representation300is in the form of an abstract syntax tree. The example ofFIG.3is a simplified version of an abstract syntax tree for illustrative purposes. Here, the programming code base114ofFIG.1includes a function310made up of a first sub-function320and a second sub-function330. The first sub-function320requires two parameters321,322and the second sub-function330requires a single parameter331. Using the example structural representation300ofFIG.3, a rule-based semantic checker250may quickly perform a number of checks to determine whether suggestions125corresponding to function310are semantically and syntactically correct. For example, function310requires three parameters321,322, and331. Thus, any suggestions125corresponding to function310that have more or fewer than three parameters in the call will be incorrect (i.e., such suggestions125would fail a method invocation check).
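For illustration only, the following minimal sketch models a structural representation like the abstract syntax tree ofFIG.3and applies the parameter-count (method invocation) check to it. The dataclass layout, node names, and parameter types are assumptions made for the example.

```python
# Minimal sketch of a structural representation of a function composed of
# sub-functions, and a method-invocation check against it.
from dataclasses import dataclass, field

@dataclass
class FunctionNode:
    name: str
    parameter_types: list[str] = field(default_factory=list)
    children: list["FunctionNode"] = field(default_factory=list)

    def all_parameter_types(self) -> list[str]:
        types = list(self.parameter_types)
        for child in self.children:
            types += child.all_parameter_types()
        return types

# Function 310 composed of sub-function 320 (two parameters) and 330 (one).
tree = FunctionNode("function310", children=[
    FunctionNode("subFunction320", ["str", "str"]),   # parameters 321, 322
    FunctionNode("subFunction330", ["int"]),          # parameter 331
])

def invocation_ok(node: FunctionNode, arg_count: int) -> bool:
    """Invocation check: the call must pass exactly the required number of parameters."""
    return arg_count == len(node.all_parameter_types())

print(invocation_ok(tree, 3))   # True
print(invocation_ok(tree, 2))   # False -> such a suggestion fails the invocation check
```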
Additionally or alternatively, the structural representation may provide specific information along the branches of the abstract syntax tree regarding components of the code. For example, the parameter331is an integer, while parameter322is a text string. In this example, when one or more suggestions125invoke parameters that are not of the correct type (e.g., a Boolean), the rule-based semantic checker250can quickly determine that those suggestions125are not correct (i.e., such suggestions125would fail an assignability check). In another example, an autofill suggestion125may include "FunctionA(a,b,c)." The rule-based semantic checker250may traverse the structural representation300to determine if such a function exists in the programming code base114. Here, the example structural representation300does not include "FunctionA( )," so the rule-based semantic checker250would determine that the autofill suggestion125of "FunctionA(a,b,c)" is incorrect (i.e., such suggestions125would fail a resolution check).

In some implementations, the rule-based semantic checker250is constrained to determining if the suggestions125are correct within an allotted time budget. As the suggestions125are intended to be displayed to the user in real time, the check must be performed quickly enough that the suggestions125can be presented to the user while still relevant. Accordingly, the rule-based semantic checker250may be constrained to verify suggestions125quickly and efficiently. For example, if the rule-based semantic checker250uses 20 different pre-defined checks for each suggestion125for complete verification, the rule-based semantic checker250may only perform, for each suggestion125, a subset of checks that can be completed in the allotted time budget, such as the top three checks that maximize coverage of finding errors in suggestions125. Additionally, the rule-based semantic checker250may perform checks on one or more suggestions125in parallel to save time.

In another example, when the rule-based semantic checker250receives a large number of suggestions125, it may be inefficient to check each suggestion125individually. One way to expedite the check performed by the rule-based semantic checker250is to group like suggestions125, and only perform a check on a representative suggestion125from each group. For example, the rule-based semantic checker250receives a number of suggestions125that include "Function(a,b)," "Function(y,z)," "Function(a,b,c)," and "Function(x,y,z)" each representing calling a function with two or three parameters. Here, the rule-based semantic checker250may divide the suggestions125into two separate groups. The first group may include "Function(a,b)" and "Function(y,z)" as they each include two parameters in the call. The second group may include "Function(a,b,c)" and "Function(x,y,z)" as they each include three parameters in the call. The rule-based semantic checker250can then verify or compare one suggestion125from each group to determine whether the suggestions are semantically and/or syntactically correct relative to the specific code base114the user12is working within. In the illustrated example ofFIG.3where the structural representation indicates that the function has three parameters321,322,331, the suggestions125from the first group are not semantically correct as the structural representation300requires three (3) parameters for function310, while the suggestions125from the second group have the correct number of parameters in the call.
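For illustration, the following minimal sketch groups like suggestions by parameter count and approves the groups whose representative matches the required count. The regular expression used to count parameters and the grouping key are simplifying assumptions.

```python
# Minimal sketch of grouping like suggestions and checking one representative
# per group instead of every suggestion individually.
import re
from collections import defaultdict

def param_count(suggestion: str) -> int:
    match = re.search(r"\((.*)\)", suggestion)
    args = match.group(1).strip() if match else ""
    return len([a for a in args.split(",") if a.strip()]) if args else 0

def group_and_check(suggestions: list[str], required_params: int) -> list[str]:
    groups: dict[int, list[str]] = defaultdict(list)
    for s in suggestions:
        groups[param_count(s)].append(s)
    approved: list[str] = []
    for count, members in groups.items():
        # The check on the representative (its parameter count) applies to the group.
        if count == required_params:
            approved.extend(members)
    return approved

suggestions = ["Function(a,b)", "Function(y,z)", "Function(a,b,c)", "Function(x,y,z)"]
print(group_and_check(suggestions, 3))   # ['Function(a,b,c)', 'Function(x,y,z)']
```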
In some implementations, when the rule-based semantic checker250cannot check each of the autofill suggestions125provided by the model550within the allotted time budget, the rule-based semantic checker250performs a partial check that maximizes coverage of the autofill suggestions125given the allotted time budget. For example, the rule-based semantic checker250determines groups of autofill suggestions125(as described above) and then determines which groups provide the broadest coverage of the entirety of autofill suggestions125(e.g., groups with the most autofill suggestions125). The rule-based semantic checker250then checks as many groups of autofill suggestions125as possible in the allotted time budget, starting with the largest groups first. In some implementations, the rule-based semantic checker250only returns autofill suggestions125that are confirmed as correct. In other implementations, the rule-based semantic checker250returns all of the autofill suggestions125, regardless of whether each autofill suggestion125has been checked. The above example is for illustrative purposes and is not intended to be limiting.

The rule-based semantic checker250may be a deterministic model based on a finite set of rules and not a machine learning model. In turn, the rule-based semantic checker250can perform any additional or alternative checks based on the finite set of rules to determine if the suggestions125are correct. The finite set of rules can be based on the intended structural representation300of the programming code base114, or on the particular programming language. For example, the rule-based semantic checker250verifies the autofill suggestions125, using the set of rules, against an appropriate representation of the programming code base114(e.g., the structural representation300) to determine whether the autofill suggestions125are semantically and syntactically correct and/or supported in the programming code base114. Further, the rule-based semantic checker250may also perform any additional or alternative checks, based on the set of rules, to determine whether the code is correct based on the determined particular programming language (e.g., C, C++, Java, Python, etc.).

FIG.4illustrates an example sequence diagram400for providing autofill suggestions125for code completion in a development environment112. In some implementations, the steps410-460of the sequence diagram are constrained to be performed within an allotted time budget in order to provide the autofill suggestions125to the user12while the suggestions125are still relevant. When the steps410-460cannot be wholly completed within the allotted time budget, the steps410-460may only be performed on a portion of the autofill suggestions125. The sequence diagram400may begin at step410by receiving the user input120. The user input120may be received via the user interface14of the user device16. In some examples, the user input120is received continuously as a streaming input.
For example, as the user12continues to enter characters while typing, each new character is considered by the machine learning model550when generating autofill suggestions125as each additional character narrows the pool of possible relevant autofill suggestions125. Upon receiving the user input120, at step415, the machine learning model550generates one or more autofill suggestions125based on the user input120. At step420, the machine learning model550sends/provides the autofill suggestions125to the rule-based semantic checker250. At step425, the rule-based semantic checker250determines whether each of the one or more autofill suggestions125is semantically and/or syntactically correct based on the development environment112and/or the programming code base114. For example, the rule-based semantic checker250compares the autofill suggestions125to the structural representation300of the programming code base114. Further, the rule-based semantic checker250may also check that the autofill suggestions125are in a proper form based on the development environment112. For example, when a suggestion125does not conform to the proper syntax, include proper punctuation, etc., then the suggestion125may be determined to be incorrect.

At step430, the rule-based semantic checker250sends feedback to the machine learning model550. In some implementations, the rule-based semantic checker250sends (as feedback) one or more constraints225limiting the subsequent suggestions125generated or predicted by the machine learning model550(i.e., reducing a scope of the possible suggestions the machine learning model550may predict). Optionally, the rule-based semantic checker250sends back (as feedback) a list of approved suggestions125and a list of rejected suggestions125to the machine learning model550. If necessary, based on the feedback, the machine learning model550, at step435, generates one or more new suggestions based on the feedback (e.g., the constraints225) provided by the rule-based semantic checker250. In some implementations, upon generating a new set of autofill suggestions125, the machine learning model550skips to step460and transmits the autofill suggestions125for display in the user interface14executing on the user device16. In other implementations, rather than skipping to step460after generating the one or more new suggestions125at step435, the machine learning model550first sends, at step440, the new autofill suggestions125to the rule-based semantic checker250. Here, at step445, the rule-based semantic checker250checks/verifies the new autofill suggestions125to determine if the suggestions125are semantically correct. At step450, the rule-based semantic checker250once again provides feedback, which can be in the form of another set of constraints225and/or a list of correct suggestions125and a list of rejected suggestions125. In some implementations, when there is at least one correct suggestion125, the machine learning model550transmits the one or more correct suggestions125to the user device16for display. While in this example, at step460the autofill suggestions125are transmitted for display at the user device16, the machine learning model550and the rule-based semantic checker250may continue to generate suggestions125and constraints225for any number of cycles.

In some implementations, the autofill suggestions125are displayed (e.g., via the user interface14) as a list of selectable options in a drop down menu. In other implementations, a single autofill suggestion125is displayed on the user device16. For example, an autofill suggestion125is displayed as a continuation of the user input120(i.e., as a continuation of a sequence of text entered by the user12) but in a different color, font, size, etc. than the original user input120. In this example, the autofill suggestion125appears to automatically complete the code for the user, and the user can either accept or reject the suggestion125using further inputs (e.g., if the user hits the "enter" key of a keyboard the autofill suggestion125will be accepted, while if the user continues to type the autofill suggestion125will disappear or be replaced with a new autofill suggestion125).
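For illustration, the following minimal sketch condenses the generate/check/constrain cycle ofFIG.4into a single loop. The stand-in model and checker callables, the "avoid: ..." constraint format, and the cycle limit (standing in for the allotted time budget) are assumptions introduced only for the example.

```python
# Minimal sketch of the generate/check/constrain cycle: the model proposes
# suggestions, the checker approves or rejects them, and rejections become
# constraints for the next round of generation.
def suggestion_cycle(user_input: str, model, checker, max_cycles: int = 2) -> list[str]:
    constraints: list[str] = []
    suggestions = model(user_input, constraints)              # step 415
    for _ in range(max_cycles):
        approved, rejected = checker(suggestions)             # steps 425/445
        if approved:
            return approved                                   # step 460: display
        constraints += [f"avoid: {s}" for s in rejected]      # step 430 feedback
        suggestions = model(user_input, constraints)          # step 435
    return suggestions   # fall back to the latest suggestions if the budget is spent

# Toy stand-ins to exercise the cycle.
model = lambda text, constraints: [s for s in ["foo(a)", "foo(a,b)"]
                                   if f"avoid: {s}" not in constraints]
checker = lambda suggestions: ([s for s in suggestions if s.count(",") == 1],
                               [s for s in suggestions if s.count(",") != 1])
print(suggestion_cycle("foo(", model, checker))   # ['foo(a,b)']
```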
FIG.5illustrates an exemplary training process500for training the machine learning model550. In some implementations, the process500employs a two-step training technique. First, the machine learning model550is pre-trained on a large set of code to produce a base model. The machine learning model550may then be trained in an iterative fashion based on additional user inputs and feedback. For example, the process500starts with pre-training the machine learning model550using pre-training data505. Pre-training a model is a technique used for initializing a model which can then be further fine-tuned based on additional training data510. For the machine learning model550, pre-training may include initializing the machine learning model550with pre-training data505including a large data set of previously written code in one or more programming languages. The process500can then fine-tune parameters of the machine learning model550.

The training process500may include feeding training input510to the machine learning model550. In some implementations, the training input510includes inputs from one or more users, such as new code. Upon receiving the training input510, the machine learning model550may generate an output515(e.g., an autofill suggestion125). The training inputs510may include some or all of the code base114. In some implementations, the output515is used by a loss function530to generate a loss540. The loss function530compares the output515and a label520to generate the loss540, where the loss540indicates a discrepancy between the label520(i.e., ground truth representation of the code) and the output515(i.e., the autofill suggestion). The loss function530may implement any suitable technique to determine a loss such as regression loss, mean squared error, mean squared logarithmic error, mean absolute error, binary classification, binary cross entropy, hinge loss, multi-class loss, etc. The loss540may then be fed directly to the machine learning model550. Here, the machine learning model550processes the loss540and adjusts one or more parameters of the machine learning model550to account for the loss540.

In some implementations, the training process500occurs in real time. In other words, a user may enter an input in a development environment, which is received as training input510. The machine learning model550may produce one or more autofill suggestions125(i.e., output515) which are provided to the user12in response to the user input120. The user12may then either accept one of the autofill suggestions125or complete the user input120manually. The resulting final code (i.e., the output515or the completion entered by the user12) may be used to label520additional training inputs510for the machine learning model550. The loss function530may then generate the loss540based on the label520and the output515.
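For illustration, the following minimal sketch shows a single fine-tuning step in which a model output is compared to a label by a loss function and the loss drives a parameter update. The one-weight "model," the squared-error loss, and the learning rate are assumptions chosen only to keep the example self-contained.

```python
# Minimal sketch of one training step: compute an output, compare it to the
# label with a loss function, and adjust the model parameter to reduce the loss.
def loss_fn(output: float, label: float) -> float:
    return (output - label) ** 2          # squared error for a single example

def training_step(weight: float, training_input: float, label: float,
                  learning_rate: float = 0.1) -> float:
    output = weight * training_input                     # model output (515)
    loss = loss_fn(output, label)                        # loss (540)
    gradient = 2 * (output - label) * training_input     # d(loss)/d(weight)
    return weight - learning_rate * gradient             # adjust the parameter

weight = 0.0
for _ in range(20):
    weight = training_step(weight, training_input=2.0, label=4.0)
print(round(weight, 2))   # approaches 2.0, so the output approaches the label
```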
FIG.6is a flowchart of an exemplary arrangement of operations for a method600for providing autofill suggestions for code completion in a development environment. The method600may be performed, for example, by various elements of the system100ofFIG.1. For instance, the method600may execute on the data processing hardware154of the remote system150, the data processing hardware17of the user device16, or some combination thereof. At operation610, the method600includes obtaining, from a user interface14executing on a user device16, a user input120representing source code generated within a development environment112, the source code created using a particular programming language and a programming code base114. At operation620, the method600includes determining, using a machine learning model550, an autofill suggestion125based on the user input120. The autofill suggestion125continues the source code represented by the user input120. At operation630, the method600includes determining, using a rule-based semantic checker250configured for the particular programming language, whether the autofill suggestion125is semantically correct based on the development environment112and the programming code base114. At operation640, the method600includes, when the autofill suggestion125is semantically correct, transmitting the autofill suggestion125for display on the user interface14of the user device16.

FIG.7is a schematic view of an example computing device700that may be used to implement the systems and methods described in this document. The computing device700is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.

The computing device700includes a processor710, memory720, a storage device730, a high-speed interface/controller740connecting to the memory720and high-speed expansion ports750, and a low speed interface/controller760connecting to a low speed bus770and a storage device730. Each of the components710,720,730,740,750, and760, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor710can process instructions for execution within the computing device700, including instructions stored in the memory720or on the storage device730to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display780coupled to high speed interface740. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices700may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).

The memory720stores information non-transitorily within the computing device700. The memory720may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory720may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device700.
Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes. The storage device730is capable of providing mass storage for the computing device700. In some implementations, the storage device730is a computer-readable medium. In various different implementations, the storage device730may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory720, the storage device730, or memory on processor710. The high speed controller740manages bandwidth-intensive operations for the computing device700, while the low speed controller760manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller740is coupled to the memory720, the display780(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports750, which may accept various expansion cards (not shown). In some implementations, the low-speed controller760is coupled to the storage device730and a low-speed expansion port790. The low-speed expansion port790, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device700may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server700aor multiple times in a group of such servers700a, as a laptop computer700b, or as part of a rack server system700c. Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. 
As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The processes and logic flows described in this specification can be performed by one or more programmable processors, also referred to as data processing hardware, executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. A software application (i.e., a software resource) may refer to computer software that causes a computing device to perform a task. 
In some examples, a software application may be referred to as an “application,” an “app,” or a “program.” Example applications include, but are not limited to, system diagnostic applications, system management applications, system maintenance applications, word processing applications, spreadsheet applications, messaging applications, media streaming applications, social networking applications, and gaming applications. The non-transitory memory may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by a computing device. The non-transitory memory may be volatile and/or non-volatile addressable semiconductor memory. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
39,411
11861334
DETAILED DESCRIPTION OF EMBODIMENTS Conventionally, software developers, configuration managers, deployment managers, and other users of a computing environment may subscribe to certain cloud services to facilitate development, configuration, and deployment of software applications and storage of associated files. A cloud service that is configured for software application or process flow development, management, and/or deployment is called a Process Cloud Service (PCS) herein. A process cloud service may employ a networked database to store files and other objects used by a given software program being developed. Server-side development environments may be accessible to developers via a browser. The development environments may be backed by the PCS, such that developed software application files are stored in the PCS database. For the purposes of the present discussion, a computing environment may be any collection of computing resources used to perform one or more tasks involving computer processing. A computer may be any processor in communication with a memory. A computing resource may be any component, mechanism, or capability or quantities thereof of a computing environment, including, but not limited to, processors, memories, software applications, user input devices, and output devices, servers, and so on. An enterprise computing environment may be any computing environment used for a business or organization. An example enterprise computing environment includes various computing resources distributed across a network and may further include private and shared content on Intranet Web servers, databases, files on local hard discs or file servers, email systems, document management systems, portals, and so on. A given software application may include (but not necessarily) constituent software applications or modules (e.g., services, functions, procedures, computing objects, etc.). Accordingly, the term “software application” may also include networked software applications or integrated groups thereof. Certain embodiments discussed herein are particularly useful for development, deployment, and implementation of webpage code and associated REST interfaces and services. For the purposes of the present discussion, a web service may be any computer code and associated functionality that is adapted to be called by an application or other service or process whose code is stored in a separate location (e.g., on another computer or memory storage location or device) from the web service. Accordingly, the term “service” as used herein is relatively broad and may include remotely accessible APIs and services characterized by Web Services Description Language (WSDL) interfaces, Simple Object Access Protocol (SOAP), REpresentational State Transfer (REST), YAML (Yet Another Markup Language), and/or other types of interfaces. Generally, web services, also simply called services herein, provide functionality, e.g., capabilities, that may be reused by different applications, processes, or web services (that may be distributed across a network), which access the functionality via a service interface (e.g., WSDL interface) consistent with a description of the web service. A web services model may represent a loosely coupled integration model for allowing flexible integration of various network-distributed applications or processes. Business process-based software applications are often modeled using Business Process Model and Notation (BPMN). 
Software development tools for enabling business users, developers, designers, and so on, may provide features for interacting with and/or manipulating BPMN graphical representations of the process-based software application during development of the application. A software system may be any collection of computing resources implementing machine-readable instructions, i.e., computer code. Accordingly, the term “software system” may refer to a software application, and depending upon the context in which the term is used, may further refer to the accompanying computer(s) and associated computing resources used to run the software application. Depending upon the context in which the term is used, a software system may further include hardware, firmware, and other computing resources enabling running of the software application. Note that certain software systems may include collections of disparate services, which are implemented in particular sequences in accordance with a process template and accompanying logic. Accordingly, the terms “software system,” “system,” and “software application” may be employed interchangeably herein to refer to modules or groups of modules or computing resources used for computer processing. For clarity, certain well-known components, such as hard drives, processors, operating systems, power supplies, routers, Internet Service Providers (ISPs), workflow orchestrators, process schedulers, Tenant Automation Systems (TASs), certain web services, virtual machines, middleware, enterprise databases, MetaData Services (MDS) modules, and so on, are not necessarily explicitly called out in the figures. However, those skilled in the art with access to the present teachings will know which components to implement and how to implement them to meet the needs of a given implementation. FIG.1is a first block diagram illustrating a conceptual view of first example system10and accompanying computing environment that facilitates enabling developers to create REpresentational State Transfer (REST) interface connectors40(also simply called REST connectors herein) using an interface model26that enables defining of a REST API or service28using a list of resources and corresponding operations. For the purposes of the present discussion, a connector may be any mechanism that facilitates interfacing or otherwise facilitates intercommunications between different software components and/or computing resources. Connectors discussed herein generally include added functionality over conventional software components used to make web service calls, where the added functionality may be visualized as a layer of abstraction that simplifies developer use of external services for implementing steps of process-based software applications. Note that the REST connector40acts as a first interface between one or more steps of a process-based software application implemented by the webpage code24, and one or more REST APIs or services28called by the webpage code24. The REST connector(s)40include code for translating service or API calls of a process step into messages that conform to acceptable message types and protocols, as defined by an exposed interface description (e.g., as may be provided by a WADL file) for the REST API or service28. Note that the overall system10may represent a networked computing environment, such as a networked enterprise computing environment. 
Furthermore, note that in general, groupings of various modules of the system10are illustrative and may vary, e.g., certain modules may be combined with other modules or implemented inside of other modules, or the modules may otherwise be distributed differently (than shown) among a network or within one or more computing devices or virtual machines, without departing from the scope of the present teachings. The example system10includes one or more client systems12running client-side software18, e.g., a browser, which is usable to generate REST requests20and to process corresponding responses22. Note that alternatively, or in addition, the webpage code24generates REST requests (e.g., for data from a backend database32) to one or more REST APIs or services28, and processes responses, in response to user interaction with the webpage code24via the browser18. Accordingly, the REST request generation20may arise, for example, from certain user interaction with a BPMN (Business Process Model and Notation) process represented by a graphical BPMN process flow. Data retrieved, e.g., by the REST APIs or service28, from the enterprise applications (e.g., the enterprise database32) may be user accessible via the browser18after response processing by the webpage code24, and further processing (e.g., rendering)22by the client-side browser18. Accordingly, in the present example embodiment, requests20are sent from the client system(s)12to a server system14, where they are processed in accordance with web page code24, which may employ the REST API28to interface with software applications16and accompanying databases32to retrieve and/or otherwise interact with or change content provided via the applications16, e.g., enterprise applications and accompanying databases32. In the present example embodiment, a developer system34facilitates development of webpage code24and associated REST connector(s)40and REST API(s)28using a connector editor36, which may represent a full-featured JS/HTML/CSS editor. Note that while the REST connector40and the REST APIs28are shown as separate blocks inFIG.1, that conceptually, a developer or other user may view or understand the REST connector40as representing the REST API28, such that it may be considered part of the REST API28. Accordingly, when developing process-based software applications, a developer may employ the developer software34to assign different REST connectors to different flow elements of a process-based software application, for use thereby in accessing data and/or functionality provided by the REST API or service associated with and described by the REST connector40. The example connector editor36includes a REST API connector definer38, which facilitates enabling a developer or other user to define the REST connectors40(and consequently define the interfaces between the REST API(s)28and steps of a process-based software application implemented via the webpage code24). Conventionally, a developer may hand-code the REST interface definition40, e.g., using an interface modeling language, e.g., RAML (RESTful API Modeling Language), YAML (Yet Another Markup Language), WADL (Web Application Description Language), etc. However, such tasks can be tedious, especially when such conventional RAML, YAML, or WADL interfaces can be characterized by relatively complicated models that may be difficult for a developer to visualize or work with. 
Accordingly, the REST API connector definer38includes computer code that implements a simplified interface description model26characterized by a flat structure represented by a list of available resources30(also simply called resource list) and associated operation(s). The flat structure of the resource list30is used by the REST API connector definer38to back a set of simplified UI display screens provided by the connector editor36. The UI display screens are said to be simplified, as users, e.g., developers, interacting therewith are presented with options to interact with the design of the REST connector40as though the REST connector has been simplified to (or abstracted to) a description of its resources and operations. The remaining technical details involved in the actual WADL description of the REST APIs or services28are automatically handled by the connector editor36, as the edited simplified model is mapped back into the more complex WADL description file that is saved with the deployed REST connector40and accompanying webpage code24. Note that while the client-side browser18currently illustrates content20,22from the perspective of an end user that is accessing the webpage code24to participate in a process-based software application implemented thereby, that instead (or in addition), the client-system may represent a developer system, whereby the browser18is used to browse to a site that provides a development environment (e.g., as may be hosted on a process cloud) backed by the developer software34. The REST API connector definer38further includes computer code for rendering developer UI display screens (as discussed more fully below with reference toFIG.4) with various UI controls and associated options for enabling the developer to treat the REST connector40as though it is actually encoded directly using the flat structure30and associated REST interface model26. For the purposes of the present discussion, UI display screen may be any software-generated depiction presented on a display. Examples of depictions include windows, dialog boxes, displayed tables, and any other graphical user interface features, such as user interface controls, presented to a user via software, such as a browser. A user interface display screen contained within a single border is called a view or window. Views or windows may include sections, such as sub-views or sub-windows, dialog boxes, graphs, tables, and so on. In certain cases, a user interface display screen may refer to all application windows presently displayed on a display. A UI control may be any displayed element or component of a user interface display screen, which is adapted to enable a user to provide input, view data, and/or otherwise interact with a user interface. Additional examples of user interface controls include buttons, drop down menus, menu items, tap-and-hold functionality, and so on. Similarly, a user interface control signal may be any signal that is provided as input for software, wherein the input affects a user interface display screen and/or accompanying software application associated with the software. As a developer enters information into UI display screens of the connector editor36, for defining a particular REST API, the description may populate the REST API model26, e.g., by specifying resources for an interface and associated operation(s) that apply to those resources. 
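For illustration only, a populated flat interface model of the kind just described might be represented along the following lines; the Python class and field names are assumptions made for this sketch and do not appear in the disclosure.

# Hypothetical data-model sketch of the flat REST editor structure: a base URL
# plus a single level of resources, each carrying operations and parameters.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    verb: str                                    # action type, e.g. "GET" or "POST"
    request_repr: str = ""                       # representation of data to be sent
    response_repr: str = ""                      # representation of data to be received
    parameters: List[str] = field(default_factory=list)

@dataclass
class Resource:
    path: str                                    # base path within the connector
    operations: List[Operation] = field(default_factory=list)

@dataclass
class RestConnectorModel:
    base_url: str                                # container URL for the connector
    resources: List[Resource] = field(default_factory=list)

model = RestConnectorModel(
    base_url="https://api.sample.us/container",
    resources=[Resource(path="items",
                        operations=[Operation(verb="GET"), Operation(verb="POST")])],
)
print(model)

The point of the flat shape is that a connector is fully described by one base URL and one list of resources, which is what the simplified UI display screens expose to the developer.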
The REST API connector definer38then runs (or causes running of) computer code for translating, mapping, or otherwise converting the populated REST API model26into the REST connector(s)40for use in the computing environment10to enable end users to employ the client systems12to interact with a backend database32(and/or other computing resources) via the webpage code24, REST connector(s)40and corresponding REST API(s)28. Note that the actual REST connector40may be automatically encoded by the REST API connector definer38using a conventional interface description language, e.g., Web Application Description Language (WADL). Furthermore, note that the developer software34may be part of or may otherwise leverage other computing resources, i.e., other than a desktop developer computer. For example, the developer software34may further include or otherwise leverage computing resources of a process cloud for hosting a server-side development environment, which may support running the connector editor36. As set forth above, the connector editor36uses the REST API model abstraction26(or alternative interface description structure) to facilitate automatically generating the REST connector(s) in response to developer input defining resources and operations available via the REST API(s)28and associated REST connector(s)40. Note that while the REST API(s)28are shown as part of the same server system14as the webpage code24, embodiments are not limited thereto. The REST API(s) or service(s)28may run on a separate server system or cloud (that is still accessible for calls by the webpage code24and REST connector(s), e.g., via the Internet), without departing from the scope of the present teachings. FIGS.2A-2Brepresent a diagram that is split between two sheets and illustrates a first example mapping54between a first example Web Application Description Language (WADL) REST interface description50(as shown inFIG.2A) and a first example substantially flat REST structure52(as shown inFIG.2B) that characterizes the resource list30and accompanying interface model26ofFIG.1. Note that depending upon the needs of a given implementation, a WADL definition50(FIG.2A) can be converted into a REST editor flat structure model52(FIG.2B) and vice versa using properties of the mapping54ofFIGS.2A-2B, i.e., the direction of the mapping54shown inFIGS.2A-2Bis illustrative and can readily be reversed to meet the needs of a given implementation. With reference toFIGS.1and2, the mapping54may be implemented via the REST API connector definer38ofFIG.1, which also includes computer code for enabling not just defining REST API connectors in terms of the resource list30, but also translating the defined resource list30used for a given REST API connector into a WADL (or other type of) description for deployment of the REST API connector40ofFIG.1for use in facilitating interfacing the REST API(s)28with the webpage code24ofFIG.1. InFIG.2A, the example WADL definition50conforms to an example WADL specification that is discussed more fully below with reference toFIG.3. Note that the WADL definition50includes multiple levels of resources58-62within a resource container56. The various levels of resources58-62include various nested resources, where the lowest level resources80-84at a third resource level62are used for accessing various methods86, also called operations, at a lowest level64of the WADL definition. 
The example WADL definition includes a top-level resource container66, which is associated with a container Uniform Resource Locator (URL), e.g., “https://api.sample.us” which represents a network path to the resource container66. A first resource level58includes a first resource68(accessible via the “https://api.sample.us/container” path). The first resource includes additional resources70-78nested within the first resource68. Similarly, the different groups of resources80,82,84in the third resource level62are accessible via parent resources74,76,78of the second resource level respectively. Similarly, the different groups of resources80,82,84in the third resource level62may be used to access various methods86of the operations level64. Note that one resource70of the second resource level60may be used to access one of the methods (i.e., operations)86of the methods or operations level64. Accordingly, note the complexity of the WADL structure50characterizing an example REST API to be accessed via one or more steps of a process-based software application. A developer attempting to use the methods86may have difficulty understanding how to specify the endpoints to those methods86; how to configure the calls to those methods86, and so on. Furthermore, note that in certain WADL interface descriptions, a given REST API WADL definition may exhibit not just nesting, but various loops and cycles. Accordingly, certain embodiments herein collapse the structure characterizing the WADL definition50into one level of resources, e.g., corresponding to the one resource level98inFIG.2B. This is why the structure52ofFIG.2Bis called a flat structure, i.e., it includes one level98of resources104-110. This helps to enable a user working with a REST connector editor (e.g., the editor36ofFIG.1) to understand or visualize the associated software application providing the REST API (e.g.,28ofFIG.1) for enabling access to the operations86, as a list of resources104-110that are usable to access associated operations86. The flat structure52ofFIG.2Bis analogous to the resource list30ofFIG.1. Note that in the flat structure52ofFIG.2B, the first two levels56,58of the WADL definition have been collapsed, such that the new container102represents a REST connector representing the combination of the resource container66and first resource68ofFIG.2A. Other resource levels60-62of the WADL definition ofFIG.2Aare similarly collapsed into the single level98of resources104-110inFIG.2B, as part of the mapping operation54. FIG.3is a diagram illustrating a second example mapping114between a second example generalized WADL REST interface specification120and a second example REST flat structure specification122. With reference toFIGS.2A-2B and3, note that the example WADL definition50ofFIG.2A(for an example REST API) has been constructed in accordance with the example WADL specification120, which describes a software application used to implement a REST API or service. The WADL specification120ofFIG.3includes multiple layers of resources provided by the described WADL application, which is described by its corresponding interface definition124. An example of such interface definition124is provided as the WADL definition50inFIG.2A. The WADL specification120includes multiple levels of resources126, which are characterized by different paths126, and which may be used to access one or more operations130. 
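Before turning to the remaining details of the specification ofFIG.3, the collapse of nested resource levels into a single-level list, as just described forFIGS.2A-2B, can be illustrated with a rough, hypothetical sketch; the dictionary layout below is assumed for illustration and is not the patented editor code.

# Illustrative-only sketch of flattening a nested, WADL-style resource tree
# into a single-level list of (full path, operations) pairs.
def flatten_resources(resource, base_path=""):
    """Yield (full_path, methods) pairs for every nested resource."""
    if base_path:
        path = base_path + "/" + resource["path"].strip("/")
    else:
        path = resource["path"].rstrip("/")
    if resource.get("methods"):
        yield path, resource["methods"]
    for child in resource.get("resources", []):
        yield from flatten_resources(child, path)

wadl_like = {                                    # container URL with nested resource levels
    "path": "https://api.sample.us",
    "resources": [{
        "path": "container",
        "resources": [
            {"path": "items", "methods": ["GET", "POST"]},
            {"path": "items/{id}", "methods": ["GET", "PUT", "DELETE"]},
        ],
    }],
}

for full_path, methods in flatten_resources(wadl_like):
    print(full_path, methods)

Each nested resource is reduced to one entry pairing a full path with its operations, which is the single-level form that the flat structure presents to the developer.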
The different operations130(e.g., Request and Response operations) may also contain parameters and representations of data to be sent and/or received. The WADL specification120includes various connecting lines128, which show nesting and/or looping relationships between elements of the WADL specification120. The second example mapping operation114is analogous to the mapping operation54ofFIGS.2A-2B, and illustrates a generalized approach for implementing the mapping114, which may be bi-directional as needed. The WADL structure120may be transformed by the mapping114into the simplified REST editor flat structure122. Note that the flat structure122is characterized by a base URL134and a single resource level136with a base path to a particular operation138that is described by a particular action type. The operation may be a request and/or response operation that may include a representation of data to be sent and/or received140, and one or more configurable parameters142. Accordingly, the flat structure122can be used by the REST connector editor36(also simply called REST editor herein) ofFIG.1to describe constructed REST connectors. The resulting description can then be transformed back into a WADL definition in accordance with the WADL structure120using the mapping114. FIG.4is a block diagram illustrating a second example system150and accompanying functions (including WADL structure mapping)156,158,162performed by (or otherwise supported by) the REST connector editor38, and illustrating communications between the REST connector editor38and a client developer system152and accompanying UI display screen(s)162that are used to define specific REST connectors160usable for implementing flow elements (e.g., steps) of a process-based software application. The example REST editor38(also called REST connector editor or REST API connector definer or editor inFIG.1) includes a controller154, which includes code for facilitating interfacing the different modules26,50,156-162shown inFIG.4. The controller154further includes code for implementing connector wizards (and/or for calling other code to implement the wizards); and code for communicating with a rendering module156for generating rendering instructions for various UI display screens and accompanying UI controls162(e.g., for wizards), which may be accessible to the developer systems152via a browser. Note that code156for generating REST editor UI rendering instructions may include code for implementing various wizards, e.g., connector configuration wizards, connector construction wizards, connector deployment wizards, and so on. Furthermore, the REST structure transformer158may include code for transforming a WADL definition50describing a particular REST API into the flat structure26and vice versa, in accordance with the example mappings54ofFIGS.2A-2B and114ofFIG.3. REST connectors164that are built using the REST editor38may be stored in the connectors catalog160. The connectors catalog160and accompanying connectors164stored therein may be accessed by other authorized users of the system150for use in building process-based software applications. Note that various modules156,158of the REST editor38, and the connectors catalog160, may be implemented, in part, using functionality that is implemented outside of the REST editor module38, without departing from the scope of the present teachings. 
For example, the REST editor38may include code for calling modules provided by a connectors framework to implement various connector-related functions as needed. For example, the mappings implemented via the REST structure transformer158may be implemented in response to a call from the REST editor38to one or more functions in a connectors framework, which are built to transform WADL structures and associated definitions into flat structures usable by the REST editor38to facilitate construction of the various UI controls162and associated connector wizards. For the purposes of the present discussion, a connectors framework may be any reusable set of data and/or software functionality for implementing one or more functions pertaining to connectors. Accordingly, a connectors framework may represent a type of software framework. Example functions pertaining to connectors include implementing tools to facilitate user construction, configuration, deployment, and use of connectors. The functionality provided by a connectors framework may be distributed across a network, i.e., implemented via one or more servers. Note that the REST editor38ofFIG.4may be coupled with (i.e., in communication with) additional modules in a process cloud environment, without departing from the scope of the present teachings. For example, the REST editor38may incorporate functionality for reading files comprising a process-based software application so as to facilitate identifying and further configuring or otherwise modifying any connectors packaged with the process-based software application. FIG.5is a flow diagram illustrating an example user (e.g., developer) experience flow170when using the systems10,150ofFIGS.1and4to create and use a REST connector in a process-based software application. The user experience flow170may be performed by a developer or designer of a process-based software application. With reference toFIGS.4and5, the user experience flow170includes initially creating a REST connector172. REST connector creation172may involve using one or more of the UI controls162and associated functionality of the REST editor38ofFIG.4to create an initial connector container and specify a connector name. This step172may correspond to constructing the first level or layer96of the REST editor flat structure52ofFIG.2B. Next, in a resource-specifying step174, the user employs the one or more of the UI controls162and associated functionality of the REST editor38ofFIG.4to specify one or more resources for the REST API connector. This can be analogous to specifying the second level98of the example REST editor flat structure52ofFIG.2B. Next, in a resource-specifying step174, the user employs the one or more of the UI controls162and associated functionality of the REST editor38ofFIG.4to specify one or more operations for the REST API connector being developed. This can be analogous to specifying the third level100of the example REST editor flat structure52ofFIG.2B. A subsequent importation step178involves importing any requisite information about a particular connector type (e.g., information specifying any data mappings). The additional information about the connector type may include information about different file types, protocols, default security configuration information, and so on, for a particular type of connector. 
The information about the particular connector type (e.g., where the type may be REST API) may be imported from the connectors library (or elsewhere) as a JavaScript Object Notation (JSON) file, which may be deserialized (from JSON and converted into an object) and serialized (converted from the object back into a JSON file) as needed to meet the needs of a given implementation. Next, a connector-saving step180includes saving the resulting connector definition. The connector definition may be saved in the connectors catalog160ofFIG.4. In the present example embodiment, when the connector164is saved to the catalog160ofFIG.4, its flat structure26is converted back into a WADL structure50, e.g., for subsequent deployment and use with a deployed process-based software application. Next, an implementation step182includes using the connector to implement an activity (e.g., a process step corresponding to a BPMN flow element) in a process-based software application. FIG.6is a flow diagram of a first example method190implementable via the embodiments ofFIGS.1-4. The example method190facilitates defining a REST interface in a computing environment, where in this particular context, the REST interface corresponds to (or is otherwise implemented using) a REST connector. Note that, depending upon the context in which the term is used, the term “REST interface” may refer to a REST connector that acts as an interface between a process step and a REST API or service called by the process step. Alternatively, when separate code of a process step calls the REST connector to then call the service, one REST interface may refer to the interface between the code of the process step and the REST connector, and another REST interface may refer to the interface between the REST connector and the REST API or service called by the REST connector. The example method190includes a first step192, which involves modelling a REST interface description using a model that represents an abstraction of, or alternative version of, a second REST interface description. The alternative version of the second REST interface description is represented by, or exemplified by, the flat structure52ofFIG.2Band the flat structure specification122ofFIG.3. For example, with reference toFIG.3, the flat REST structure122represents an abstraction of the more complicated WADL structure120. Similarly, the example flat structure52ofFIG.2Brepresents an abstraction of the more complicated WADL definition50ofFIG.2A. A second step194includes employing the model to enable modelling a REST interface description as a list of resources containing operations. For example, the list of resources can correspond to the list30ofFIG.1; the row or level of resources98ofFIG.2B; the resource136ofFIG.3; and the flat structure26ofFIG.4. A third step196includes providing one or more UI controls (e.g., the UI controls162ofFIG.4) to enable a developer to specify one or more REST interfaces (e.g., REST API connectors) using the model without manually specifying the description via hand coding of an interface description language (e.g., RAML, YAML, WADL, etc.). A fourth step198includes using the one or more specified REST interfaces (e.g., REST API connectors) to automatically generate an interface description (e.g., WADL definition40ofFIG.1; WADL definitions inherent in50,164ofFIG.4, etc.) in an interface description language. Note that the method190may be altered, without departing from the scope of the present teachings. 
For example, the method190may further specify that the fourth step198further includes using the model to facilitate specification of a REST interface using a REST flat structure; incorporating one or more parameters (e.g., parameters142ofFIG.3) specified by a developer during manipulation of the one or more UI controls (e.g., UI controls162ofFIG.4) into the REST flat structure (e.g., flat structure26ofFIG.4) defined by the model; and employing the REST flat structure and accompanying developer-specified parameters to generate an interface description (e.g., corresponding to a REST connector) in an interface description language (e.g., RAML, YAML, WADL, etc.) suitable for implementing a REST API connector (e.g., corresponding to the connector40ofFIG.1) in a target computing environment (e.g., the computing environment of the server system14ofFIG.1). The one or more UI controls may include a first user option to define (e.g., a UI control for defining) one or more resources and one or more corresponding operations within the one or more resources (as illustrated inFIGS.2A-2BandFIG.3, where the operations86ofFIG.2Bare considered to be “within” the resources104-110, and where inFIG.3, the operation138is considered to be “within” the resource136). The one or more UI controls may further include a second user option to create one or more business objects from a JavaScript Object Notation (JSON) instance or schema, thereby simplifying user integration tasks. The second user option may include a UI control enabling a developer to paste a JSON response of a call to a REST API, resulting in a pasted JSON response, and then transforming the pasted JSON response into a business object, as illustrated by the sketch following this paragraph. Note that in the present example embodiment, when the REST editor (e.g., the REST editor38ofFIG.4) works with an interface description or definition (e.g., WADL definition), e.g., to build a REST connector, it works using the flat REST structure (e.g., the flat structure26ofFIG.4). When the REST editor saves the interface description, it saves the associated REST connector to a format characterized by a description language (e.g., WADL) consistent with that used by the REST API or service that the REST connector will be used to communicate with, i.e., to call and receive data and/or otherwise use functionality afforded by the REST API or service. Accordingly, the example method190may further include using the REST connector to interface a process step of a process-based software application with a REST API or web service to be called by the process step. The process step may correspond to a flow element in a Business Process Model and Notation (BPMN) model of the process-based software application. FIG.7is a flow diagram of a second example method270implementable via the embodiments ofFIGS.1-5, wherein the second example method270is from the perspective of a REST connector and/or connectors framework used by a REST editor to implement various functions thereof. Note that in general, various embodiments discussed herein pertaining to use of a layer of abstraction (e.g., implemented via a REST connector as discussed herein) between a REST service (e.g., corresponding to the REST API(s)28ofFIG.1) and a step of a process-based software application (e.g., represented by the webpage code24ofFIG.1) can greatly facilitate REST connector setup, use, and configuration changes. 
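As a hedged illustration of the second user option noted above (pasting a JSON response and deriving a business object from it), a minimal sketch might infer field names and types from a JSON instance as follows; the function name and output format are hypothetical and are not the actual editor behavior.

# Illustrative-only sketch: derive a simple business-object description
# (field name -> inferred type) from a pasted JSON response.
import json

def infer_business_object(json_text, name="BusinessObject"):
    instance = json.loads(json_text)
    fields = {key: type(value).__name__ for key, value in instance.items()}
    return {"name": name, "fields": fields}

pasted_response = '{"id": 42, "title": "Order", "total": 99.5, "open": true}'
print(infer_business_object(pasted_response, name="Order"))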
The second example method270represents a communications method for facilitating communications between a process step of a process-based software application and a REST service (e.g., using a REST connector). The second example method270includes an initial receiving step272, which includes receiving, at a first interface between a process step and the REST service, a first indication of an operation to be performed by a resource of the REST service. For example, if the first step272is implemented pre-deployment (e.g., before deployment of a process-based software application that will use a REST connector to have an operation performed for the process-based software application using a call to an external REST API or service), then the first indication may correspond to a specification of the operation138ofFIG.3responsive to user specification of the operation via one or more of the UI controls162ofFIG.4. In cases where the method270is implemented post-deployment, the indication of the operation to be performed is received by the REST connector (e.g., REST connector40ofFIG.1) from a process step of the webpage code24, wherein the process step is to call the REST connector40to implement the operation using the REST API28ofFIG.1. A subsequent second receiving step274includes receiving a second indication of the resource of the REST service that is to perform the operation on behalf of the process step. For example, if the second step274is implemented pre-deployment, the second indication may correspond to an indication of the resource136ofFIG.3responsive to user specification of the resource via the one or more UI controls162ofFIG.4. In cases where the method270is implemented post-deployment, the indication of the resource used to implement the operation may be passed from a process step, e.g., of the webpage code24ofFIG.1, to the REST connector40, which may then package the information about the resource and operation into a request to be sent to the REST API28ofFIG.1. A third step276includes using the first interface, in combination with the first indication and the second indication, to automatically construct a call to the REST service in accordance with a predetermined description of the REST service and on behalf of the process step. Note that if the third step276is implemented pre-deployment, then the call that is constructed represents an instruction as to how to make such a call when it is implemented post-deployment. In cases where the third step276is implemented post-deployment, it may involve the REST connector40ofFIG.1automatically packaging and formatting an input resource/operation pair to be delivered to the REST API28as a request message that has been formatted in accordance with the WADL definition40exposed by the REST API28and used by the REST connector40to package calls (e.g., prepare and format request messages) for delivery to the REST APIs28ofFIG.1; a rough sketch of such packaging appears after this paragraph. Note that the second example method270may be altered, without departing from the scope of the present teachings. For example, the first two steps272,274may be interchanged. Furthermore, additional steps or details may be provided, or certain steps or details may be removed. For example, the third step276of the second example method270may further include mapping information pertaining to the first indication and the second indication into a WADL file. 
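As a rough, illustrative sketch of the post-deployment packaging described above, a connector might accept a resource/operation pair and build a request that conforms to the service's exposed description; the dictionary keys and URL below are assumptions for illustration only and are not taken from the disclosure.

# Illustrative-only sketch: package a (resource, operation) pair into a request
# that conforms to a previously obtained description of the REST service.
def build_rest_call(description, resource_path, operation):
    allowed = description["resources"].get(resource_path, [])
    if operation not in allowed:
        raise ValueError(f"{operation} not exposed on {resource_path}")
    return {
        "method": operation,
        "url": description["base_url"].rstrip("/") + "/" + resource_path.strip("/"),
        "headers": {"Accept": "application/json"},
    }

exposed_description = {                          # stand-in for the service's exposed interface
    "base_url": "https://api.sample.us/container",
    "resources": {"items": ["GET", "POST"], "items/{id}": ["GET", "PUT"]},
}
print(build_rest_call(exposed_description, "items", "GET"))

A production connector would additionally honor the parameter and representation details of the exposed description, which this sketch omits.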
The mapping (e.g., corresponding to the mapping54ofFIGS.2A-2B or114ofFIG.3) may be performed to transfer a flat structure of a WADL representation into a hierarchal representation (e.g., corresponding to the representation50ofFIG.2A or120ofFIG.3) of the WADL file. The predetermined description of the REST service may include a description of an exposed second interface of the REST service. The second interface may be provided in an interface description file for the REST service, whereby the first interface acts as an abstraction of the second interface. The first interface may include or represent a REST connector. The REST connector may communicate with the REST service via the second interface that is exposed by the REST service. The first interface may further include one or more connectors, which are characterized by one or more connector configurations. The one or more connector configurations may include one or more parameters. The one or more parameters may include a network address (e.g., a URL) for an endpoint associated with the resource of the REST API to be called by the one or more connectors. Note that the interface description file need not be limited to WADL format, but instead can be a RAML file, a YAML file, or other type of file, without departing from the scope of the present teachings. Furthermore, note that the call to the REST API or service may be structured in accordance with the description of the exposed second interface. The third step276may further include mapping a first interface model (describing the first interface as indicating a collection of one or more resources and one or more associated operations), into a second interface model (describing the second interface in accordance with a WADL description). The first model may be characterized by a substantially flat structure, and the second model is characterized by a hierarchical structure, e.g., with one or more nested resources. The so-called “process step” may correspond to a flow element in a BPMN model of the process-based software application (e.g., the process-based software application implemented via the webpage code24ofFIG.1). FIG.8is a general block diagram of a system900and accompanying computing environment usable to implement the embodiments ofFIGS.1-7. The example system900is capable of supporting or running various hardware and/or software modules and associated methods discussed with reference toFIGS.1-1. Note that certain embodiments may be implemented using one or more standalone applications (for example, residing in a user device) and/or one or more web-based applications implemented using a combination of client-side and server-side code. The general system900includes user devices960-990, including desktop computers960, notebook computers970, smartphones980, mobile phones985, and tablets990. The general system900can interface with any type of user device, such as a thin-client computer, Internet-enabled mobile telephone, mobile Internet access device, tablet, electronic book, or personal digital assistant, capable of displaying and navigating web pages or other types of electronic documents and UIs, and/or executing applications. Although the system900is shown with five user devices, any number of user devices can be supported. A web server910is used to process requests from web browsers and standalone applications for web pages, electronic documents, enterprise data or other content, and other data from the user computers. 
The web server910may also provide push data or syndicated content, such as RSS feeds, of data related to enterprise operations. An application server920operates one or more applications. The applications can be implemented as one or more scripts or programs written in any programming language, such as Java, C, C++, C#, or any scripting language, such as JavaScript or ECMAScript (European Computer Manufacturers Association Script), Perl, PHP (Hypertext Preprocessor), Python, Ruby, or TCL (Tool Command Language). Applications can be built using libraries or application frameworks, such as Rails, Enterprise JavaBeans, or .NET. Web content can be created using HTML (HyperText Markup Language), CSS (Cascading Style Sheets), and other web technology, including templating languages and parsers. The data applications running on the application server920are adapted to process input data and user computer requests and can store or retrieve data from data storage device or database930. Database930stores data created and used by the data applications. In an embodiment, the database930includes a relational database that is adapted to store, update, and retrieve data in response to SQL format commands or other database query languages. Other embodiments may use unstructured data storage architectures and NoSQL (Not Only SQL) databases. In an embodiment, the application server920includes one or more general-purpose computers capable of executing programs or scripts. In an embodiment, web server910is implemented as an application running on the one or more general-purpose computers. The web server910and application server920may be combined and executed on the same computers. An electronic communication network940-950enables communication between user computers960-990, web server910, application server920, and database930. In an embodiment, networks940-950may further include any form of electrical or optical communication devices, including wired network940and wireless network950. Networks940-950may also incorporate one or more local-area networks, such as an Ethernet network; wide-area networks, such as the Internet; cellular carrier data networks; and virtual networks, such as a virtual private network. The system900is one example for executing applications according to an embodiment of the invention. In another embodiment, web server910, application server920, and optionally database930can be combined into a single server computer application and system. In a further embodiment, virtualization and virtual machine applications may be used to implement one or more of the web server910, application server920, and database930. In still further embodiments, all or a portion of the web and application serving functions may be integrated into an application running on each of the user computers. For example, a JavaScript application on the user computer may be used to retrieve or analyze data and display portions of the applications. With reference toFIGS.1and5, the developer system(s)12,152(also called client systems, as they may be used by persons other than a “developer”) ofFIGS.1and4may be implemented in whole or in part via one or more of the desktop computer960, notebook computer970, smartphone980, mobile phone985, tablet990ofFIG.8and/or other computing devices. In a particular example embodiment, the computing devices960-990run browsers, e.g., used to display the UI display screens for the client-side software18(e.g., browser) ofFIG.1and UI display screens162for the server-side REST editor38ofFIG.4. 
In a particular example embodiment, browsers of the client-side system(s)12ofFIG.1connect to the Internet, represented by the wired network940and/or wireless network950as shown inFIG.8, to thereby access one or more network-coupled servers, databases, and/or associated cloud-based functionality, e.g., as represented, inFIG.1, by the server system14and accompanying developer software34, REST API model abstraction26and enterprise applications16and databases32. Examples of process cloud functionality that may be accessed and used by the client-side system(s)12include process cloud services and accompanying process-based application development functionality34,36,28shown inFIG.1. Note that one or more of the web server910, application server920, and data storage device or database930shown inFIG.8may be used to host software corresponding to the modules24-38,14,16ofFIG.1, and modules26,38,50,154-160,164ofFIG.4, as detailed more fully below. In the particular example embodiment, process cloud functionality14,16,26,34ofFIG.1runs in a cloud computing environment that includes a collection of plural web servers910, application servers920, and data storage devices930shown inFIG.8. For example, in a particular example embodiment, process-based application development functionality34(e.g., as may be implemented using a PCS composer) ofFIG.1runs on a process cloud that communicates with a document cloud via an integration mechanism, e.g., middleware, APIs, web services, etc. The document cloud may maintain data that may otherwise be maintained in a process database (e.g., as may be represented by database32and accompanying database management system enterprise application16ofFIG.1). Note that a runtime of the application development systems34ofFIG.1and a runtime of the server system14ofFIG.1may run on one or more application servers920, as shown inFIG.8. Furthermore, the enterprise applications16ofFIG.1may run on the one or more application servers920ofFIG.8and may have further access to the enterprise databases32ofFIG.1, which may be maintained via one or more of the data storage devices930ofFIG.8. Note that in certain implementations, the webpage code24ofFIG.1may run on a web server of the server system14, which may be implemented via one or more of the web servers910ofFIG.8. The REST APIs28may also run on a web server, or alternatively, on an application server that hosts the enterprise applications16, such as the application server920ofFIG.8. Furthermore, note that the connectors catalog160may also be hosted on one or more of the data storage devices930ofFIG.8. In general, software developers and/or other users of the client-side system(s)12,152ofFIGS.1and4may subscribe to certain cloud services to facilitate development of and use of software applications and storage of associated files. A cloud service that is configured for software application or process flow development and/or implementation is called a PCS herein. A PCS may employ a networked database, e.g., the data storage device930ofFIG.8, to store files and other objects used by a given software program being developed. In general, the example server-side development environments discussed herein may be accessible to developers via browsers. The development environments may be backed by the PCS, such that certain developed software application files are stored in a PCS database (e.g., one of the databases32ofFIG.1) corresponding to the one or more of the data storage devices930ofFIG.8. 
In the particular example embodiment, the UI display screens162ofFIG.4and associated wizard UI display screens generated by the REST editor UI rendering module156of the REST editor38include accompanying UI controls and associated options. Example options include options to browse, create, delete, define, upload, download, etc., specifications for process-based software applications, configuration files, connector specifications, and so on. Note that in the particular example embodiment, browsers used by the client-side systems12,152ofFIGS.1and4interface with web servers910shown inFIG.8to access websites and accompanying webpage code, which is backed by applications used to implement the modules24-38,14,16ofFIG.1, and modules26,38,50,154-160,164ofFIG.4. The webpage code of the web servers910ofFIG.8may include or otherwise use or call web services, APIs, and/or other interfacing mechanisms to communicate with application software hosted on application servers920ofFIG.8of the cloud, which includes a collection of web servers910, application servers920, and data storage devices930ofFIG.8. Note that various embodiments discussed herein may provide substantial benefits in terms of providing efficiencies in systems and methods that achieve a new and useful end as it pertains to new software usability; particularly usability of development environments for process-based software applications that demand a simplified interface for constructing REST connectors for use in implementing steps of process-based software applications. FIG.9is a general block diagram of a computing device usable to implement the embodiments ofFIGS.1-3. While computing device500ofFIG.9is described as facilitating performing the steps as described in certain implementations herein, any suitable component or combination of components of computing device500or any suitable processor or processors associated with computing device500(also called computing system herein) may be used for performing the steps described. Hence, the example computing system500ofFIG.9may be used for implementations described herein. For example, computing system500may be used to implement server devices910,920ofFIG.8as well as to perform the method implementations described herein. In some implementations, computing system500may include a processor502, an operating system504, a memory506, and an input/output (I/O) interface508. In various implementations, processor502may be used to implement various functions and features described herein, as well as to perform the method implementations described herein. While processor502is described as performing implementations described herein, any suitable component or combination of components of system500or any suitable processor or processors associated with system500or any suitable system may perform the steps described. Implementations described herein may be carried out on a user device, on a server, or a combination of both. Computing device500also includes a software application510, which may be stored on memory506or on any other suitable storage location or computer-readable medium. Software application510provides instructions that enable processor502to perform the functions described herein and other functions. The components of computing system500may be implemented by one or more processors or any combination of hardware devices, as well as any combination of hardware, software, firmware, etc. 
For ease of illustration,FIG.9shows one block for each of processor502, operating system504, memory506, I/O interface508, and software application510. These blocks502,504,506,508, and510may represent multiple processors, operating systems, memories, I/O interfaces, and software applications. In various implementations, computing system500may not have all of the components shown and/or may have other elements including other types of components instead of, or in addition to, those shown herein. Although the description has been described with respect to particular embodiments thereof, these particular embodiments are merely illustrative and not restrictive. For example, while certain embodiments discussed herein involve integrating or leveraging functionality of a process cloud to facilitate use of a simplified model for representing and streamlining developer interaction with REST APIs or services that are used with process-based software applications (e.g., by using REST connectors that have been developed using a flat REST editor structure), embodiments are not limited thereto. For example, other types of software, not just process-based software applications (e.g., those that may be represented via BPMN models), may be more readily developed using embodiments discussed herein that simplify developer tasks required to implement REST service calls and/or other types of service calls. Any suitable programming language can be used to implement the routines of particular embodiments, including C, C++, Java, assembly language, etc. Different programming techniques can be employed, such as procedural or object oriented. The routines can execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different particular embodiments. In some particular embodiments, multiple steps shown as sequential in this specification can be performed at the same time. Particular embodiments may be implemented in a computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or device. Particular embodiments can be implemented in the form of control logic in software or hardware or a combination of both. The control logic, when executed by one or more processors, may be operable to perform that which is described in particular embodiments. Particular embodiments may be implemented by using a programmed general purpose digital computer, or by using application specific integrated circuits, programmable logic devices, field programmable gate arrays, or optical, chemical, biological, quantum or nanoengineered systems; other components and mechanisms may also be used. In general, the functions of particular embodiments can be achieved by any means as is known in the art. Distributed, networked systems, components, and/or circuits can be used. Communication, or transfer, of data may be wired, wireless, or by any other means. It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope to implement a program or code that can be stored in a machine-readable medium to permit a computer to perform any of the methods described above.
As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Thus, while particular embodiments have been described herein, latitudes of modification, various changes, and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of particular embodiments will be employed without a corresponding use of other features without departing from the scope and spirit as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit.
11861335
DETAILED DESCRIPTION OF SEVERAL EMBODIMENTS To mitigate the above deficiencies, embodiments disclosed herein leverage an LEST system that implements a machine learning technique that utilizes known code graph and abstract syntax tree pairs to learn a function for predicting an abstract syntax tree from a corresponding nested tree object such as a JSON object. The predicted abstract syntax tree is used to generate the code for formatting (i.e., mapping) the JSON object into a standardized data structure such as a table (i.e., desired data structure). In one example, various JSON objects may be scraped from an online source. Rather than manually writing code to transform these JSON objects into a standardized data format (e.g., table), the disclosed LEST system performs machine learning by utilizing known code graph and abstract syntax tree pairs for the JSON objects from the source to learn a function for predicting a corresponding abstract syntax tree from a JSON object. The corresponding abstract syntax tree is used to generate the code for formatting the JSON object into a standardized data structure. In other words, known conversions from JSON objects to the standardized data structure may be used to train the LEST system to generate code that will accurately transform the JSON objects to the standardized data structure even when changes occur in the layout and in the communication protocol of the informational source. The above-described features are now described in detail with respect toFIGS.1-7. It is noted that the examples disclosed herein are directed to the conversion of JSON objects to data tables. However, it is noted that the disclosed system and method are applicable to converting any nested tree object to any standardized data structure. JSON, or JavaScript Object Notation is an intuitive notation for representing and transmitting data between servers and web applications. Essentially a JSON object includes key-value pair and an ordered list of data values. The JSON object may support various data types including, but not limited to strings, numbers, Boolean values, arrays and objects. In one example, JSON objects are used in application programming interfaces (API) for exchanging data between different computer systems. For example, payroll data may be sent as JSON objects from a financial institution to a payroll system for further processing. FIG.1shows a diagram100of an example JSON object converted to a standardized data structure (e.g., a table). JSON object102generally includes key-value pairs including arrays of strings for identifying categories, characters for identifying employees and numerical values for payroll data of employees. The strings include the categories (e.g., employee, payroll, month, payroll code, and totals), whereas the characters and numerical values are the values corresponding to the employee data with respect to the categories. The strings, characters and numerical values of JSON object102are extracted from the JSON object and converted into a data table104by a coded algorithm. JSON object102shown inFIG.1is an example of financial data that may be stored by financial institutions for millions of customers. Each financial institution, however, may have differently formatted JSON objects for representing the financial data. Manually writing code for converting JSON objects of various formats into standardized data formats (e.g., tables) would be time consuming. 
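For illustration only, the following Python sketch shows the kind of hand-written, source-specific conversion code that the disclosed LEST system is intended to generate automatically: it flattens one hypothetical payroll JSON object into table rows. The field names and nesting below are illustrative and are not taken from the patent figures.

```python
import json

# Hand-written, source-specific conversion of a hypothetical payroll JSON object
# into table rows; every new JSON layout would require rewriting code like this.
payload = json.loads("""
{"employees": [
  {"name": "A", "payroll": {"month": "Jan", "code": "REG", "total": 1200}},
  {"name": "B", "payroll": {"month": "Jan", "code": "REG", "total": 1450}}
]}
""")

rows = [["employee", "month", "payroll code", "total"]]
for emp in payload["employees"]:
    p = emp["payroll"]
    rows.append([emp["name"], p["month"], p["code"], p["total"]])

for row in rows:
    print(row)
```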
Therefore, a method and system for automatically generating the code required to convert JSON objects of various formats into standardized data formats (e.g., tables) is beneficial. The LEST method and system described herein implement machine learning, including a training phase that utilizes known JSON objects and corresponding AST pairs to learn how to predict, during a prediction phase, ASTs from new JSON objects that may have varying formats. These ASTs may then be converted to code for converting the JSON objects into standardized data formats. The training phase generally includes four steps. The first step includes capturing known JSON objects and the known corresponding code that converts the known JSON objects into a known standardized data format. The second step of the training phase includes transforming the known JSON objects into code graphs. The third step of the training phase includes transforming the known corresponding code into abstract syntax trees (ASTs). The fourth step of the training phase includes using known code graph and AST pairs to learn a function to predict an unknown AST from a new JSON object. This is beneficial because the predicted AST can then be compiled to create a function for converting the new JSON object into the standardized data format (e.g., table). In other words, the learning process to generate the appropriate code is performed in the abstract using the relationships between the code graphs and their corresponding ASTs. These four training steps are now described with respect toFIGS.2A-2D. FIG.2Ashows diagram200of a JSON object and corresponding code for transforming the JSON object into the standardized data structure. As mentioned above, the JSON object102generally includes key-value pairs including arrays of strings for identifying categories, characters for identifying employees and numerical values for payroll data of employees. Code204for transforming the JSON object to the standardized data structure (e.g., a table) generally includes commands to “get” (i.e., extract) the various key-value pairs from the JSON object. For example, the commands may get the desired key-value pairs for certain employees (e.g., employees A and B). Although not shown, the code may also include a command for packaging and displaying the extracted key-value pairs in the standardized data structure (e.g., the table). It is noted that numerous pairs of similar JSON objects and the corresponding code for transforming the JSON objects to the standardized data structure are captured and utilized in the training phase. The pairs of JSON objects and corresponding code generally may be for converting the JSON objects into the same or a similar standardized data structure (i.e., a similar table desired by the destination system that processes the information). It is also noted that although the examples described herein are related to JSON objects and corresponding JAVA code for extracting the key-value pairs and displaying the extracted key-value pairs in the standardized data structure (e.g., the table), other programming objects and programming languages may be utilized. Once a sufficient number of JSON objects and corresponding JAVA code pairs are captured, the training phase proceeds to step two of the training process shown in diagram220ofFIG.2Bwhere a JSON object is transformed into a code graph. The code graph222is a model that represents code in a structured graph format. This is beneficial for easier analysis and manipulation of the code. 
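One way to picture the transformation of a JSON object into a code graph is the simplified static traversal sketched below, which builds nodes for the object, its keys, and its leaf values, with parent/child edges. This is only an illustration under assumed node types; the actual schema of code graph222is described next.

```python
import json

# A minimal sketch of turning a JSON object into a code-graph-like structure by
# static traversal: nodes for the object, its keys, and its leaf values, with
# parent/child edges. Node types here are assumptions, not the patent's schema.
def json_to_graph(obj, parent="root", nodes=None, edges=None):
    nodes = [] if nodes is None else nodes
    edges = [] if edges is None else edges
    if parent == "root":
        nodes.append(("root", "object"))
    if isinstance(obj, dict):
        for key, value in obj.items():
            node_id = f"{parent}.{key}"
            nodes.append((node_id, "key"))
            edges.append((parent, node_id))
            json_to_graph(value, node_id, nodes, edges)
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            node_id = f"{parent}[{i}]"
            nodes.append((node_id, "item"))
            edges.append((parent, node_id))
            json_to_graph(item, node_id, nodes, edges)
    else:
        leaf_id = f"{parent}={obj!r}"
        nodes.append((leaf_id, "value"))
        edges.append((parent, leaf_id))
    return nodes, edges

nodes, edges = json_to_graph(json.loads('{"employee": "A", "totals": [1200, 300]}'))
print(len(nodes), "nodes,", len(edges), "edges")
```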
The nodes of the graph may represent various code objects such as functions, classes, variables, etc. In the example ofFIG.2B, the nodes represent the overall object, object types, items in the object and the various strings, arrays and numerical values shown in JSON object102. Generating code graph222may be performed by using a static analysis technique that extracts the graph structure directly from JSON object102, a dynamic analysis that infers the graph structure, or a combination of the two. Once the code graph is created, the training phase proceeds to step three of the training process shown in diagram240ofFIG.2Cwhere the code is transformed into an AST242. In general, an AST is a hierarchical representation of the program code. Specifically, AST242is constructed by analyzing code204and capturing the structure of the code and relationships within the code. Each node in the AST represents a code structure or a code operation. For example, AST242includes a parent node for item "c" and child nodes that represent the "get" commands in the code and any logic in the code. It is noted that AST242inFIG.2Cis different than code graph222inFIG.2B. Specifically, AST242represents the functionality of code204, whereas code graph222represents the structure of the objects within the code. Together, code graph222and AST242fully describe code204in graphical form. Once a desired number of code graphs and corresponding ASTs are determined for a number of JSON objects, the training phase proceeds to step four of the training process shown in diagram260ofFIG.2Dfor determining a function264based on pairs of code graphs and ASTs. Specifically, graph/tree pairs262may include a code graph222and AST242for corresponding JSON objects. The number of pairs may be in the hundreds or thousands depending on the available training data. In either case, a machine learning (ML) algorithm such as a recurrent neural network (RNN) may be trained by inputting a subset of the code graphs into the RNN and adjusting weights in the RNN to create a function264for predicting the ASTs based on the corresponding code graphs. The weights may be adjusted by comparing the predicted ASTs to the known ASTs for the input code graphs. Once training is complete, prediction accuracy of function264can be confirmed by inputting another subset of the known code graphs into the RNN in an attempt to predict the known corresponding ASTs. This ensures that the RNN is not overfitted and is accurately predicting ASTs from new code graphs. Training the RNN may generally include defining the architecture of the RNN (e.g., number of layers, types of functions being performed at each node within the RNN, etc.), preparing the training data (e.g., extracting portions of the JSON object for input to different nodes of the RNN, etc.), initializing the RNN from scratch or based on a pretrained model (e.g., initializing node weights, etc.), performing forward propagation (e.g., propagating the JSON data through the various node layers to produce a predicted AST, etc.), back propagation (e.g., compute and propagate gradients through the model based on computed loss, etc.), updating the weights based on the gradients, repeating the above described steps, evaluating performance by comparing the predictions to known results, and performing finetuning (e.g., adjusting hyperparameters, etc.). RNNs are generally applicable to operating on JSON objects due to the hierarchical structured format of the JSON objects. Various RNNs may be implemented.
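The fourth training step can be sketched, under several simplifying assumptions, as the small PyTorch training loop below: code graphs and ASTs are serialized to short, equal-length token sequences, and a single-layer LSTM is fit with cross-entropy loss to map graph tokens to AST tokens. The vocabulary, model class, and toy pairs are hypothetical stand-ins for graph/tree pairs262; a production system would operate on full graph and tree structures rather than aligned token sequences.

```python
# Minimal, illustrative training-loop sketch (not the patent's implementation):
# serialize code graphs and ASTs to equal-length token sequences, then fit a
# small LSTM in PyTorch to predict the AST tokens from the graph tokens.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<sos>", "<eos>", "obj", "key", "value", "get", "row", "col"]
stoi = {t: i for i, t in enumerate(VOCAB)}

def encode(tokens, max_len=12):
    ids = [stoi["<sos>"]] + [stoi[t] for t in tokens] + [stoi["<eos>"]]
    ids += [stoi["<pad>"]] * (max_len - len(ids))
    return torch.tensor(ids[:max_len])

class GraphToAstModel(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)  # per-position logits over AST tokens

# Toy "code graph tokens" -> "AST tokens" pairs standing in for graph/tree pairs 262.
toy_pairs = [
    (["obj", "key", "value"], ["get", "row", "col"]),
    (["obj", "key", "key", "value"], ["get", "get", "row", "col"]),
]
xs = torch.stack([encode(g) for g, _ in toy_pairs])
ys = torch.stack([encode(a) for _, a in toy_pairs])

model = GraphToAstModel(len(VOCAB))
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(ignore_index=stoi["<pad>"])

for epoch in range(50):  # forward pass, loss against known ASTs, backprop, update
    logits = model(xs)
    loss = loss_fn(logits.reshape(-1, len(VOCAB)), ys.reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```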
One example of an RNN that may be used is a long short-term memory (LSTM) architecture that learns long-term dependencies in the sequential data. The LSTM may include interconnected memory cells including various states (e.g., cell states) and gates (e.g., input gates, forget gates and output gates). The overall operation of the LSTM may perform input processing, cell state updates and output computations. The gates control flow of data through the memory cells and also provide a mechanism to remember or forget certain data to provide long term memory while avoiding a vanishing gradient problem. In general, the LSTM may be beneficial in that it can track long term dependencies in the input data sequence. Due to the nested tree structure of the object (e.g. JSON object), the RNN may be implemented as a GraphRNN which is designed to handle/predict structured data. The GraphRNN model works generally by decomposing the graph generation process to learn the graph structure (i.e. learn code graph structure and predict the corresponding AST graph structure). The GraphRNN may generate new graph nodes at each step and update the hidden state, while also generating edges for new nodes. GraphRNN training includes using the known code graphs, predicting the next node in the AST graphs and the connections in the AST graphs given the current state of the AST graphs. Once trained, GraphRNN can generate new AST graphs by predicting nodes and their connections one after another. In other words, the GraphRNN uses the known code graphs of the JSON objects to determine how to predict the AST graph nodes and connections. This knowledge is then used to determine new AST graphs for new JSON objects. In either case, once the learning phase is complete, a function264is learned. Function264can be used in the prediction phase for newly received JSON objects that may have similar, but different formats than the known JSON objects used during training. The prediction phase generally includes two steps. The first step of the prediction phase is to transform a new JSON object (JSON object with a new format) to a code graph and then apply the function264learned during the training phases to convert the code graph into a corresponding AST. The second step of the prediction phase is to compile the predicted AST to create a coded function that transforms the new JSON object to the standardized data format (e.g., the table). These two steps are now described with respect toFIGS.3A and3B. Predicting ASTs by the RNN generally includes preprocessing the input JSON object into a code graph to be input to the RNN, performing coding if beneficial, and sequentially feeding the preprocessed data into the input layer of the RNN, and generating a predicted output (i.e., AST structure and values).FIG.3Ashows a diagram300of using the function to transform the code graph to a corresponding abstract syntax tree. As mentioned above, JSON objects can be transformed into code graphs. In this case, a new JSON object302is converted into a new code graph304using static techniques, dynamic techniques or a combination of the two. The nodes of the graph represent various code objects such as functions, classes, variables, etc. The generated code graph304is input to learned function264from the training phase. Learned function264converts code graph304into a new corresponding AST306. 
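The first prediction step can be pictured with the following sketch, in which learned_function is a trivial, hypothetical stand-in for function264 (a fixed rule rather than a trained RNN): a new JSON object is reduced to a flat, graph-like list of leaf paths, and the stand-in maps that structure to an AST-like nested dictionary.

```python
import json

# Stand-in for function 264: map every leaf path in a code-graph-like structure
# to a "get" node in an AST-like dict. A real system would use the trained RNN.
def leaf_paths(obj, prefix=()):
    if isinstance(obj, dict):
        for k, v in obj.items():
            yield from leaf_paths(v, prefix + (k,))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            yield from leaf_paths(v, prefix + (i,))
    else:
        yield prefix

def learned_function(code_graph):
    return {"type": "Row",
            "children": [{"type": "Get", "path": list(p)} for p in code_graph]}

new_json = json.loads('{"employee": "B", "payroll": {"month": "Feb", "total": 1450}}')
code_graph = list(leaf_paths(new_json))        # JSON object -> flattened graph-like form
predicted_ast = learned_function(code_graph)   # graph-like form -> AST-like structure
print(json.dumps(predicted_ast, indent=2))
```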
Once the new corresponding AST306is determined, the process can proceed to step two of the prediction phase320shown inFIG.3Bwhere the AST is compiled to transform the corresponding JSON object into the standardized data structure. For example, AST306is compiled to create a function322which converts JSON object302into the desired standard data structure (e.g., table)324. It is noted that the steps of the prediction process as illustrated inFIGS.3A and3Bmay be repeated for each new JSON object received. As long as the ML algorithm is properly trained, function264predicts an accurate AST which can then be compiled into function322for obtaining code for accurately converting the JSON object302into the desired table format324. It is noted that the function264could be updated continuously or periodically as new training data is received. For example, incorrect predictions could be corrected manually by the programmers and then used as new training data in a subsequent learning phase. After each subsequent learning phase, function264is updated to more accurately predict the AST corresponding to the JSON object. As described above, the process performed by the system and method disclosed herein includes both a training phase and a prediction phase. Each of these phases is described in detail with respect toFIGS.4and5below. FIG.4shows a flowchart400for the training phase of the disclosed LEST system. In step402, the training data is captured or received. This training data generally includes known JSON objects and corresponding code that accurately converts the JSON objects to the desired output structure (e.g., desired table structure). Generally, hundreds or thousands of pairs may be used to accurately train the algorithm. In step404, the training phase transforms the JSON objects to respective code graphs, and in step406, the training phase transforms the code graphs into respective ASTs. Once the JSON objects are represented by code graphs and ASTs, the code graphs and ASTs may be paired up and used as training data for the ML algorithm. For example, in step408the training phase performs machine learning by inputting the code graphs into the algorithm (e.g., RNN) for predicting the corresponding ASTs and adjusting weights of the algorithm based on accuracy of the predictions. Once the training is complete, the function264for accurately predicting ASTs based on code graphs is generated. Then, in step410, the function264is output to the system for performing the prediction phase. FIG.5shows a flowchart500for the prediction phase of the LEST system. In step502, a new JSON object is captured. This JSON object may have a different structure than the JSON objects used during training. In step504, the prediction phase transforms the new JSON object into a code graph. The prediction phase then applies the function264to the code graph in step506in order to predict the corresponding AST associated with the code graph. In step508, the prediction phase compiles the predicted AST to create a function322to transform the new JSON object into the desired format (e.g., the table). In step510, the prediction phase applies the created function322to perform the desired transformation. Steps502-510may be repeated as necessary for newly captured JSON objects. As mentioned above, the training phase may also be repeated as desired when new training data is received or generated. This ensures that the prediction algorithm is accurate regardless of the varying formats of the JSON objects being received.
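The compile step of the prediction phase (step508above) can be illustrated with Python's own ast module: a predicted tree, stood in for here by parsing generated source text, is compiled into a callable that maps a JSON object onto one table row. The key names are hypothetical.

```python
import ast, json

# A predicted AST is stood in for here by parsing a generated source string; the
# point is the final step: compiling a syntax tree into a callable that maps a
# JSON object onto one table row. The key names below are hypothetical.
predicted_source = (
    "def convert(obj):\n"
    "    return [obj['employee'], obj['payroll']['total']]\n"
)
predicted_ast = ast.parse(predicted_source)          # tree analogous to AST 306
code = compile(predicted_ast, "<predicted-ast>", "exec")
namespace = {}
exec(code, namespace)                                # yields an analogue of function 322

row = namespace["convert"](json.loads('{"employee": "A", "payroll": {"total": 1200}}'))
print(row)  # -> ['A', 1200]
```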
FIG.6shows an example of a system600configured for providing the LEST system shown inFIGS.2A-2D,3A,3B,4and5. It should be understood that the components of the system600shown inFIG.6and described herein are merely examples and systems with additional, alternative, or fewer number of components should be considered within the scope of this disclosure. As shown, the LEST system600comprises at least one end user device602and servers604and606interconnected through a network610. In the illustrated example, server604supports operation of the LEST system and server606supports operation of the JSON data source(s). In the illustrated example, user device602is a PC but could be any device (e.g., smartphone, tablet, etc.) providing access to the servers via network610. User device602has a user interface UI, which may be used to communicate with the servers using the network610via a browser or via software applications. For example, user device602may allow the user to access the LEST system and JSON sources running on servers604and606, thereby initiating conversion of JSON objects to standardized data objects (e.g., tables). The network610may be the Internet and or other public or private networks or combinations thereof. The network610therefore should be understood to include any type of circuit switching network, packet switching network, or a combination thereof. Non-limiting examples of the network610may include a local area network (LAN), metropolitan area network (MAN), wide area network (WAN), and the like. In an example, end user device602may communicate with servers604and606via a software application to control LEST system disclosed herein. The software application may initiate server604to perform LEST on the JSON objects of the JSON source(s) according to the systems/methods shown inFIGS.1-5. Servers604,606and user device602are each depicted as single devices for ease of illustration, but those of ordinary skill in the art will appreciate that servers604and606and user device602may be embodied in different forms for different implementations. For example, any or each of the servers may include a plurality of servers including a plurality of databases, etc. Alternatively, the operations performed by any of the servers may be performed on fewer (e.g., one or two) servers. In another example, a plurality of user devices (not shown) may communicate with the servers. Furthermore, a single user may have multiple user devices (not shown), and/or there may be multiple users (not shown) each having their own respective user devices (not shown). Regardless, the hardware configuration shown inFIG.6may be a system that supports the functionality of the LEST system shown inFIGS.1-5. FIG.7shows a block diagram of an example computing device700that is configured for facilitating the LEST system based on the principles disclosed herein. For example, computing device700may function as the servers604,606and/or user device602, or a portion or combination thereof in some embodiments. The computing device700performs one or more steps of the methods shown inFIGS.1-5. The computing device700is implemented on any electronic device that runs software applications derived from compiled instructions, including without limitation personal computers, servers, smart phones, media players, electronic tablets, game consoles, email devices, etc. 
In some implementations, the computing device700includes one or more processors702, one or more input devices704, one or more display devices706, one or more network interfaces708, and one or more computer-readable media710. Each of these components is coupled by a bus712. Display device706includes any display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology. Processor(s)702uses any processor technology, including but not limited to graphics processors and multi-core processors. Input device704includes any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad or display. Bus712includes any internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, USB, Serial ATA or FireWire. Computer-readable medium710includes any non-transitory computer readable medium that provides instructions to processor(s)702for execution, including without limitation, non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.), or volatile media (e.g., SDRAM, ROM, etc.). Computer-readable medium710includes various instructions714for implementing an operating system (e.g., Mac OS®, Windows®, Linux). The operating system may be multi-user, multiprocessing, multitasking, multithreading, real-time, and the like. The operating system performs basic tasks, including but not limited to: recognizing input from input device704; sending output to display device706; keeping track of files and directories on computer-readable medium710; controlling peripheral devices (e.g., disk drives, printers, etc.) which can be controlled directly or through an I/O controller; and managing traffic on bus712. Network communications instructions716establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, telephony, etc.). Application(s)718may comprise an application that uses or implements the processes described herein and/or other processes. The processes may also be implemented in the operating system. The described features may be implemented in one or more computer programs that may be executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program may be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. In one embodiment, this may include Python. The computer programs therefore are polyglots. Suitable processors for the execution of a program of instructions may include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor may receive instructions and data from a read-only memory or a random-access memory or both. 
The essential elements of a computer may include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer may also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data may include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features may be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The features may be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a user computer having a graphical user interface or an Internet browser, or any combination thereof. The components of the system may be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a telephone network, a LAN, a WAN, and the computers and networks forming the Internet. The computer system may include user devices and servers. A user device and server may generally be remote from each other and may typically interact through a network. The relationship of user device and server may arise by virtue of computer programs running on the respective computers and having a relationship with each other. One or more features or steps of the disclosed embodiments may be implemented using an API. An API may define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation. The API may be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter may be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters may be implemented in any programming language. The programming language may define the vocabulary and calling convention that a programmer will employ to access functions supporting the API. In some implementations, an API call may report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc. While various embodiments have been described above, it should be understood that they have been presented by way of example and not limitation. 
It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. For example, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims. In addition, it should be understood that any figures which highlight the functionality and advantages are presented for example purposes only. The disclosed methodology and system are each sufficiently flexible and configurable such that they may be utilized in ways other than that shown. Although the term “at least one” may often be used in the specification, claims and drawings, the terms “a”, “an”, “the”, “said”, etc. also signify “at least one” or “the at least one” in the specification, claims and drawings. Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112(f). Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112(f).
11861336
DETAILED DESCRIPTION The present invention provides a multiple TALP family enhancement and management system (MTF E&M system) comprised of time-affecting linear pathway (TALP) and TALP prediction polynomial generation, TALP enhancement, TALP simulation and selection, TALP modeling, TALP family/cross-family generation, and family/cross-family TALP output data optimization. TALPs are generated from paired Input/Output (I/O) datasets or from the decomposition of algorithms and/or software codes. TALPs are executed using test input data to generate prediction polynomials. System-generated TALPs can be merged with enhancement TALPs. Using TALP-associated prediction polynomials and acceptance criteria comprised of paired I/O datasets that represent acceptable TALP behavior, system-generated and enhanced TALPs are simulated and selected. The TALP-associated prediction polynomials of selected TALPs are then modeled using actual input data values from external platforms, and their output data values pooled, discretized, and optimized for use by, or distribution to, system users. Alternatively, the TALP-associated prediction polynomials of selected TALPs are executed using the input values from the TALP Family Selection criteria for inclusion in TALP Families. The associated output values of these TALP-associated prediction polynomials are compared to the associated output values of the TALP Family Selection criteria. TALP-associated prediction polynomials from each family can be re-executed using input from the Proposed TALP Cross-Family Structure criteria, with output value comparison for inclusion in one of those structures. TALP-associated prediction polynomials for each TALP in each TALP Family and each TALP Cross-Family are modeled using actual input data from external platforms, and their output data values pooled, discretized, and optimized for use by, or distribution to, system users. A TALP is an execution pathway through an algorithm or software code which includes looping structures. TALPs allow for the direct and automatic selection of a pathway through an algorithm or software code via the examination of the values of input non-loop-control variable attributes. Time prediction for TALPs occurs through varying the input loop control variable attributes and generating a time prediction polynomial. This means that examining the values of input loop control variable attributes is enough to know the processing time of a TALP. The output value prediction of a TALP occurs through varying the attribute domain of the input variable attributes that affect output values forming an output prediction polynomial. This means that it is possible to know the output values of a TALP through the examination of the input variables. Various TALP methods and systems are disclosed in U.S. Pat. No. 11,520,560, which is hereby fully incorporated herein by reference and can be implemented with various aspects, embodiments, methods, and systems of the present invention. Various devices or computing systems can be included and adapted to process and carry out the aspects, computations, and algorithmic processing of the software systems and methods of the present invention. Computing systems and devices of the present invention may include a processor, which may include one or more microprocessors and/or one or more circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), etc. Further, the devices can include a network interface. 
The network interface is configured to enable communication with a communication network, other devices and systems, and servers, using a wired and/or wireless connection. The devices or computing systems may include memory, such as non-transitory memory, which may include one or more non-volatile storage devices and/or one or more volatile storage devices (e.g., random access memory (RAM)). In instances where the computing devices include a microprocessor, computer readable program code may be stored in a computer readable medium or memory, such as, but not limited to, drive media (e.g., a hard disk or SSD), optical media (e.g., a DVD), memory devices (e.g., random access memory, flash memory), etc. The computer program or software code can be stored on a tangible, or non-transitory, machine-readable medium or memory. In some embodiments, computer readable program code is configured such that when executed by a processor, the code causes the device to perform the steps described above and herein. In other embodiments, the device is configured to perform steps described herein without the need for code. It will be recognized by one skilled in the art that these operations, algorithms, logic, method steps, routines, sub-routines, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof without deviating from the spirit and scope of the present invention as recited within the claims attached hereto. The devices or computing devices may include an input device. The input device is configured to receive an input from either a user (e.g., admin, user, etc.) or a hardware or software component, as disclosed herein in connection with the various user interface or automatic data inputs. Examples of an input device include a keyboard, mouse, microphone, touch screen and software enabling interaction with a touch screen, etc. The devices can also include an output device. Examples of output devices include monitors, televisions, mobile device screens, tablet screens, speakers, remote screens, etc. The output device can be configured to display images, media files, text, video, or play audio to a user through speaker output. Server processing systems for use with, or connected to, the systems of the present invention can include one or more microprocessors, and/or one or more circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), etc. A network interface can be configured to enable communication with a communication network, using a wired and/or wireless connection, including communication with devices or computing devices disclosed herein. Memory can include one or more non-volatile storage devices and/or one or more volatile storage devices (e.g., random access memory (RAM)). In instances where the server system includes a microprocessor, computer readable program code may be stored in a computer readable medium, such as, but not limited to, drive media (e.g., a hard disk or SSD), optical media (e.g., a DVD), memory devices, etc. FIG.1is a diagram showing an example of a multiple TALP Family Enhancement and Management (MTF E&M) system100. The exemplary MTF E&M System shown inFIG.1is composed of four primary components: TALP Simulation and Selection102, TALP Family Generation104, TALP Cross-Family Generation106, and TALP and TALP Prediction Polynomial Generation108.
Referring toFIGS.1-1B, the present invention comprises software systems and methods that use TALPs that are generated from detected paired I/O dataset values (automatic conversion to TALP form), algorithms, and/or software codes. The generated TALPs, regardless of their origin, can be used to create a set of executable prediction polynomials112, as shown inFIG.3andFIG.4. These prediction polynomials are generated in the TALP and TALP Prediction Polynomial Generation component108of the MTF E&M system100, general embodiment, by varying input data values114from the Test Data Makers116, giving associated output values, timings, and memory allocation. These output values, timings, and memory allocation values are used in an extended source values table, from which the prediction polynomials are constructed. Once a TALP with its associated prediction polynomials has been generated, then it is possible for that TALP's performance to be enhanced by merging the system-generated TALP with another TALP called an enhancement TALP118that originates from the Super User120. This merging is shown inFIG.1AandFIG.1B. FIG.1Ashows a diagram whereby the Super User120sends both merge criteria122and enhancement TALPs118with their associated prediction polynomials to the TALP Merge subcomponent126of the system's TALP and TALP Prediction Polynomial Generation component108. The TALP Merge subcomponent uses the merge criteria and the enhancement TALPs to determine if an enhancement TALP is to be merged with some system-generated TALP. Merging for the system100means linking the output of one TALP to the input of another TALP. There are two possible ways for a single system-generated TALP to be merged with a single enhancement TALP: (1) the output of the system-generated TALP can be the input to the enhancement TALP, or (2) the output of the enhancement TALP can be the input of the system-generated TALP. More than one enhancement TALP can be linked to a single system-generated TALP. FIG.1Bshows a workflow of a system-generated TALP receiving the merge criteria input values130from the Super User120. The system-generated TALP's prediction polynomials execute using the input values to generate associated output values. The output values are then compared to the merge criteria output values132to determine a match. System-generated TALP prediction polynomials whose outputs match the merge criteria are shown to be linked to an associated enhancement TALP124. After TALP generation, the prediction polynomials of the system-generated and enhanced TALPs are used in simulation184. These prediction polynomials are executed using input data from acceptance criteria128, giving associated outputs. Comparing the simulated output values with the associated set of acceptance criteria output values allows for the automatic selection186of TALPs. Once the TALP is simulated and selected, it is then either modeled in the Data Discretization Optimization Engine (DDO)105using actual input data values from external platforms and made available for use by, or distribution directly to, a TALP user122or matched to criteria for placement into a TALP Family124or TALP Cross-Family126. Selected TALPs are added to TALP families based on the TALP Family Selection criteria. TALPs from more than one TALP family can be combined into TALP Cross-Families using proposed TALP Cross-Family structures.
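A schematic sketch of simulation184and selection186follows, under the simplifying assumption that each candidate TALP is represented by a single value-prediction polynomial expressed as a plain Python function; a TALP is selected when its simulated outputs match the acceptance criteria outputs within an epsilon. All names and numeric values are illustrative.

```python
# Candidate TALP value-prediction polynomials (hypothetical, one output per input).
candidate_talps = {
    "talp_a": lambda x: 2 * x**2 + 3,
    "talp_b": lambda x: 5 * x + 1,
}

# Acceptance criteria 128: paired acceptable inputs and outputs, plus a tolerance.
acceptance_inputs = [1, 2, 4]
acceptance_outputs = [5, 11, 35]   # matches 2x^2 + 3
epsilon = 1e-6

selected = []
for name, poly in candidate_talps.items():
    simulated = [poly(x) for x in acceptance_inputs]             # simulation 184
    if all(abs(s - a) <= epsilon                                  # selection 186
           for s, a in zip(simulated, acceptance_outputs)):
        selected.append(name)

print(selected)  # -> ['talp_a']
```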
The behavior of each TALP in a TALP Family124or TALP Cross-Family126can be modeled in the Data Discretization Optimization Engine105using actual input data values from external platforms, and their output data values pooled, discretized, and optimized for use by, or distribution to, system users. Various embodiments of the systems and methods of the present invention perform the following communication and processing:1) In order to generate TALPs and TALP prediction polynomials112, the "TALP and TALP Prediction Polynomial Generation" component108of the MTF E&M system100receives paired I/O Datasets (called Datasets) from Dataset Sources140, algorithms from Algorithm Sources142, software codes from Software Sources144, and input data values from the Test Data Makers116. For paired I/O Datasets, there is no algorithm or software code to decompose into TALPs; instead, a Value Complexity polynomial that represents a TALP is generated. Once a TALP has been generated, its behavior can be enhanced by merging enhancement TALPs, from the Super User, with the system-generated TALPs.2) The "TALP Simulation and Selection" component102of the MTF E&M system100receives the generated TALPs with their associated prediction polynomials and the acceptance criteria (comprised of a set of acceptable input values with associated acceptable output values).a. The "TALP Simulation and Selection" component102activates its "TALP Simulation" subcomponent184using the list of generated TALPs with their associated prediction polynomials from the "TALP and TALP Prediction Polynomial Generation" component108and the acceptable input values of the acceptance criteria from the Super User120. The various TALP prediction polynomials are executed using these acceptable input values, generating a set of associated predicted output values for each TALP.b. The "TALP Simulation and Selection" component102activates its "TALP Selection" subcomponent186using these acceptable input values with their generated associated predicted output values from the TALP simulation. These predicted output values are compared to the acceptable output values of the Acceptance Criteria128, creating a set of selected TALPs when the generated predicted output values match the acceptable output values. Selected TALPs are either modeled using actual input data values from external platforms for direct use by system users or executed using the input values from the TALP Family Selection criteria for inclusion in TALP Families.3) The "TALP Family Generation" component104of the MTF E&M system100receives the selected TALPs with their associated prediction polynomials from the "TALP Simulation and Selection" component102and the TALP Family Selection Criteria150(comprised of a set of acceptable input values with associated output values for each family type) from the System Operator121. The prediction polynomials associated with each selected TALP are executed using the acceptable family input values, generating a set of output values that are compared to the acceptable family output values for inclusion into the matching family.a. After inclusion in a TALP Family, the prediction polynomials of each TALP in each Family are executed using input data from external platform TALP input data sources151, generating a pool of output values152made available to TALP User Categories.b.
Alternately, after inclusion in a TALP Family, the prediction polynomials of each TALP in each Family are modeled in the DDO Engine105then pooled, discretized and optimized for use by, or distribution to, the various TALP User Categories.4) The “TALP Cross-Family Generator” component107of the MTF E&M system100receives the TALPs with their associated prediction polynomials from the families of the “TALP Family Generation” component104and the Proposed TALP Cross-Family Structure154(comprised of a set of cross-family acceptable input values with associated output values for each cross-family type) from the TALP Cross-Family Designer156. The prediction polynomials associated with each TALP of each TALP Family are executed using the acceptable cross-family input values generating a set of output values that are compared to the acceptable cross-family output values, for inclusion into the matching cross-family.a. After inclusion in a TALP Cross-Family, prediction polynomials of each TALP in each cross-family are executed using external platform input data from actual TALP input data sources, generating a pool of output values158made available to TALP User Categories.b. Alternately, after inclusion in a TALP Cross-Family, the prediction polynomials of each TALP in each Family are modeled in the DDO Engine105then pooled, discretized and optimized for use by, or distribution to, the various TALP User Categories. FIG.2is a diagram showing the details of the TALP and TALP Polynomial Generation component108of the MTF E&M system110: general embodiment, including TALP generation108aand TALP Prediction Polynomial Generation108b. The polynomial form of an algorithm occurs when a set of input variable attribute values can be used to generate a set of output variable attribute values and that those values approximate the original algorithm behavior to within some epsilon. This means that a predictive, executable polynomial (Value Complexity) that is formed from data detection represents an algorithm in polynomial form. That is, it is possible to automatically generate TALPs from sets of detected data. TALPs can also be generated from the decomposition of algorithms and software code. Executing the generated TALPs using test data from the Test Data Maker116allows the system to generate both advanced time complexity and advanced space complexity polynomials. Advanced time complexity uses input variable attribute values to predict the processing time. Descaling the Advanced Time Complexity polynomial gives the Advanced Speedup polynomial. Speedup describes the processing speed from a given input variable attribute value. Advanced space complexity uses input variable attribute values to predict memory allocation. Descaling the Advanced Space-Complexity polynomials gives the Freeup polynomials. Freeup describes the memory requirement for a given input variable attribute value. There are three Advanced Space Complexity polynomials for the following: Random Access Memory Allocation, Cache Memory Allocation, and Output Memory Allocation. Because there are three Advanced Space Complexity polynomials, there are also three Freeup polynomials. FIG.3is a diagram showing an input variable attribute vector160(x1through xn), an associated output variable attribute vector162(v1through vn), an associating timing variable attribute vector164(t1through tn) and an associated memory allocation variable attribute vector166(s1through sn) in an extended source values table. 
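The following sketch shows one way such an extended source values table could be collected for a toy TALP consisting of a single loop: the loop control input is varied, and the output value, elapsed time, and peak memory allocation are recorded per input. This is only an illustration of the paired vectors described above, not the patent's exact measurement procedure.

```python
import time
import tracemalloc

# A toy TALP: one loop whose work grows with the loop-control input n.
def toy_talp(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

rows = []  # extended source values table: input, output value, timing, memory allocation
for n in (1_000, 2_000, 4_000, 8_000):
    tracemalloc.start()
    start = time.perf_counter()
    value = toy_talp(n)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    rows.append((n, value, elapsed, peak))

for row in rows:
    print(row)
```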
The vectors are accessed pairwise: Input Values and Output Values, Input Values and Timing Values, Input Values and Memory Allocation. These pairs are used to generate respectively: Value Complexity, Advanced Time Complexity, and Advanced Space Complexity. TALP values can be generated for any valid set of input values as long as the input value is greater than the minimum value used to create the Value Complexity polynomial, the Advanced Time Complexity polynomial, or Advanced Space Complexity polynomial. Below is an example of how to construct polynomials from the various data vector pairings. In this example, an advanced space complexity polynomial is generated from an executable software code pathway: Example Advanced Space Complexity Polynomial Generation A table called the Source Values table containing ordered, scaled input dataset sizes and associated scaled space values is compared to a table called the Target Values table containing a set of scaled dataset sizes and associated space values generated from some pre-existing functions depicted as the column headers, following the steps below.1. Referring to Table170ofFIG.3A, a value for an input dataset size d is divided evenly and successively (varying the input dataset size), then the TALP's associated executable code is executed by the system to find the associated space values s which are sorted and stored in the Input Dataset Size and Space table.2. Referring to Table172ofFIG.3B, the input dataset size d and associated space values s are scaled by their respective smallest received values, dminand smin, and saved in a Source Values table. In this example, dmin=2 and smin=3. Scaling gives the Source Values table.3. Referring to Table174ofFIG.3C, the scaled space values s of the Source Values table are compared to those found in a previously created Target Values table.4. The functions (polynomial terms) in the headers of the columns of the Target Values table are in ascending order. Zero values in the Target Values table are not compared to the corresponding Source Values table space value, but not comparing a row does not eliminate the corresponding Target table column function header from consideration for inclusion in the final polynomial. When comparing the Source Values table space values to corresponding Target Values table space values, all Source Values table s values in a column will be at least one of the following:a. Greater than or equal to all associated Target Values table values in a column (plus or minus some epsilon value),b. Less than or equal to all associated Target Values table values in a column (plus or minus some epsilon value), orc. All Source Values table s values are the same value (plus or minus some epsilon value).The function header of any Target Values table column whose rows do not meet condition a or condition b above is eliminated from consideration for inclusion in the final polynomial, and a comparison is made using a different target column. If condition c is met, the value is considered a constant and added to a Saved Term List fterm. Condition c means the polynomial is complete, and the process jumps to Step 8.5. When Source space values are compared to the corresponding Target space values, the closest column header that meets condition a or b is saved in the ftermlist and the process continues with Step 6. If no tested columns meet condition a or b then an error condition exists, and the "Error—stop processing" message is displayed. This comparison is a binary search process.6.
Referring to Table176ofFIG.3D, the selected Target Values table column's values are subtracted from the corresponding Source Values table space values, and those new values are saved in a temporary Source Values table. If the temporary Source space values contain any negative values, then the following found polynomial term may be a negative term in which case two versions of the term (negative and positive) are saved with the one whose maximum error (as calculated in step 9) is the smallest becoming the selected version. The absolute values of the temporary Source space values are saved as the new Source Values table.7. Referring to Table178ofFIG.3E, if there are any computed zero values in the new Source Values table, the values of the current column below the zero are shifted to the row above, replacing the zero value. Step 4 is then repeated using the new Source Values table.8. All saved terms in the ftermlist are summed, creating the predictive, monotonic polynomial v(d) for input variable attribute d. To de-scale this polynomial with its resulting scaled space value s, it is multiplied by the smallest original s value, called smin, within the original Source Values table. Equation 1 (Variable Space Complexity as a Monotonic Polynomial): v(d) = smin × Σ (i=1 to n) fterm_i. Coefficients are automatically calculated from this step. Two or more like terms are summed to produce the coefficient of the term. For example, summing s^2 and s^2 gives 2s^2. FIG.4is a diagram180showing that multiple TALPs can be processed simultaneously using their associated prediction polynomials. An array of input values is constructed and used to generate either an array of output values from Value Complexity I or a single pooled value from Value Complexity II. The input variable attribute array is also used to generate an Advanced Time Complexity value via the use of an Advanced Time Complexity polynomial and an Advanced Space Complexity value via the use of an Advanced Space Complexity polynomial. FIG.5shows a work flow182of TALP polynomials used in the simulation and selection of TALPs, which is the first of the three primary components (e.g.,102) of the MTF E&M system100. TALP simulation and selection is performed as follows: TALP Simulation184 1) The system receives Asset Acceptance Criteria input values, times, and memory allocation from the Super User.2) The TALP polynomials are executed using the received Acceptance Criteria values.3) The output values from the executed TALP polynomials are saved for selection comparison. Selection186 1) The system receives output acceptance criteria for values, timings, and memory allocation from the Super User.2) The TALP polynomial's saved output data (values, timing, and memory allocation) from the simulation is compared to the received acceptance criteria output values, timings, and memory allocation.3) The TALP polynomials whose saved output values match the received acceptance criteria output values are selected. FIG.6shows a work flow190for TALP Family generation as follows:1) The system receives TALP Family Selection Criteria from the System Operator121.2) Selected TALP Family Selection Criteria Inputs are used in the execution of the selected TALP polynomials, at process192.3) The TALP outputs are compared to the TALP Family Selection Criteria outputs (values, timings, memory allocations) for inclusion in the associated TALP Family, at process(es)194.
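For illustration, the family-inclusion comparison ofFIG.6can be sketched as below: each selected TALP's polynomial is executed on a family's criteria inputs, and the TALP joins the family whose criteria outputs it reproduces within a tolerance. The family names, criteria values, and polynomials are hypothetical.

```python
# Illustrative grouping of selected TALPs into families: a TALP joins a family
# when its polynomial reproduces that family's criteria outputs within a tolerance.
# Family names, criteria, and polynomials are all hypothetical.
selected_talps = {
    "talp_a": lambda x: 2 * x**2 + 3,
    "talp_c": lambda x: x + 10,
}
family_criteria = {
    "quadratic_family": {"inputs": [1, 3], "outputs": [5, 21]},
    "linear_family": {"inputs": [1, 3], "outputs": [11, 13]},
}
epsilon = 1e-6

families = {name: [] for name in family_criteria}
for talp_name, poly in selected_talps.items():
    for family, crit in family_criteria.items():
        outs = [poly(x) for x in crit["inputs"]]
        if all(abs(o - e) <= epsilon for o, e in zip(outs, crit["outputs"])):
            families[family].append(talp_name)

print(families)  # -> {'quadratic_family': ['talp_a'], 'linear_family': ['talp_c']}
```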
FIG. 7 is a work flow 196 showing the TALP Cross-Family generation as follows:
1) The system receives Proposed TALP Cross-Family Structures from the TALP Cross-Family Designer.
2) Proposed TALP Cross-Family Structure Inputs are used in the execution of the TALP polynomials of the TALPs in families, at process 198.
3) The outputs of the executed TALP polynomials are compared to the Proposed TALP Cross-Family Structure output values (values, timings, memory allocations) for inclusion in the associated TALP Cross-Family, at process(es) 200.

FIG. 8 is a diagram 202 detailing the grouping of selected TALPs into TALP families 204 as presented in FIG. 6. TALPs in TALP Families can be accessed by various categories of TALP users or used in TALP Cross-Family structures.

FIG. 9 is a diagram 210 detailing the inclusion of selected TALPs from TALP families into TALP Cross-Families 212 as presented in the FIG. 7 description. TALPs within TALP Cross-Families can be accessed by various categories of TALP users.

FIG. 10 depicts two diagrams. The first diagram 214 shows the MTF E&M system 100 contained within a stand-alone server (mobile device, desktop, laptop, rack-mounted, etc.). The second diagram 216 shows an example of a software system (Investment Management Software) put into MTF E&M form by replacing test data makers, super users, TALP cross-family designers, and users with their analogous data from external platform data, general partners, market makers, and limited partners. The second diagram 216 also shows a system 218 that is contained within a stand-alone server system.

FIG. 11 depicts two diagrams. The first diagram 220 shows the MTF E&M system accessible using a client-server model. The second diagram 224 shows an example of a software system (Investment Management Software) put into MTF E&M form by replacing test data makers, super users, TALP cross-family designers, and users with their analogous data from external platform data, general partners, market makers, and limited partners. The second diagram 224 also shows a system that is accessible using a client-server model.

FIG. 12 depicts two diagrams. The first diagram 226 shows the MTF E&M system accessible using a cloud-based model. The second diagram 228, which is also accessible using a cloud-based model, shows an example of a software system (Investment Management Software) put into MTF E&M form by replacing test data makers, super users, TALP cross-family designers, and users with their analogous data from external platform data, general partners, market makers, and limited partners.

FIG. 13 is a diagram showing an example of a MTF E&M system 100 constructed for investment software, algorithms, or datasets. The MTF E&M system for investment software is composed of four primary components: Asset Simulation and Selection 102a, Fund and Portfolio Family Generation 104a, Market Management 106a, and Assets as TALPs and Asset Prediction Polynomial Generation 108a. Again, the MTF E&M system 100 for investment software, algorithms, or datasets replaces test data makers, super users, TALP cross-family designers, and users with their analogous data from external platform data 151a, general partners 120a, market makers 156a, and limited partners 122a. This embodiment of the present invention converts asset software codes and asset algorithms into TALP form using the TALP decomposition. Alternatively, detected paired I/O datasets from assets can be converted into the equivalent of TALP form by their transformation into Value Complexity polynomials. These TALPs are herein called asset TALPs.
Associated prediction polynomials for each asset TALP can be generated by executing the asset TALPs using the input values from the external platforms 151a. Asset TALP execution produces a set of input to output data pairs, input to processing time pairs, and input to memory allocation pairs. These paired I/O datasets are placed in the extended source values table shown in FIG. 14 and FIG. 15 and used to generate Value Complexity, Advanced Time Complexity, and Advanced Space Complexity prediction polynomials. Once an asset TALP with its associated prediction polynomials has been generated, then it is possible for that asset TALP's performance to be enhanced by merging the system-generated asset TALP with another asset TALP, called an enhancement asset TALP, that originates from the General Partner 120a. Once the asset TALP is generated, regardless of its merge status, it is available for use by, or distribution directly to, an asset TALP user or placement into an Asset TALP Family 124a or Asset TALP Cross-Family 126a.

The asset TALP-associated prediction polynomials are each given a set of input asset acceptance criteria data 128a from the General Partner 120a. This data is used in the execution of the asset TALP-associated prediction polynomials in the system's Asset Simulation 184a. The output values from asset simulation are compared to the output asset acceptance criteria 128a of the General Partners 120a. Any asset TALP whose output values match the output asset acceptance criteria associated with the current input asset acceptance criteria is selected in the system's Asset Selection 186a for further use by the system.

The selected asset TALP's prediction polynomials are each either modeled 105a using input data values from external platforms and made available for use by various partners or executed using sets of input values from the fund/portfolio/securities family selection criteria from the System Operator 121a, giving output values called asset family output values. The asset TALPs whose asset family output values match the output values of the fund/portfolio/securities family selection criteria are added to the matching fund or portfolio family in the system's Fund and Portfolio Family Generation component 104a. The outputs of these asset TALPs are pooled and made directly available to the various types of partners or modeled, pooled, and discretized and then made available to the various types of partners.

The prediction polynomials of the selected asset TALPs in a family are each given sets of input values of the Proposed Asset Cross-Family Market Structures data 154a from the Market Maker 156a, giving output values called herein asset cross-family output values. The asset TALPs whose asset cross-family output values match the output values of the proposed asset cross-family structure data are added to the matching asset cross-family in the system's Market Management component 106a. The outputs of these asset TALPs are also pooled and made directly available to the various types of partners or modeled, pooled, and discretized and then made available to the various types of partners. Decreased financial risks and increased financial returns are generated in families and cross-families of funds or portfolios.
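The simulate-then-select flow just described (execute each asset TALP's prediction polynomials on the acceptance-criteria inputs, then keep only the TALPs whose predicted outputs match the acceptance-criteria outputs) can be illustrated with a short sketch. The AssetTALP container, the matching tolerance, and the layout of the criteria below are assumptions made for illustration only; they are not the system's actual data structures.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AssetTALP:
    name: str
    value_poly: Callable[[float], float]   # Value Complexity prediction polynomial
    time_poly: Callable[[float], float]    # Advanced Time Complexity prediction polynomial
    space_poly: Callable[[float], float]   # Advanced Space Complexity prediction polynomial

def simulate(talps: List[AssetTALP], criteria_inputs: List[float]):
    """Asset Simulation: run every prediction polynomial on the acceptance-criteria
    input values and keep the predicted (value, time, memory) triples."""
    return {t.name: [(t.value_poly(x), t.time_poly(x), t.space_poly(x))
                     for x in criteria_inputs]
            for t in talps}

def select(simulated, criteria_outputs, tol=0.05):
    """Asset Selection: keep TALPs whose predicted triples match the
    acceptance-criteria output triples within a tolerance (assumed here)."""
    def close(a, b):
        return abs(a - b) <= tol * max(abs(b), 1.0)
    selected = []
    for name, predicted in simulated.items():
        if all(close(p, c)
               for triple, wanted in zip(predicted, criteria_outputs)
               for p, c in zip(triple, wanted)):
            selected.append(name)
    return selected
```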
Standard investment criteria, such as a fund's underlying venture capital requirements, anticipated risk, native investment units (stocks), and anticipated return on investment, is used as part of the optimization criteria by the rRAV Engine for modeling, allowing for the creation of a set of risk/returns instantiated as a set of Asset TALP-derived investment units called prioritized units. Prioritized units are associated with a set of funds, portfolios, and any bonds or derivatives that are used to leverage the return on investment or the rate of return of the prioritized units. The current invention also allows multiple prioritized units to be automatically temporally chained together, using the sale proceeds at the maturity of a prior prioritized unit to automatically acquire the assets of another prioritized unit. This allows for automatic reinvesting, as well as cashflow generation, prior to the maturity date of the chain. It is possible to construct multiple types of prioritized units and chained prioritized units, each having its own risk/return values and its own minimum and maximum investment level. Since two of the primary distinguishers for different categories of investors are risk/returns and minimum/maximum values, it is now possible to have different categories of investors.1) In order to generate assets as TALPs and asset prediction polynomials, the “Asset as TALPs and Asset Prediction Polynomial Generation” component108of the MTF E&M: Investment Software Embodiment (MTF E&M: IS)100receives Economic Conditions from External Platforms, PE assets, REIT Assets, and VC assets as either paired I/O datasets, algorithms, or software codes from various asset sources. Once an Asset TALP has been generated, its behavior can be enhanced by merging enhancement Asset TALPs118a, from the General Partner120a, with the system-generated Asset TALPs.2) The “Asset Simulation and Selection” component102aof the MTF E&M:IS system100receives the generated Asset TALPs (called Assets) with their associated prediction polynomials and the acceptance criteria (comprised of a set of acceptable input values with associated acceptable output values).a. The “Asset Simulation and Selection” component102aactivates its “Asset Simulation” subcomponent184ausing the list of generated Assets with their associated prediction polynomials from the “Assets as TALPs and Asset Prediction Polynomial Generation” component108and the acceptable input values of the acceptance criteria from the General Partner120a. The various asset prediction polynomials are executed using the acceptable input values, generating a set of output values for each asset.b. The “Asset Simulation and Selection” component102aactivates its “Asset Selection” subcomponent186ausing these acceptable input values paired with their generated predicted output values from the asset simulation184a. These predicted output values are compared to the acceptable output values of the acceptance criteria128a, creating a set of selected assets when the generated predicted output values match the acceptable output values. 
Selected Asset TALPs are either modeled 105a using actual input data values from external platforms 151a for direct use by limited partners 122a or executed using the input values from the Asset TALP Family Selection criteria 150a for inclusion in Asset TALP Families.

3) The "Asset Family Generation" component 104a of the MTF E&M:IS system 100 receives the selected assets with their associated prediction polynomials from the "Asset Simulation and Selection" component 102a and the Fund/Portfolio/Securities family selection criteria 150a (comprised of a set of Fund/Portfolio/Securities family acceptable input values with associated output values for each asset family type) from the System Operator 121a. The prediction polynomials associated with each selected asset are executed using the acceptable Fund/Portfolio/Securities family input values, generating a set of output values that are compared to the acceptable Fund/Portfolio/Securities family output values, for inclusion into the matching asset family.

a. After inclusion in an Asset Family, the prediction polynomials of each asset in each Asset Family are executed using input data from external platform input data sources, generating a pool of output values made available to Limited Partner Categories.

b. Alternatively, after inclusion in an Asset Family, the prediction polynomials of each asset in each Asset Family are modeled in the rRAV Engine 105a using input data from external platform input data sources, then pooled, discretized, and optimized and made available to Limited Partner Categories.

4) The "Asset Cross-Family Generator" component 107a of the MTF E&M:IS system 100 receives the assets with their associated prediction polynomials from the Asset Families of the "Asset Family Generation" component 104a and the Proposed Asset Cross-Family Market Structure 154a (comprised of a set of cross-family acceptable input values with associated output values for each cross-family type) from the Market Maker 156a. The prediction polynomials of each asset in each family are executed using the acceptable cross-family input values, generating a set of output values that are compared to the acceptable cross-family output values, for inclusion into the matching cross-family.

a. After inclusion in an asset cross-family, the prediction polynomials of each Asset in each Asset Cross-Family are executed using input data from external platform input data sources, generating a pool of output values made available to Limited Partner Categories.

b. Alternatively, after inclusion in an Asset Cross-Family, the prediction polynomials of each asset in each Asset Cross-Family are modeled in the rRAV Engine 105a using input data from external platform input data sources, then pooled, discretized, and optimized and made available to Limited Partner Categories.

FIG. 14 is a diagram 300 showing the creation of an extended source values table 302 from PE/REIT/Venture assets treated as TALPs. The sets of monotonic input values of the PE/REIT/Venture assets as TALPs form an input variable attribute vector (x1 through xn), while the sets of associated output values form an output variable attribute vector (v1 through vn). The completion times for the associated input to output transformations of PE/REIT/Venture assets form the timing variable attribute vector (t1 through tn), and the associated memory allocations required to process and store values that are transformed by PE/REIT/Venture assets form the memory allocation variable attribute vector (s1 through sn).
These vectors are shown combined into the extended source values table 302. The vectors are accessed pairwise: input values and output values, input values and timing values, input values and memory allocation. These pairs are used to generate, respectively: Value Complexity, Advanced Time Complexity, and Advanced Space Complexity. The predicted processing time of the current asset can be generated for any valid set of input values so long as the input value is greater than the minimum value used to create the Advanced Time Complexity polynomial. The predicted required memory allocation needed to process the current asset can be generated for any valid set of input values so long as the input value is greater than the minimum value used to create the Advanced Space Complexity polynomial.

FIG. 15 is a diagram 304 showing that multiple assets of a family or cross-family can be processed simultaneously using their associated prediction polynomials. An array of input values is constructed and used to generate either an array of output values from Value Complexity I 306 or a single pooled value from Value Complexity II 308. The input variable attribute array is also used to generate an Advanced Time Complexity value 310 via the use of an Advanced Time Complexity polynomial and an Advanced Space Complexity value 312 via the use of an Advanced Space Complexity polynomial.

FIG. 16 shows a more detailed diagram of Investment Asset polynomials (treated as TALPs) 320 used in asset simulation and asset selection. Asset Simulation and Asset Selection is performed as follows:

Asset Simulation 322
1) The system receives Asset Acceptance Criteria input values, times, and memory allocation from the General Partner.
2) The Asset polynomials (treated as TALPs) are executed using the received Asset Acceptance Criteria values.
3) The output values from the executed Asset polynomials are saved for selection comparison.

Asset Selection 324
1) The system receives Asset Acceptance Criteria output values, timings, and memory allocation from the General Partner.
2) The Asset polynomial's saved output data (values, timing, and memory allocation) from the simulation is compared to the received Asset Acceptance Criteria output values, timings, and memory allocation.
3) The Asset polynomials whose saved simulation output values match the received Asset Acceptance Criteria output values are selected for use in a fund or portfolio.

FIG. 17 shows a detailed diagram 330 using selected asset polynomial output values in the selection of assets for inclusion in an Asset Family 332 when such values are compared against the Asset Family Selection Criteria 334 from the System Operator 121.
1) The system receives the Asset Family Selection Criteria from the System Operator.
2) Selected Asset Family Selection Criteria inputs are used in the execution of the selected Asset polynomials.
3) The output values from the execution of the asset polynomials are compared to the Asset Family Selection Criteria outputs (values, timings, memory allocations) for inclusion in the associated Asset Family.
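As a rough illustration of the FIG. 15 idea (one input array driving many asset prediction polynomials at once), the sketch below reuses the AssetTALP shape from the earlier sketch. Treating the "Value Complexity II" pooling as a simple summation, taking the slowest asset's predicted time as the group's time, and summing predicted memory are assumptions made here for illustration; the specification does not prescribe these particular reductions.

```python
# Hedged sketch of FIG. 15: evaluate many asset prediction polynomials over one
# input array, producing per-asset outputs (Value Complexity I), a pooled value
# (Value Complexity II), and predicted time/memory figures for the group.
def evaluate_family(assets, input_values):
    per_asset = {a.name: [a.value_poly(x) for x in input_values]    # Value Complexity I
                 for a in assets}
    pooled = sum(sum(outs) for outs in per_asset.values())          # Value Complexity II (assumed: sum)
    time_estimate = max(sum(a.time_poly(x) for x in input_values)   # Advanced Time Complexity
                        for a in assets)
    memory_estimate = sum(a.space_poly(x)                           # Advanced Space Complexity
                          for a in assets for x in input_values)
    return per_asset, pooled, time_estimate, memory_estimate
```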
FIG. 18 is a diagram 340 showing the details of Asset Cross-Family generation from FIG. 13 as follows:
1) The system receives Proposed Asset Market Structures from the Market Maker.
2) Proposed Asset Market Structure Inputs are used in the execution of the Asset polynomials of Assets that are within families.
3) The outputs of the executed Asset polynomials are compared to the Proposed Market Structure output values (values, timings, memory allocations) for inclusion in the associated Asset Cross-Family.

FIG. 19 is a diagram 350 detailing the inclusion of selected Asset Family assets into Asset Cross-Families as presented in the FIG. 18 discussion. Assets within Asset Cross-Families can be accessed by various categories of partners.

FIG. 20 shows a diagram with two graphs. The first graph 360 shows the use of advanced time complexity to determine when the execution of a group of Type I chained TALPs, each with different starting times, will complete. Using the Value Complexity polynomial allows the output values of the TALPs to be known for any given time period. The TALPs are chained together in such a way that, regardless of the TALP starting times, all of their ending times are linked. If the TALPs represent investment units, then the output values could be cash flows. In order to ensure that all units complete at the same time, the associated Advanced Time Complexity polynomial is used. To understand how much time needs to be added or subtracted from the chain of linked units requires the use of the associated Advanced Speedup polynomial on an array comprised of all chained assets associated with the chained units. Consider that how much memory to allocate for the units is directly proportional to the number of units currently activated. If at some point in time the number of predicted active units is not what is expected, this indicates a problem with the chain of units. Detecting chained unit problems requires the use of both Advanced Time Complexity and Freeup prediction. It should be noted that Freeup prediction requires the use of Advanced Space Complexity.

The second graph 362 shows the use of linked units combined with Value Complexity polynomials to determine the output of the Type II chain of linked units at any given time period. The linkage of software codes (for example, investment units converted to TALP form) allows for the prediction of software output values (for example, cash flow values). As with the first graph, chained unit error prediction requires the use of Advanced Time Complexity, Advanced Space Complexity, and Freeup.

FIG. 21 shows a graph 370 of three unit Type III chains (from bond analysis software linked with investment unit software). Each linked unit chain completes execution at the same time. In a sense, Type III chained units function like a combination of Type I and Type II unit chains but with bonds converted into algorithmic form. As shown for FIG. 20, chained unit error prediction requires the use of Advanced Time Complexity, Advanced Space Complexity, and Freeup.

FIG. 22 shows two diagrams. The first diagram 380 shows a set of TALP Families used to generate a pooled but unoptimized output dataset. This pooled output dataset is then discretized. The unoptimized, discretized pooled data can now be made available to different user categories. The second diagram 382 shows the same arrangement using a set of funds to generate pooled but unoptimized output data.
The unoptimized, pooled data is then discretized for use by different partner categories (general, senior limited, junior limited, etc.) FIG.23shows two diagrams. The first diagram390shows a set of TALP Families used to generate a pooled, but unoptimized output dataset that is then sent to the Data Discretization Optimization engine. This engine breaks up the pooled dataset, using input dataset values, into groups that are optimized to minimize some values and maximize other values. The optimized discretized data is then ready for distribution to different categories of users (super user, senior, junior, etc.). The second diagram392shows a set of asset Families used to generate a pooled unoptimized investment fund output dataset (returns, risk, interest rates, etc.) that is then sent to the rRAV engine394. This engine breaks up the pooled dataset, using economic conditions, into groups that are optimized to minimize some values and maximize other values. The optimized discretized pooled data is then ready for distribution to different categories of partners (general partner, senior limited partner, junior limited partner, etc.). The rRAV engine394is used by both the Fund and Portfolio Generation and the Market Management components as shown inFIG.13. FIG.24shows two diagrams. The first diagram400shows a TALP Family's pooled, optimized output data re-sent to the Data Discretization Optimization engine. The TALP's pooled output data is re-optimized based on new input data values by comparing the TALP output values to the required output values, eliminating any TALP whose output values decrease values that are to be maximized or increase those values that are to be minimized until either the minimum number of TALP Family types are present and/or the best-valued TALPs are included. The continuously optimized discretized data groups are available for distribution to different categories of users (super user, senior, junior, etc.) The second diagram402shows an Asset Family's pooled, optimized output data sent to the Risk/Return Allocation Vehicle (rRAV) engine394. The Asset Family's pooled output data is re-optimized based on new input data values by comparing the Asset output values to the required output values, eliminating any Asset whose output values decrease values that are to be maximized or increase those values that are to be minimized until either the minimum number of Asset Family types are present and/or the best-valued Assets are included. The pooled output data is composed of investment units or securities (seeFIG.13). Some units or securities maximize certain output values such as returns while others minimize output values like risk. The continuously optimized units are available for distribution to different categories of partners (general partner, senior limited partner, junior limited partner, etc.). FIG.25shows two diagrams. The first diagram410shows the outputs of each TALP within a TALP Family pooled according to the TALP type. The second diagram412shows this pooling using the outputs of each asset of an investment fund or portfolio. The output of all assets in a fund are pooled using information from the various asset types within the fund or portfolio. FIG.26shows a diagram420of a detailed example of a Data Discretization Optimization (DDO) engine. Optimized TALP pool data422and the current input data values424are received by the Modify Input Variable Attribute Values Using Input Data Values software component426. 
This data uses the TALP polynomials with their associated prediction polynomials to predict future TALP pool values in the TALP-based Modeling software component428. These predicted output values are compared to the optimization criteria in the TALP Pool Modification software component430to determine if the TALPs are applicable in the future. The predicted TALP values are also used to select software that are in algorithmic form in the Software Selector component432then sent to the Optimized TALP Output Data component434for further use. FIG.27shows a diagram440of a detailed example of a risk/Return Allocation Vehicle (rRAV) engine394. Optimized Fund or portfolio pooled data and the current economic conditions are received by the Modify Input Variable Attribute Values Using Economic Conditions software component442. This data uses the TALP polynomials, with associated prediction polynomials created for the Fund assets to predict future fund or portfolio values in the TALP-based Modeling software component444. These predicted values are compared to the optimization criteria in the Fund or Portfolio Modification Software component446to determine if Fund or Portfolio asset values are applicable in the future. Unlike the DDO engine shown inFIG.26, the rRAV engine394shows the Software Selection component as Bond Management448and Derivative Management components450where the predicted asset output values are used to select assets sent to the Optimized Investment Units452component for further use. FIG.28shows an example of an rRAV engine work flow460. Various input data sources are entered into the asset TALP that represents fund assets, generating predicted output data (payments, payment timings, principle, interest rates, capital call events, etc.). To optimize the set of pooled combined cashflow input data requires the combined current input data and the following:1) Payment Collection462: Verification, collation, and matching output data to required inputs to ensure that the received data is associated with the correct partner and asset.2) Payment Analysis464: Calculating, predicting, and routing, using the current data combined with predicted data to ensure that the future minimum and maximums for each asset in each fund remains acceptable.3) Payment Distribution466: Method, timing, and notifying to ensure that the associated partners are notified of any predicted deviations in the output data of any asset, either in value or timing. FIG.29shows a diagram470with additional detail for the optimization portion of the rRAV Optimized engine work flow shown inFIG.28.1) Payment Collection472: Identify an asset's payment attributes then verify the settlement of the received payment; collate with concurrent asset payments including unit time and current epoch; and match with predefined parameters.2) Payment Analysis474: Calculate and save records and historical attributes; predict payment expectations and future asset attribute sets; and select the partners that will receive payment and other asset attribute information.3) Payment Distribution476: Select distribution method and distribution timing and send notifications to the correct partner. FIG.30shows an example480of the rRAV engine394discretizing the Fund or Portfolio asset output values into multiple types of Investment Units, with only the cash flow output dataset shown. These units are called prioritized payouts because distribution to a succeeding unit type only occurs after the payout to the preceding Investment Unit type. 
The Senior Limited Partner 484 is shown as having the highest priority, receiving the payout for these units first. The Subordinate Limited Partner 486 (sometimes called the Junior Limited Partner) is paid out next, followed by the General Partner 488.

FIG. 31 shows an example 490 of assets 492 from Asset Families or Asset Cross-Families pooled, discretized, and optimized into Investment Units in the rRAV engine 394 for General Partners, Senior Limited Partners, Junior Limited Partners, and others, using various assets with different percentages allocated to different partnership categories.

FIG. 32 shows a graph 500 of predicted asset values over time, breaking up the asset lifetime into epochs and showing multiple output events (cashflows). Predictions can be generated using the prediction polynomials associated with an asset TALP and viewed by partners for any asset TALP, asset Family, or asset Cross-Family.

Various concepts, systems, and methods of the present invention can include a method and system of software enhancement and management that comprises inputting one or more data transformation algorithms, wherein the one or more data transformation algorithms do not include software application source code, decomposing the one or more data transformation algorithms into a plurality of TALPs, executing the plurality of TALPs using a set of test data to generate associated value complexity prediction polynomials, advanced time complexity prediction polynomials, and advanced space complexity prediction polynomials, simulating TALP behavior by executing the generated, associated prediction polynomials, selecting one or more of the plurality of TALPs based on acceptance criteria, wherein the acceptance criteria includes one or more expected input to output value ranges, one or more expected TALP execution timings, and one or more expected TALP memory allocation requirements, modeling one or more outcomes with actual expected input data values using the value complexity prediction polynomials, the advanced time complexity prediction polynomials, and the advanced space complexity prediction polynomials for each of the selected one or more TALPs, and defining optimum TALP groupings for solution sets.

In various embodiments, the acceptance criteria further includes enhancements. In various embodiments, the method and system further comprises parallelizing the selected one or more of the plurality of TALPs. In various embodiments, the method and system further comprises inputting TALP Family Selection criteria. In various embodiments, the method and system further comprises grouping the selected one or more of the plurality of TALPs into TALP Families based on the TALP Family Selection criteria. In various embodiments, one or more of the grouped TALP families are included in one or more TALP Cross-Families. In various embodiments, the TALP Family Selection criteria is inputted from a System Operator. In various embodiments, the one or more data transformation algorithms include asset algorithms. In various embodiments, the plurality of TALPs are a plurality of asset TALPs. In various embodiments, the acceptance criteria includes asset acceptance criteria used in selecting the one or more of the plurality of TALPs.
In one or more embodiments, a system of the present invention comprises a processor configured to execute program code stored in memory, operatively coupled with the processor, to: input one or more data transformation algorithms, wherein the one or more data transformation algorithms do not include software application source code; decompose the one or more data transformation algorithms into a plurality of TALPs; execute the plurality of TALPs using a set of test data to generate associated value complexity prediction polynomials, advanced time complexity prediction polynomials, and advanced space complexity prediction polynomials; simulate TALP behavior by executing the generated, associated prediction polynomials; select one or more of the plurality of TALPs based on acceptance criteria, wherein the acceptance criteria includes one or more expected input to output value ranges, one or more expected TALP execution timings, and one or more expected TALP memory allocation requirements; model one or more outcomes with actual expected input data values using the value complexity prediction polynomials, the advanced time complexity prediction polynomials, and the advanced space complexity prediction polynomials for each of the selected one or more TALPs; and define optimum TALP groupings for solution sets.

While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described embodiments or examples. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.

It is understood that any specific order or hierarchy of steps in any disclosed process is an example of a sample approach. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order and are not meant to be limited to the specific order or hierarchy presented.

While the present invention has been described in connection with various aspects and examples, it will be understood that the present invention is capable of further modifications. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention, and including such departures from the present disclosure as come within the known and customary practice within the art to which the invention pertains. It will be readily apparent to those of ordinary skill in the art that many modifications and equivalent arrangements can be made thereof without departing from the spirit and scope of the present disclosure, such scope to be accorded the broadest interpretation of the appended claims so as to encompass all equivalent structures and products. For purposes of interpreting the claims for the present invention, it is expressly intended that the provisions of 35 U.S.C. § 112(f) are not to be invoked unless the specific terms "means for" or "step for" are recited in a claim.
57,752
11861337
DETAILED DESCRIPTION OF REPRESENTATIVE EMBODIMENTS While the present invention is susceptible of embodiment in many different forms, there are shown in the drawings and will be described herein in detail specific exemplary embodiments thereof, with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated. In this respect, before explaining at least one embodiment consistent with the present invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of components set forth above and below, illustrated in the drawings, or as described in the examples. Methods and apparatuses consistent with the present invention are capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein, as well as the abstract included below, are for the purposes of description and should not be regarded as limiting. As an example and without limitation, a representative computing architecture is illustrated and discussed below with reference toFIGS.1-10, as a background to illustrate the operation of the representative compiler400,500, with a compiler400referring to the apparatus or system which implements the compiler or compilation method500, which are collectively referred to as the representative compiler400,500. It should be understood that the representative compiler400,500may be operative with and applied to innumerable computing or other circuit architectures, and all such variations are considered equivalent and within the scope of the disclosure. The structure and operation of the representative compiler400,500is discussed below with reference toFIGS.11-21. The representative compiler400,500supports DL accelerators with custom ISA and the supporting software elements to execute a model defined in a high-level framework. This representative compiler400,500also employs optimization methods to achieve improved performance on an inference engine discussed below. The representative compiler400,500supports the open neural network exchange (“ONNX”) interchange format, allowing it to parse models from different frameworks, for example and without limitation. An intermediate representation was created to represent low level operations that matches with the hardware accelerator's capabilities. Multi-level optimizations were created to minimize memory bandwidth and maximize performance. The representative compiler400,500correctly generated custom instructions for various DNN models trained for face identification, image segmentation, style transfer, speech identification and speech command. The generated code was also benchmarked on different FPGA systems. Systems with 256-2048 processing units were evaluated. I. A Representative Computing Architecture: FIG.1is a block diagram of a representative embodiment of an inference engine circuit architecture (or system)50comprising a matrix-matrix (MM) processor circuit200and one or more matrix-matrix (MM) accelerator circuits100. The inference engine circuit architecture50, as a system, further comprises a memory interface60for read (load) and write (store) access to a memory circuit25, which access may be through a memory controller40optionally. 
As another option, the inference engine circuit architecture50may also comprise a general purpose processor75, which may be any type or kind of processor as described in greater detail below, such as a microprocessor, for performing computations and/or executing control code which are not being performed or executed by the MM processor circuit200and the one or more MM accelerator circuits100. The inference engine circuit architecture50also typically includes a communication interface (or other input-output interface)45, described in greater detail below, for communication between the inference engine circuit architecture50and other, typically off-chip components which may be part of a larger system or board15(e.g., a rack-mounted board of a server, for example and without limitation), such as the memory circuit25(via a communication bus20, which may have any type of kind of communication bus structure, for example and without limitation). In a representative embodiment, the communication interface45includes the functionality of the memory interface60, or vice-versa, and only one such memory interface60or communication interface (or other input-output interface)45is included in the inference engine circuit architecture (or system)50. In a representative embodiment, the inference engine circuit architecture50is embodied as an integrated circuit, such as an application specific integrated circuit (“ASIC”) or a field programmable gate array (“FPGAs”). A memory controller40optionally may be included as part of the inference engine circuit architecture50or system15. The memory circuit25is typically embodied as a separate IC. Depending upon the embodiment, the inference engine circuit architecture50may also be coupled to additional processors75, such as via the communication bus20. The MM processor circuit200, the one or more MM accelerator circuits100, the memory interface60and the processor75are typically coupled to each other over a first, data distribution network80(and/or first, data distribution network80A, illustrated in and discussed below with reference toFIG.4). The first, data distribution network80,80A is for data transfer and other communication, such as for data transfer from the memory circuit25through the communication interface45, such as for reading and obtaining maps and kernel data, and for data transfer from the MM processor circuit200and one or more MM accelerator circuits100for storage in the memory circuit25, also via the communication interface45. The first, data distribution network80is also utilized for data transfer between or among the one or more of MM accelerator circuits100, and optionally for data transfer between or among the MM processor circuit200and the one or more MM accelerator circuits100. The first, data distribution network80,80A can implement one or more priority protocols to determine which data is transferred when and in what order, such as a round-robin protocol or a hierarchical priority in which various components have a higher priority for data transmission or reception, such as providing a higher priority to the MAC circuits190, followed by a MAX circuit130as next in priority, followed by a load operation from the memory circuit25as next in priority, etc., and any and all such variations are considered equivalent and within the scope of the disclosure. 
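To make the hierarchical-priority example above concrete, the small sketch below grants the bus to the highest-priority pending requester (MAC circuits 190 ahead of the MAX circuit 130, ahead of loads from the memory circuit 25). The request representation and function names are purely illustrative, and the disclosure leaves the protocol open (a round-robin protocol is equally valid).

```python
# Illustrative fixed-priority arbitration for the first, data distribution network.
PRIORITY = {"mac": 0, "max": 1, "mem_load": 2}   # lower number wins

def grant(requests):
    """Return the pending request that should get the bus this cycle, or None."""
    pending = [r for r in requests if r["pending"]]
    return min(pending, key=lambda r: PRIORITY[r["source"]], default=None)

# Example: a MAC request wins over a simultaneous memory-load request.
winner = grant([{"source": "mem_load", "pending": True},
                {"source": "mac", "pending": True}])
```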
In a representative embodiment, the MM processor circuit200optionally may also have a separate control bus or network195for data and control communication with the one or more MM accelerator circuits100. In another representative embodiment, the first, data distribution network80also may be combined with the second, data access network110, discussed below, as a first, data distribution network80A illustrated inFIG.4. When a separate control bus195is not implemented, control information also may be distributed via the first, data distribution network80and the second, data access network110and/or first, data distribution network80A, any and all such variations are considered equivalent and within the scope of the disclosure. FIG.2is a schematic diagram of three-dimensional volume of maps or kernel data128used in a representative embodiment of an inference engine circuit architecture50. The inference engine circuit architecture50implements a “tensor” construct, in which data operation may be performed across a three-dimensional volume of non-contiguous data, and not merely a series of adjacent vectors of data. The tensor decoder250of the MM processor circuit200and/or MM accelerator circuits100can process a single instruction to read such a volume of data, tensor144, given a start address132, a depth134(for obtaining data in a “depth first” pattern), a stride138from a second or end address136of the first data plane151to the next non-contiguous data start address140of the second or next data plane152, and continuing through the last or end address142, bringing in multiple planes151,152, and153of maps or kernel data of the selected tensor144, as illustrated inFIG.2. It should be noted that the stride for proceeding from address146to148is the same as the stride138. This use of tensor data is helpful in diminishing or minimizing the bandwidth requirements for obtaining data from the memory circuit25, for example. As illustrated inFIG.2, the length of the tensor144comprises the combined lengths of all of the data in each such data plane, i.e., the length of all of the data vectors from start address132to the last or end address136of the first data plane151of the tensor144, plus the length of all of the data vectors from start address140to the last or end address146of the second data plane152of the tensor144, plus the length of all of the data vectors from start address148to the last or end address142of the third data plane153of the tensor144. FIG.3is a block diagram of a representative first embodiment of a MM accelerator circuit100having a plurality of matrix-vector (MV) accelerator circuits115and at least one MAX circuit130.FIG.4is a block diagram of a representative second embodiment of a MM accelerator circuit100A having a plurality of MV accelerator circuits and at least one MAX circuit130. The first and second embodiments of the MM accelerator circuit100,100A differ only insofar as in the MM accelerator circuit100A, the functionality of the first, data distribution network80and the functionality of the second, data access network110have been combined into a singular first, data distribution network80A, which functions identically to the separate first, data distribution network80as described above combined with the functionality of the second, data access network110described below. 
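As a rough illustration of the FIG. 2 tensor read described above (a start address, a per-plane extent, and a stride that hops from the end of one plane to the start of the next), the following sketch walks a flat memory array. The descriptor fields and the treatment of the stride as an end-to-start gap are assumptions drawn from the figure description, not a register-level specification.

```python
# Minimal sketch of a FIG. 2-style tensor fetch over non-contiguous planes.
# `memory` is a flat list standing in for the memory circuit 25.
def read_tensor(memory, start, plane_length, stride, num_planes):
    words = []
    address = start
    for _ in range(num_planes):
        words.extend(memory[address:address + plane_length])  # one contiguous plane
        address += plane_length + stride    # hop across the gap to the next plane
    return words

# Example: three planes of 4 words separated by gaps of 2 words.
demo = read_tensor(list(range(32)), start=1, plane_length=4, stride=2, num_planes=3)
```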
Alternatively, also illustrated inFIGS.3and4, an MM accelerator circuit100B can also be considered to comprise merely an array102of a plurality of MV accelerator circuits115, as a third embodiment, and if so, then the inference engine circuit architecture50will be considered to comprise and include the other components (105,185,110or80A,205,120,125,130) illustrated inFIGS.3and4for MM accelerator circuits100,100A. As illustrated inFIGS.3and4, the MM accelerator circuit100includes a MAX circuit130and a number “N” of MV accelerator circuits115, illustrated as MV accelerator circuit1151, MV accelerator circuit1152, through MV accelerator circuit115N. Each MM accelerator circuit100is structured or adapted to perform a complete multiplication of a first matrix (“A”) by another, second matrix (“B”) (C=A*B), using a plurality of MV accelerator circuits115, each of which is structured or adapted to perform a complete multiplication of the first matrix by one of the vectors comprising the second matrix (c=A*b). The MAX circuit130is comprised of a plurality of comparators280(illustrated inFIG.9), and is utilized to obtain a maximum of the operands being compared, such as to implement a maxpooling operation of a CNN as part of the functions of the MM accelerator circuit100. The MV accelerator circuits115are coupled to each other and to a maps (operand data) buffer (or other memory or registers)105over the second, data access network110(or first, data distribution network80A), and each MV accelerator circuit115has access to all of the maps data stored in the maps buffer105. The maps buffer105may be implemented as any type of buffer, memory or registers, and stores maps data, which may have been obtained from the memory circuit25or which may have other maps data which has been stored as results following computations from the various MAC circuits190and/or MAX circuit130. The maps buffer105is typically divided into separate banks (not separately illustrated), which may be accessed separately by the physical links of the second, data access network110(or first, data distribution network80A). In a representative embodiment, the maps buffer105implements double-buffering, illustrated as maps buffer105A and maps buffer105B, so data may be pre-fetched or written into a first part of the maps buffer105, as maps buffer105A, while data is being read from a second part of the maps buffer105, as maps buffer105B, and vice-versa, reducing latency in obtaining maps data for computations or other processing. In such an embodiment, the MV accelerator circuits115will alternate or “ping-pong” between the maps buffer105A and the maps buffer105B of the double-buffered maps buffer105to obtain or store the corresponding maps data. In addition, the second, data access network110(operating at a double data rate (DDR)) typically also performs the functions described above for the first, data distribution network80, also functioning as a data highway, implementing priority protocols, controlling the data load and bandwidth, gathering data to write to one or more data vectors to the memory circuit25, and controlling read and write operations from and to the maps buffer105and kernel buffers125. 
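The ping-pong behavior of the double-buffered maps buffer 105 described above can be sketched as follows. The `fetch_tile` and `compute_on` callables are hypothetical placeholders, and the sketch runs sequentially; in the hardware, the prefetch into one half proceeds concurrently with computation out of the other half.

```python
# Hedged sketch of double buffering: compute reads one half (105A or 105B)
# while the next tile of maps data is written into the other half.
def run_double_buffered(tiles, fetch_tile, compute_on):
    halves = [None, None]                      # stands in for maps buffer 105A / 105B
    if not tiles:
        return
    halves[0] = fetch_tile(tiles[0])           # prime the first half
    for i in range(len(tiles)):
        if i + 1 < len(tiles):
            halves[(i + 1) % 2] = fetch_tile(tiles[i + 1])  # prefetch into the idle half
        compute_on(halves[i % 2])              # consume the half that is ready
```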
Each of the MV accelerator circuits 115 is also coupled, via a third, serial network 120, to a separate kernel (weights) buffer (or other memory or registers) 125, illustrated as kernel buffer 1251 coupled via third, serial network 1201 to the MV accelerator circuit 1151, kernel buffer 1252 coupled via third, serial network 1202 to the MV accelerator circuit 1152, through kernel buffer 125N coupled via third, serial network 120N to the MV accelerator circuit 115N. The third, serial network 120 is discussed in greater detail below with reference to FIG. 5. The kernel buffers 125 may be implemented as any type of buffer, memory, or registers, and each stores kernel (weights) data, which may have been obtained from the memory circuit 25. The kernel buffers 125 also may implement double-buffering, as previously described, so kernel (weights) data may be pre-fetched or written into a first part of the kernel buffer 125, as kernel buffer 125A, while kernel (weights) data is being read from a second part of the kernel buffer 125, as kernel buffer 125B, and vice-versa, also reducing latency in obtaining kernel (weights) data for computation and other processing (illustrated as kernel buffer 125A1, kernel buffer 125B1, kernel buffer 125A2, kernel buffer 125B2, through kernel buffer 125AN and kernel buffer 125BN). For this representative embodiment, each MV accelerator circuit 115 has access only to the kernel (weights) data of the corresponding kernel buffer 125 to which it is coupled via the third, serial network 120, e.g., MV accelerator circuit 1151 accesses only the kernel data in the corresponding kernel buffer 1251 via the corresponding third, serial network 1201. In a first mode, each of the kernel buffers 125 contains different kernel (weights) data for use in the various computations (using the same maps data), while in a second mode, each of the kernel buffers 125 contains the same kernel (weights) data for use in the various computations (using different maps data or different parts of the same maps data). As a result, in the multiplication process performed by the MV accelerator circuits 115, all of the relevant maps data of the maps buffer 105 will be multiplied by all of the different kernel (weights) data held in the kernel buffers 125, without repeating a computation, for example, by not repeating a multiplication of maps data with the same kernel data in an MV accelerator circuit 115 that has already been utilized by another MV accelerator circuit 115.

Higher performance of the inference engine circuit architecture 50 has been achieved using the maps buffer 105, which is global to an MV accelerator circuit 115, and separate kernel buffers 125, each of which is specific to a single MV accelerator circuit 115 (including its array of VV accelerator circuits 150). In a representative embodiment, the second, data access network 110 comprises a crossbar switch 205 together with any selected bus structure, which may be embodied in any of a wide variety of ways (e.g., as a folded CLOS configuration, for example and without limitation), any and all of which are considered equivalent and within the scope of the disclosure. The second, data access network 110 and/or first, data distribution network 80A provides for each vector-vector (VV) accelerator circuit 150 of each MV accelerator circuit 115 to have complete read (load) and write (store) access to all of the maps data in the entire maps buffer 105.
For example, each VV accelerator circuit 150 of each MV accelerator circuit 115 has its own physical links to the memory banks of the maps buffer 105, via the second, data access network 110 (or first, data distribution network 80A). To avoid conflict, a maps buffer arbiter circuit 185 is included, either as a separate component or within the second, data access network 110 and/or first, data distribution network 80A, or alternatively within the maps buffer 105, for example and without limitation. As mentioned above, in the second representative embodiment, the first, data distribution network 80A includes the functionality of the second, data access network 110, such that the crossbar switch 205 is implemented between all of the VV accelerator circuits 150 and the maps buffer 105, and that the maps buffer arbiter circuit 185 is included (e.g., separately, or within the first, data distribution network 80A, or within the maps buffer 105, or within the MM accelerator circuit 100, 100A more generally, for example and without limitation), and any and all such variations are considered equivalent and within the scope of the disclosure. It should also be noted that when a first, data distribution network 80A is implemented, combining the functionality of the first, data distribution network 80 and the second, data access network 110 into the first, data distribution network 80A, then the third, serial network 120 may be referred to equivalently as a second, serial network 120 or more simply as a serial network 120.

Each of the MV accelerator circuits 115 is also coupled to the MM processor circuit 200, to receive control information, such as to determine the operational mode of the MV accelerator circuit 115 and/or VV accelerator circuits 150. As discussed in greater detail below, in a first embodiment, the MM processor circuit 200 includes a tensor decoder circuit 250. In a second representative embodiment, illustrated in FIG. 4, each MM accelerator circuit 100 includes a separate tensor decoder circuit 250 (which is then no longer included within the MM processor circuit 200), such that multiple tensor decoder circuits 250 may be implemented and distributed within the inference engine circuit architecture 50, for example and without limitation. In another representative embodiment, not separately illustrated, other components of the MM processor circuit 200 (such as an operand collector 260 or a mode control circuit 255 discussed below) also may be duplicated and distributed within the inference engine circuit architecture 50, such that each MM accelerator circuit 100 includes a separate such component (which is then no longer included within the MM processor circuit 200), also for example and without limitation. Any and all such variations are considered equivalent and within the scope of the disclosure.

It should also be noted that for any of these various first and second (or third) embodiments of the MM accelerator circuit 100, 100A, the various components may be clustered or included, or not included, in a wide variety of ways, each of which is considered equivalent and within the scope of the disclosure. For example and without limitation, as another alternative, an MM accelerator circuit 100, 100A may be considered to comprise an array of the plurality of MV accelerator circuits 115 and the at least one MAX circuit 130, with all other components (such as the maps buffer 105, the kernel buffers 125, the second, data access network 110, the tensor decoder circuit 250, etc.) considered to be part of a larger "cluster" circuit configuration such as the system 50.
Also for example and without limitation, as another alternative, an MV accelerator circuit 115 may be considered to comprise an array of a plurality of VV accelerator circuits 150 and the at least one MAX circuit 130, with all other components (such as the maps buffer 105, the kernel buffers 125, the second, data access network 110, the tensor decoder circuit 250, etc.) considered to be part of a larger "cluster" circuit configuration such as the MM accelerator circuit 100. Any and all of these various combinations and permutations are considered equivalent and within the scope of the disclosure.

FIG. 5 is a block diagram of a representative embodiment of a matrix-vector (MV) accelerator circuit 115 having an array 104 of a plurality of vector-vector (VV) accelerator circuits 150, illustrated as VV accelerator circuit 1501, VV accelerator circuit 1502, VV accelerator circuit 1503, through VV accelerator circuit 150N. The VV accelerator circuits 150 are coupled to each other and to a maps (operand data) buffer (or other memory or registers) 105 over the second, data access network 110 and/or first, data distribution network 80A, and each VV accelerator circuit 150 has access to all of the maps data stored in the maps buffer 105, as described above. Each MM accelerator circuit 100 is structured or adapted to perform a complete multiplication of a first matrix by another, second matrix to produce a resulting matrix "C" (C=A*B), using a plurality of MV accelerator circuits 115; each MV accelerator circuit 115 of a selected MM accelerator circuit 100 is structured or adapted to perform a complete multiplication of the first matrix by one of the vectors comprising the second matrix (c=A*b); and, in turn, each VV accelerator circuit 150 of a selected MV accelerator circuit 115 is structured or adapted to perform a complete multiplication of one of the vectors comprising the first matrix by one of the vectors comprising the second matrix (c=a*b). In addition to performing a vector multiplication, the array of VV accelerator circuits 150 (forming an MV accelerator circuit 115) can implement any numerical function via piecewise linear approximation and also provide counting, pointers, and group operations such as averaging and computation of least and most values. These VV accelerator circuits 150 provide all the atomic operations needed to compute virtually any neural network layer, for example and without limitation. Each of the VV accelerator circuits 150 of a single MV accelerator circuit 115 is also coupled, via a corresponding third, serial network 120, to a corresponding kernel (weights) buffer (or other memory or registers) 125, as mentioned above. The kernel (weights) data from the kernel (weights) buffer 125 is transferred to a first VV accelerator circuit 1501, then sequentially transferred to the next VV accelerator circuit 1502, such as by using a buffer (driver, amplifier, or other data transfer) circuit 285 (for example and without limitation), and so on, with the kernel (weights) data propagating within a few clock cycles over the third, serial network 120 to all of the VV accelerator circuits 150 of the given MV accelerator circuit 115.
This use of the third, serial network120significantly increases efficiency, without any significant increase in latency, and further reduces the size (area), power consumption, and fanout of the bus structure forming the third, serial network120, particularly compared to any network or bus structure which would provide the kernel data in parallel to all of the VV accelerator circuits150of an MV accelerator circuit115or entire MM accelerator circuit100. As mentioned above, when a first, data distribution network80A is implemented, the third, serial network120may be referred to equivalently as a second, serial network120or more simply as a serial network120. It is irrelevant if any of these communication networks or buses is referred to as first, second, or third, for example. Rather, what is significant is that a separate serial network120is utilized to couple each VV accelerator circuit150to a corresponding kernel (weights) buffer125, while a more global network (the first, data distribution network80and the second, data access network110or the first, data distribution network80A) is utilized to couple the VV accelerator circuits150to the maps buffer105for more global sharing of maps data. Each of the VV accelerator circuits150is also coupled to the MM processor circuit200or tensor decoder circuit250to receive control information, such as to determine the operational mode of the VV accelerator circuit150, using a mode control word, such as via the control bus195as illustrated or via the first, data distribution network80,80A and/or the second, data access network110. As discussed in greater detail below, these operational modes include an independent mode, a cooperative mode, and several combined or blended cooperative and independent modes, each of which generates different sequences of outputs from each VV accelerator circuit150. FIG.6is a block diagram of a representative embodiment of a VV accelerator circuit150having an array106of a plurality of multiply and accumulate (MAC) circuits190, illustrated as MAC circuit1901, MAC circuit1902, through MAC circuit190N. Each MAC circuit190comprises a multiplier145, a first adder155, and optionally a first control multiplexer (“MUX”)160. The multiplier145multiplies map data (as a first word) from the maps buffer105by corresponding kernel (weight) data (as a second word) from the corresponding kernel buffer125, and provides as its output an intermediate, multiplicative product as a first input (156) to the first adder155. The first adder155, in turn, will add that intermediate, multiplicative product to a second input (158), which second input (158) is the output from the first control MUX160, and provide that resulting sum as a first (or next) accumulation sum. The first control MUX160receives, as a first input (162), a first bias parameter (which may be from a register165(as illustrated) or provided via the second, data access network110and/or first, data distribution network80A (not separately illustrated)), and as a second input (164), feedback of the first (or next) accumulation sum. The first bias parameter, for example, may be a parameter or other value (e.g., a constant or a variable value) utilized to normalize the resulting data. The feedback of the first (or next) accumulation sum is provided for the first adder155to perform an ongoing accumulation, adding the first (or next) accumulation sum to the current multiplicative product received from the multiplier145, to produce a next accumulation sum. 
Under the control of a first mode control word provided by the tensor decoder circuit250and/or the MM processor circuit200, one of these two inputs (162or164), namely, the first bias parameter or the first (or next) accumulation sum, is selected by the first control MUX160as the second input (158) into the first adder155, which adds it to the current multiplicative product received from the multiplier145to produce a next accumulation sum, which is output (166) to a shift register170. Alternatively, in the event that nothing is to be added to the current multiplicative product by the first adder155, the first bias parameter or the second input (158) can be set to zero and selected as the second input into the first adder155. Also alternatively, in the event a bias parameter is not to be selectively added to the current multiplicative product by the first adder155, optionally the first control MUX160may be omitted, with the feedback of the first (or next) accumulation sum provided directly as an input to the first adder155for ongoing accumulation. The shift register170receives such accumulation sum outputs (166) from each of the MAC circuits190, and sequentially shifts these outputs to provide them as one or more first inputs (172) to a second, reduction adder175. The reduction adder175, in turn, will add that accumulation sum output from a MAC circuit190(provided via shift register170) to a second input (174) to the reduction adder175, which second input (174) is the output from a second control multiplexer (“MUX”)180, to provide a second (or next) accumulation sum. The second control MUX180also receives, as a first input (176), a second bias parameter (which may be from a register168(as illustrated) or provided via the second, data access network110and/or first, data distribution network80A (not separately illustrated)), and as a second input (178), feedback of the second (or next) accumulation sum. The second bias parameter, for example, may be a parameter or other value (e.g., a constant or a variable value) utilized to normalize the resulting data, and may be the same as or different from the first bias parameter. The feedback of the second (or next) accumulation sum is provided for the reduction adder175to perform an ongoing accumulation, such as across all or some of the MAC circuits190, adding the second (or next) accumulation sum to the current output from a MAC circuit190provided via the shift register170. Under the control of a second mode control word provided by the tensor decoder circuit250and/or the MM processor circuit200, one of these two inputs (176,178), namely, the second bias parameter or the second (or next) accumulation value, is selected by the second control MUX180as the second input into the reduction adder175. The reduction adder175adds the selected second input to the current accumulation sum received from the shift register170to produce the second (or next) accumulation sum, which can be provided as an output184from the VV accelerator circuit150(to the second, data access network110and/or first, data distribution network80A (such as for storage in memory25or the maps buffer105), or to another linear or nonlinear computation circuit182, or to another MAC circuit190or to the MAX circuit130(such as for maxpooling), for example and without limitation) or which can be fed back through the second control MUX180for ongoing accumulation to produce a next accumulation sum. 
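As an informal illustration of the data path just described, the following Python sketch models, at a purely behavioral level, one VV accelerator circuit 150 built from several MAC circuits 190 (multiplier 145, first adder 155 and first control MUX 160), a shift register 170, and a reduction adder 175 with second control MUX 180, operating in a cooperative mode. It is a minimal sketch under simplifying assumptions (one pass over the data, no timing or stalling), not a cycle-accurate model of the circuit, and all names are illustrative.

def mac_circuit_190(map_values, kernel_values, first_bias=0, add_bias=True):
    # One MAC circuit 190: the multiplier 145 forms the product; the first adder 155 adds
    # either the first bias parameter (first cycle, via the first control MUX 160) or the
    # fed-back accumulation sum (later cycles), producing a first (or next) accumulation sum.
    acc = 0
    for cycle, (m, k) in enumerate(zip(map_values, kernel_values)):
        product = m * k
        second_input = first_bias if (cycle == 0 and add_bias) else acc
        acc = product + second_input
    return acc

def vv_accelerator_150(per_mac_maps, per_mac_kernels, second_bias=0):
    # Cooperative mode: the shift register 170 streams each MAC's accumulation sum to the
    # reduction adder 175, whose second control MUX 180 selects the second bias parameter
    # on the first cycle and the fed-back running sum afterwards, giving a single output.
    shift_register_170 = [mac_circuit_190(m, k)
                          for m, k in zip(per_mac_maps, per_mac_kernels)]
    total = second_bias
    for partial in shift_register_170:
        total += partial
    return total

if __name__ == "__main__":
    maps = [[1, 2], [3, 4]]
    kernels = [[5, 6], [7, 8]]
    print(vv_accelerator_150(maps, kernels))   # (1*5 + 2*6) + (3*7 + 4*8) = 70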
Alternatively, in the event that nothing is to be added to the accumulation sum output from a MAC circuit190, either the second bias parameter or the second input (174) can be set to zero and selected as the second input into the reduction adder175. Those having skill in the art will recognize that as multiple outputs are provided, multiple multiply and accumulation operations can be performed successively and iteratively across such outputs, eventually obtaining the complete multiplication of a matrix by another matrix. It should be noted that in representative embodiments, at least one of the first control multiplexers (MUXes)160or the second control multiplexer (MUX)180are included in the VV accelerator circuit150. For example, the first control MUX160may be optional, and may be included or not included in the MAC circuit190depending upon the granularity of the control selected or desired to be implemented, as described in greater detail below. Also for example, when the first control MUX160is included, the second control MUX180may be optional, and vice-versa. It should also be noted that depending upon the description and ordering within the description, a second control MUX180may be referred to as a first control MUX180and a first control MUX160may be referred to as a second control MUX160. For example, when the first control MUX160is optional, the second control MUX180is then typically referred to as a first control MUX180or more simply as a control MUX180. Whether either of these multiplexers160,180is referred to as first or second is largely irrelevant; rather, what is important is that a control MUX180is coupled to the reduction adder175, while a control MUX160is coupled to a first adder155, as an option. In another representative embodiment, both the first control multiplexers (MUXes)160and the second control multiplexer (MUX)180are included in the VV accelerator circuit150. For such a representative embodiment, each of the first control multiplexers160and the second control multiplexer180of each of the VV accelerator circuits150are also coupled to the tensor decoder circuit250and/or the MM processor circuit200to receive, respectively, first and second mode control words, which determine the operational mode of the VV accelerator circuit150. In another representative embodiment, only the control MUX180is included in the VV accelerator circuit150. For any of these various embodiments, these operational modes of the VV accelerator circuit150may include: (1) An independent mode, in which each of the MAC circuits190provides a corresponding first (or next) accumulation sum as a complete result to the shift register170, each of which may then be modified by the reduction adder175to the extent of adding in a second bias parameter (or not, if such a bias parameter was already added as the first bias parameter by the first adder155, or if no bias parameter is being included). In a representative example, without limitation, when sixteen MAC circuits190are implemented in a VV accelerator circuit150, such a second (or first) mode control word may be [1111111111111111], for example and without limitation, such as selecting the addition by the reduction adder175of the second bias parameter to each first (or next) accumulation sum, which are then provided as sixteen successive outputs from the VV accelerator circuit150. (2) A cooperative mode, in which each of the MAC circuits190provides a first (or next) accumulation sum as a corresponding partial result to the shift register170. 
In a first cycle of the reduction adder175, the first such partial result is added to a second bias parameter (if any) to provide a second (or next) accumulation sum, and in successive cycles, each successive second (or next) accumulation sum is selected for feedback through the second control multiplexer (MUX)180, and is then successively added by the reduction adder175to the current second (or next) accumulation sum provided by the shift register170, thereby providing a single overall accumulation result as a single output from the VV accelerator circuit150. Also in a representative example, without limitation, when sixteen MAC circuits190are implemented in a VV accelerator circuit150, such a second mode control word may be [1000000000000000], selecting the addition by the reduction adder175of the second bias parameter to the first (or next) accumulation sum in the first cycle, followed by selecting successive feedback of the second (or next) accumulation sum, which is then successively added by the reduction adder175to the current second (or next) accumulation sum provided by the shift register170, resulting in a single overall accumulation result provided as a single output from the VV accelerator circuit150. (3) Any of several combined, blended or intermediate cooperative and independent modes, each of which generates different sequences of outputs from each VV accelerator circuit150with varying degrees of accumulation. Continuing with the example, without limitation, when sixteen MAC circuits190are implemented in a VV accelerator circuit150, different groups of MAC circuits190may be selected to operate in a cooperative mode, independently of other groups of MAC circuits190which are each collectively operating in their own cooperative modes, allowing for selection of:
(A) a single output selected (a fully cooperative mode, with a second (or first) mode control word being, for example and without limitation, [1000000000000000]), as discussed above;
(B) 16 separate outputs selected (a fully independent mode, with a second (or first) mode control word being, for example and without limitation, [1111111111111111]);
(C) two outputs selected (a first intermediate cooperative and independent mode) in which two groups of eight MAC circuits190are selected, functioning in a cooperative mode within a group, but with each group itself operating independently from the other group (with a second mode control word being, for example and without limitation, [1000000010000000]);
(D) four outputs selected (a second intermediate cooperative and independent mode) in which four groups of four MAC circuits190are selected, also functioning in a cooperative mode within a group, but with each group itself operating independently from the other groups (with a second mode control word being, for example and without limitation, [1000100010001000]); and
(E) eight outputs selected (a third intermediate cooperative and independent mode) in which eight groups of two MAC circuits190are selected, also functioning in a cooperative mode within a group, but with each group itself operating independently from the other groups (with a second mode control word being, for example, [1010101010101010]).
Those having skill in the art will recognize that other combinations of combined, blended or intermediate cooperative and independent modes are available, depending upon the number of MAC circuits190which are implemented.
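As an informal illustration only, the following Python sketch interprets a sixteen-bit mode control word in a way that is consistent with the examples above: each 1 bit marks the cycle on which the second control MUX 180 selects the bias (starting a new accumulation group), and each 0 bit marks a cycle on which the fed-back running sum is selected (continuing the group). Under that reading, [1000000000000000] yields one output, [1111111111111111] yields sixteen, and [1000000010000000] yields two. The exact hardware control encoding is as described in the text; this sketch is only an assumed, simplified model of it.

def reduce_with_mode_word(mac_sums, mode_word, second_bias=0):
    # mac_sums:  the sixteen first (or next) accumulation sums from the MAC circuits 190
    # mode_word: sixteen bits; a 1 starts a new accumulation group, a 0 continues it
    outputs, acc, started = [], 0, False
    for bit, value in zip(mode_word, mac_sums):
        if bit == 1:
            if started:
                outputs.append(acc)          # emit the previous group's result
            acc, started = second_bias + value, True
        else:
            acc += value                     # the reduction adder 175 keeps accumulating
    if started:
        outputs.append(acc)
    return outputs

mac_sums = list(range(1, 17))                # example sums from sixteen MAC circuits
print(reduce_with_mode_word(mac_sums, [1] + [0] * 15))                 # cooperative: [136]
print(len(reduce_with_mode_word(mac_sums, [1] * 16)))                  # independent: 16
print(reduce_with_mode_word(mac_sums, [1] + [0] * 7 + [1] + [0] * 7))  # blended: [36, 100]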
Those having skill in the art will also recognize that for any of these operational modes, any bias parameter may be added in separately by the first adder155. In addition, further control is provided using the first control MUX160of the MAC circuits190, each of which can also be selected for feedback of a first (or next) accumulation sum for addition to the multiplicative product, also providing for significant accumulation by each MAC circuit190, e.g., over hundreds to thousands of instructions or cycles. This is especially helpful in preserving high resolution, as some bits and resolution may be lost if intermediate results have to be stored to memory25(e.g., reducing accumulation at 32 bits to storage of only 16 bits). Any and all of these selections, implementations or combinations are considered equivalent and within the scope of the disclosure. In addition, using these different operational modes, different kinds of parallelism may be exploited, and in different combinations. For example, fully independent modes and intermediate cooperative and independent modes may be more efficient for parallelism across kernels (inter-kernel parallelism). The MM accelerator circuit100architecture of the inference engine circuit architecture50allows for complete exploitation of any type of relevant parallelism. For example, the MAC circuits190of a VV accelerator circuit150may share both maps data (intra-map parallelism) and kernel data (intra-activation parallelism), for providing one output in a cooperative mode. Also for example, the MAC circuits190of a VV accelerator circuit150may share maps data (intra-map parallelism) and utilize different kernel data (inter-activation parallelism), for providing multiple outputs in an independent mode. Also for example, the MAC circuits190of a VV accelerator circuit150may utilize different maps data and share kernel data, also for providing multiple outputs in an independent mode. Also for example, the MAC circuits190of a VV accelerator circuit150may share different parts of the maps data (e.g., the same pixel position in different maps layers) and utilize different kernel data, also for providing one output in a cooperative mode. All of these types of exploitation of parallelism are possible with the inference engine circuit architecture50. Referring again toFIGS.3and4, the MM accelerator circuit100, or the maps buffer105, or the second, data access network110(or first, data distribution network80A) includes a maps buffer arbiter circuit185. The maps buffer arbiter circuit185provides the request-response mechanism for the maps buffer105. The maps buffer arbiter circuit185receives addresses from the operand collector260(discussed below) and the various decoders of the tensor decoder circuit250. Using the received address, the maps buffer arbiter circuit185obtains the requested data from the appropriate bank of the maps buffer105, and then forwards the requested data (via the second, data access network110(or first, data distribution network80A)) to either the operand collector260or to the appropriate MAC circuit190or MAX circuit130. The maps buffer arbiter circuit185typically comprises state machine and control logic circuits290and various registers270, illustrated and discussed below with reference toFIG.8.
As various incoming requests may conflict with each other for access to the maps buffer105, the requests may be stored in the registers270, and conflicts resolved (arbitrated) (e.g., implementing a round robin protocol, for example and without limitation) by the state machine and control logic circuits290, such that all of the various data requests are or become fulfilled. FIG.7is a block diagram of a representative embodiment of a MM processor circuit200. A representative embodiment of a MM processor circuit200comprises a tensor decoder circuit250and a control core275(also referred to as a “pipeline” controller or processor275). Another representative embodiment of a MM processor circuit200comprises the control core275and, as mentioned above, in such embodiments, the tensor decoder circuit250is not included within the MM processor circuit200. Instead, a plurality of tensor decoder circuits250are distributed throughout the inference engine circuit architecture50, such as providing a tensor decoder circuit250for or in each MM accelerator circuit100, as illustrated inFIG.4. The tensor decoder circuit250functions the same, however, regardless of the location of the tensor decoder circuit250within either a MM accelerator circuit100or a MM processor circuit200or elsewhere within the inference engine circuit architecture50. As the inference engine circuit architecture50receives instructions for execution or other processing, those instructions are provided to or fetched from an instruction cache (buffer or register)244by the control core275, by fetch circuit230, and are decoded into a type of instruction, using decoder circuit235. The instruction cache (buffer or register)244is illustrated as being located within the control core275, but can be located within any of the various registers or memories of the MM accelerator circuit100. Any tensor instructions (used for running tensor operations) or vector instructions (used for preloading vector data and loading configurations in the MM accelerator circuit100), for execution by a MM accelerator circuit100, are provided to and queued in the tensor instruction buffer238by the dispatch circuit240. Other instructions (e.g., scalar instructions, used for metadata manipulation and bookkeeping) which are not for the MM accelerator circuits100are dispatched for execution by the control core275(dispatch circuit240), and subsequently may be executed by the control core275, such as using execution (or processor) core245. For example, the decoder circuit235will determine whether a scalar instruction has any data dependencies, and if so, will hold the instruction until the dependencies resolve. The decoder circuit235will also serve to synchronize instructions which will be going out the VV accelerator circuits150(MV accelerator circuits115). One or more scalar register(s)242within the control core275includes registers to hold various metadata, such as to manipulate or add to various addresses. The scalar register(s)242can also be used for general purposes, such as a scratchpad for computing, and for any special functions, such as for address manipulation in the dispatch stage. The dispatch circuit240receives decoded or partially decoded instructions. The dispatch circuit240also conducts data dependency checks for both the maps buffer105and the kernel buffers125, such as checking for read after write operations (e.g., if a load operation is not complete, it will stall a MAC circuit190). 
The dispatch circuit240also collects any metadata, which it will send to the appropriate execution unit (execution (or processor) core,245, VV accelerator circuits150or MV accelerator circuits115). Whether located within the various MM accelerator circuits100or located within the MM processor circuit200, the tensor decoder circuit250is structured or adapted to offload significant processing work from the control core275, enabling the control core275to execute other instructions and perform other computations, particularly while the MM accelerator circuit100may be involved in computations occurring over hundreds to thousands of cycles, for example and without limitation. For example, the control core275may generate a base address to the tensor decoder circuit250, which will then compute all of the addressing to move data into or out of a MM accelerator circuit100and/or between the memory circuit25and a MM accelerator circuit100. The tensor decoder circuit250receives tensor instructions from the tensor buffer238, such as instructions for execution by the MAC circuits190(MAC instructions), the MAX circuit130(MAX instructions), and instructions to move data into or out of the MM accelerator circuit100(tensor data move and vector data move instructions), such as to and from the memory circuit25and to and from any of the MAC circuits190or MAX circuits130. The tensor decoder circuit250comprises a MAC decoder circuit210, a MAX decoder circuit215, a vector data move (“VMOV”) decoder circuit220, a tensor data move (“TMOV”) decoder circuit225, an operand collector260, and a mode control circuit255. Each of these components has the same general circuit structure, illustrated inFIG.8.FIG.8is a block diagram of a representative embodiment of a maps buffer arbiter circuit185, and of a decoder circuit210,215,220,225, a mode control circuit255, and an operand collector circuit260of a tensor decoder circuit250. Each of the maps buffer arbiter circuit185, decoder circuits210,215,220,225, mode control circuit255, and operand collector circuit260typically comprise state machine and control logic circuits290structured or adapted to perform the functions of the particular component and registers270to store various kinds of data, such as addresses. In addition, the decoder circuits210,215,220,225also typically comprise an address generator circuit265, used to calculate and generate addresses for obtaining or storing data in any of the memory circuit25, the maps buffer105, or the kernel buffers125. In operation, the VMOV decoder circuit220generates and holds addresses until the memory25may be ready to receive the data request. Like the MAC decoder circuit210, the TMOV decoder circuit225checks for the availability of physical resources for data movement, and checks to avoid data conflicts, such as write after read dependencies. The TMOV decoder circuit225generates and holds addresses for data movement between the maps buffer105and the memory circuit25, and also between and among the MM accelerator circuits100. For example, the TMOV decoder circuit225may buffer data while waiting for the memory circuit25to be available. Using the VMOV decoder circuit220and TMOV decoder circuit225, address generation for data movement may be offloaded from the control core275, which is then free to perform other operations. The MAC decoder circuit210determines whether the top instruction on the tensor buffer238is a MAC instruction, and if not, it waits for a MAC instruction to appear in the tensor buffer238. 
When it obtains a MAC instruction, using state machine and control logic circuits290, the MAC decoder circuit210then determines if the MAC instruction(s) has resource constraints, such as whether the physical links to the maps buffer105that are used by some VV accelerator circuits150are already occupied by other activities, such as from MAX instructions, VMOV instructions, or TMOV instructions. If these links are occupied, the MAC decoder circuit210waits until those links are free, to avoid link conflicts. If these links are not occupied, the MAC decoder circuit210takes the MAC instruction from the tensor buffer238and starts executing it. Using state machine and control logic circuits290, the MAC decoder circuit210also makes sure that there is sufficient (e.g., 16 cycle) latency available for writing data back to the maps buffer105, such as may be required by the shift register170and reduction adder175. If some cycles are remaining, it stalls the new tensor instruction. The MAC decoder circuit210also ensures that VV accelerator circuits150in the current tensor instruction are not addressing the same memory bank of the maps buffer105as the previous tensor instruction at the same time, to avoid bank conflicts. Using state machine and control logic circuits290and address generation circuits265, the MAC decoder circuit210executes by outputting addresses starting at a base address (e.g.,132) and incrementing (including any stride138increments) until the current address is equal to the base address plus the tensor length (plus any stride138increments for non-contiguous data which may be part of a tensor144), illustrated as last or end address142for the tensor length illustrated in and discussed with reference toFIG.2, and provides these output addresses to the operand collector260(when the operand collector260is ready). The MAX decoder circuit215functions identically to the MAC decoder circuit210, but does so for instructions applicable to the MAX circuit130. The operand collector circuit260receives addresses from the MAC decoder circuit210and transfers those addresses to the maps buffer arbiter circuit185(or the maps buffer105) and to the kernel buffers125. The maps buffer105and the kernel buffers125then transfer the requested data to the operand collector circuit260which then supplies the data appropriately to the VV accelerator circuits150(MV accelerator circuits115) based on the compute mode, as discussed above (e.g., shared maps data, different kernel data, etc., for example and without limitation). More particularly, the operand collector circuit260receives addresses from the MAC decoder circuit210. Based on the selected operational mode (e.g., independent mode (e.g., 16 outputs from a VV accelerator circuit150), cooperative mode (e.g., a single output from a VV accelerator circuit150), or a blended independent/cooperative mode (e.g., 2, 4, or 8 outputs selected from a VV accelerator circuit150)), the operand collector circuit260ensures that the transaction is a hit, i.e., that the data is in the maps buffer105or the kernel buffers125. If the data is not available, the operand collector circuit260sends a stall message or flag to the MAC decoder circuit210until the data is received. Such a stall depends on the operational mode. For example, in independent mode, the stall depends on the word address within a vector, while in the cooperative mode a stall is not encountered.
When the data is obtained (following a hit or if the data is received following a miss), the data is buffered (e.g., in registers270) and appropriate amounts or packets of that data are then transferred to the VV accelerator circuits150(MV accelerator circuits115). In independent mode, some unnecessary words may be shifted out before sending data to the VV accelerator circuits150(MV accelerator circuits115). The operand collector circuit260may also buffer operational metadata that is sent out, in the cooperative mode, with maps or kernel data. The mode control circuit255is optional, and in a representative embodiment, is used to hold and distribute mode control words (described above) to the first control MUX160and/or second control MUX180of the VV accelerator circuits150(MV accelerator circuits115), via the second, data access network110and/or first, data distribution network80A or via the control bus195. The mode to be implemented is typically selected by the user, and may be implemented at compile time, for example and without limitation. When a mode control circuit255is not included, the functions of storing and/or distributing mode control words may be performed by other components of either the tensor decoder circuit250or the control core275. FIG.9is a block diagram of a representative embodiment of a MAX circuit130, and is illustrated for completeness. The MAX circuit130is comprised of a plurality of comparators280, and is utilized to obtain a maximum of the operands being compared, such as to implement a maxpooling operation of a CNN. An operand is input (input282) (via the second, data access network110and/or first, data distribution network80A) and compared with a second input (284) (which may be initialized to zero, for example, for the first comparison cycle), with the current highest value (or maximum) being fed back (input284) for ongoing comparisons, until a maximum value has been obtained and output (286). FIG.10is a flow chart of a representative embodiment of a method of performing matrix multiplication using the inference engine circuit architecture or system50, and provides a useful summary. It should be noted that the various steps illustrated inFIG.10may occur in a wide variety of orders, and all such variations are considered equivalent and within the scope of the disclosure. Beginning with start step300, an operating mode is selected, step305, such as at compile time. As discussed above, maps data is obtained, step310, kernel data is obtained, step315, and a first and/or second control word is obtained, step320. Maps data and kernel data are multiplied to generate a multiplicative product, step325, using multiplier circuit145. When a first control word indicates that a first bias parameter is to be added, step330, the first bias parameter is added to the multiplicative product, step335, using the first adder circuit155, and the method proceeds to step350. When the first control word does not indicate that a first bias parameter is to be added, step330, the method determines whether the first control word indicates that an accumulation is to occur, step340. When an accumulation is to occur in step340, the multiplicative product is added to a first (or next) accumulation sum to generate a next accumulation sum, step345, using the first adder circuit155. When accumulation is to continue, step350, the method returns to step345and iterates.
When accumulation is not to continue, step350, the method provides the first or next accumulation sum to the reduction adder circuit175, step355, such as via the shift register170. When a second control word indicates that a second bias parameter is to be added, step360, the second bias parameter is added to the first or next accumulation sum, step365, using the reduction adder175, and the method proceeds to step380. When the second control word does not indicate that a second bias parameter is to be added, step360, the method determines whether the second control word indicates that an accumulation is to occur, step370. When an accumulation is to occur in step370, the first or next accumulation sum is added to a next accumulation sum to generate a second or next accumulation sum, step375, using the reduction adder circuit175. When accumulation is to continue, step380, the method returns to step375and iterates. When accumulation is not to continue, step380, the method provides the second or next accumulation sum as an output result, step385, and the method may end, return step390. Numerous advantages of the representative embodiments are readily apparent. The representative inference engine circuit architecture50provides a computing architecture capable of providing high performance for CNN applications, for applications such as artificial intelligence, machine learning, image recognition, and other inferential applications requiring mathematical computations, for example and without limitation. The inference engine circuit architecture (system)50has comparatively high efficiency, with significant utilization (about 95%), and has a comparatively low bandwidth for access to any memory integrated circuit storing the maps and kernel data. The inference engine circuit architecture50is capable of performing a complete matrix-by-matrix multiplication. The inference engine circuit architecture50reaches 99.9% efficiency in a large majority of deep neural network layers, eliminating dead or dark silicon and making the most of every watt of power. The inference engine circuit architecture50can provide easier scaling than systolic arrays, in multiples of VV accelerator circuits150. Beyond MV accelerator circuits115, the inference engine circuit architecture50scales with MM accelerator circuits100, as shown inFIG.1. DNNs are composed of various layers of operations connected together. Commonly used layers in DNN models are discussed below. A. Spatial convolution (CONV): Spatial convolution (CONV) is the core layer of CNNs, accounting for more than 90% of the operations for most of the CNN models. CONV is the multiplication and accumulation of values from a 3D input tensor (maps) with a set of 3D kernels that produces a 3D output volume. Each 3D kernel is associated with a bias value and produces a 2D output plane, also referenced as feature maps. Each output pixel of this plane is created by striding the kernel along a window of the input. Padding is added to the corners of the input plane so that the kernel can cover the corners of the input. Padding is also used to create different output sizes. The main parameters of a CONV are: the dimension sizes of the input, kernel sizes, stride and padding. From those parameters, the output size is calculated using Equation 1. Only an equation with a subscript x is shown, but an equivalent with subscript y should be inferred.
Equation 1: ox=[(ix−kx+2*padx)/sx]+1, with the brackets denoting rounding down to an integer, where ix, iy and ip are the input width, height and planes; ox, oy and op are the output width, height and planes; kx, ky, kp and ko are the kernel width, height, planes and number of kernels; kp and ip are equal; ko and op are equal; sx and sy are the window strides along x and along y; padx is the number of columns padded on the left and right; and pady is the number of rows padded on the top and bottom. (For example, ix=224, kx=7, padx=3 and sx=2 give ox=[(224−7+6)/2]+1=112.) There are two types of padding: use zero values or replication of the corner values. In any case, the representative compiler does not send padding values to the MM accelerator circuit100and/or MM processor200to save memory25usage and memory25bandwidth. It addresses this by using the appropriate addressing and computation sizes. The operations in a window are independent, hence they are well suited to multicore processors, such as GPUs and the inference engine circuit architecture50, for example and without limitation. B. Transposed Convolution (“TCONV”): CONVs produce ox and oy that are less than or equal to ix and iy. Thus the sizes of feature planes shrink in the deeper layers. To reverse this process, transposed convolution is a special case of CONV that recovers the sizes of the feature planes. A TCONV has ox and oy greater than or equal to its ix and iy. TCONV parameters are thought of in terms of applying a CONV to the output of the TCONV to get back its input. The padding defines the number of rows or columns padded in the output of the TCONV, so that a CONV on that output would produce the correct sizes of the TCONV's input. Similar reasoning applies to the stride. A TCONV also has an output padding, in which the TCONV's input has an extra row or column in one of the corners (not both, unlike padding), so that the output width and height match. It is possible to compute a TCONV using an equivalent CONV. A TCONV with stride greater than 1 is a CONV with padding/gaps in between each row and column. The gap size is the stride minus 1. The expanded input size and the output size are calculated using Equation 2 and Equation 3 below. Equation 2: eix=ix+2*padx+(sx−1)*(ix−1)+opadx. Equation 3: ox=(ix−1)*sx−2*padx+kx+opadx, where eix and eiy are the expanded input sizes, and opadx and opady are the output padding. The location where the extra row or column is added for output padding varies between frameworks: Pytorch adds to the left and bottom, while Tensorflow adds to the top and right. Expanding the input carries an expensive price for the accelerator, since it increases the required memory footprint by (sx−1)*(ix−1). The representative compiler does not send the padded input; it uses the right access addresses to avoid sending padding/repeated data from memory25to the MM accelerator circuit100and/or MM processor200. C. Fully Connected (FC): A fully connected layer is a matrix-vector multiplication. The matrix parameters are learned so that they map the input into an output vector with a learned linear function. FC is used as the last layers of a CNN to provide a classification vector. FC is also largely used in Recurrent Neural Networks (RNNs). An FC layer is a data movement intensive operation because it provides limited data reuse, e.g., CNNs are compute intensive whereas RNNs are data movement intensive. Memory bandwidth is a bottleneck for RNN FPGA based accelerators. Weight compression and weight pruning based on network sparsity are techniques that lower the memory bandwidth requirement for this type of workload. FC layers can also be viewed as a CONV with unitary kernel sizes and strides. ip is the input vector size, which is also the width of the matrix.
And op is the output size, which is also the height of the matrix. This allows the reuse of most algorithms used in CONV for FC. D. Average pooling (Avgpool) and Max pooling (Maxpool): Average pooling takes the average of the values in a window of the feature plane. Avgpool also can be implemented as a CONV with a single weight value of the inverse of the window size (1/(kx*ky)). Multiplying and accumulating all values in a window produces the average value of the window. Max pooling is a down-sampling technique to achieve data invariance and to compress feature representation. Max pooling is element-wise comparison and its output is the highest value in a window. The input and output size relationship is the same as shown in Equation 1. E. Activation Unit: Non-linear functions are applied to some layers' outputs, so that the model will reproduce non-linear relations. Some examples of activation units used are: rectified linear unit (ReLU), softmax, tanh and sigmoid. F. Add, Multiply and Concatenation: Add, Multiply and Concatenation are important vector operations in linear algebra. Given a multiply and accumulate engine, add is y=x*1+b, where y is the output, x is one input vector and b is another vector. Multiply is y=x*b+0. Vector add is mainly used as a bypass connection between layers. This idea was introduced in ResNet models. Concatenation is used to group together feature maps from different layers. This was introduced in GoogLeNet models. II. A Representative Compiler: The custom ISA for the inference engine circuit architecture50, as a representative accelerator, has instructions which implement four different functionalities: data movement, compute, flow control and memory access. For the representative compiler400,500, the most relevant instructions are: MAC, MAX, VMOV, LOAD (LD), TMOV, Branch and SYNC. Other typical or known instructions such as Add, Move, Multiply may also be implemented. MAC multiplies and accumulates a contiguous sequence of data from the maps buffer105(also abbreviated as “Buf” or “MBuf” inFIGS.11-21) and kernel (weights) buffer125(also abbreviated as “WBuf” inFIGS.11-21). MAX compares two blocks of data from the maps buffer105(Buf). MAC and MAX send results back to the maps buffer105. VMOV pre-loads data to the MV accelerator circuits115to set the initial value for MAC. It is used to add the bias or implement the residual addition. LOAD (or LD) sends data from external memory25to the maps buffer105, kernel (weights) buffer125or the instruction cache (buffer or register)244. TMOV sends data from the maps buffer105to external memory25or to another buffer. Branch is used to create loops and if-else conditions. SYNC is an instruction that ensures all VV accelerator circuits150, and/or MV accelerator circuits115and/or MM accelerator circuits100are synchronized. FIG.11is a block diagram of a representative embodiment of a compiler system400(also referred to more simply as a “compiler400”). The representative compiler system400implements the compiler method500illustrated and discussed below with reference toFIG.12.
As illustrated inFIG.11, the representative compiler system400includes one or more processors75, which may be any type of processor (such as described in greater detail below); a memory circuit25A; and a communication interface45, such as for input of the neural network (e.g., ONNX) code which is to be stored in memory25A and compiled into the instructions which are subsequently stored in instruction cache (buffer or register)244and executed by the inference engine circuit architecture50, for example and without limitation. The memory circuit25A also may be referred to as a “second memory circuit” to distinguish it from the first memory circuit25which will be utilized by the various MV accelerator circuits115and/or MM accelerator circuits100when they execute the plurality of executable instructions which are to be generated by the compiler400. These various components are typically coupled to each other, directly or indirectly, as the case may be, such as through various bus structures405as illustrated, with all such coupling arrangements and topologies considered equivalent and within the scope of the disclosure, for example and without limitation. Not separately illustrated, the compiler system400may include a plurality of different types of processors, such as graphics processors, etc., also as discussed in greater detail below. In a representative embodiment, the one or more processors75may be multi-core processors as an option, with each processor75having a plurality of processor (or processing) cores. The representative compiler system400may be embodied or implemented as an exemplary or representative computer, a server, a supercomputer, an Internet-based or “cloud” based server system (equivalently referred to as a computer server), and not separately illustrated, may be coupled through a network (such as the Internet) (along with other network equipment and various components such as a router, a wireless router, a switching center and/or base station) to a plurality of client devices. A user can interact with the representative compiler system400through one or more client devices which are coupleable through a network to the communication interface45or directly coupled to the communication interface45(e.g., through a keyboard, a mouse, etc.). Although several components are illustrated, there may be fewer or more components in the representative compiler system400. Moreover, the components can be distributed on one or more computing devices connected by one or more networks or other suitable communication mediums. FIG.12is a flow chart of a representative embodiment of a compiler or compilation method500for compiling code for execution by the inference engine circuit architecture50or other accelerator system. The compiler or compilation method500is performed by the compiler system400, and more particularly, by one or more processors75of the compiler system400. It should be noted that the various steps illustrated inFIG.12may occur in a wide variety of orders, and all such variations are considered equivalent and within the scope of the disclosure. In addition, many of the various steps illustrated inFIG.12may be subdivided into multiple steps, which also may occur in a wide variety of orders, and all such variations are considered equivalent and within the scope of the disclosure. 
The compiler or compilation method500starts, step505, with receipt of the neural network code, such as an ONNX file or other file having the neural network code, which is to be converted for acceleration of execution on the inference engine circuit architecture50or other accelerator circuit. The first step towards generating code for a custom accelerator is to gather information about the model's architecture. There are various high-level DL frameworks in use today, with each representing DNNs differently and exploiting different optimizations for deploying them on CPU or GPU systems. ONNX is an intermediate exchange format that allows models from different frameworks to be converted into other formats. Adopting ONNX allows users to deploy models that were trained on any DL framework supported by ONNX. From such an ONNX or other intermediate file, parsing begins, with a plurality (e.g., an ordered list) of layer (or model) objects being created, step510, by one or more processors75of the compiler system400. For example, Thnets (ThinkNets) can be used to read a model file exported from ONNX. A list of layer (or model) objects is created to represent the layer computation sequence in a hardware accelerator such as the inference engine circuit architecture50. The layer or model objects contain the information needed to generate code. In step515, operations or functions of the network layers are fused with the layer (or model) objects and, additionally, various network layers may also be fused into layer (or model) objects. For example, the vector add operation present in ResNet models is fused with a convolution layer. Non-linear operations, such as MFM and ReLU, are also merged with convolution layers. An example is shown inFIG.13, parts (A) and (B), in which ReLUs603A,603B, and603C are merged or fused into convolution layers to form convolution layer (or model) objects (CONV605,607), and separate convolution layers609(with a ReLU603B) are merged or fused into a single convolution layer (or model) object (CONV607). Main memory25shared between a host and the inference engine circuit architecture50is managed by software. Memory25regions are allocated, step517, and maps are accessed in a ping-pong fashion. When maps are shared among non-sequential layers, extra regions are allocated to keep maps for later layers to use. Using two memory regions for sequential layers saves main memory space compared to allocating memory for each layer. This is important for embedded systems applications in which main memory is limited. In some models, such as GoogLeNet and ResNet, some layers share their input and output. Those layers are labeled according to their parallel path. Later the labels are translated into memory addresses. After creating the model list and allocating memory regions, each layer goes through a selection or decision step, step520, that processes each layer's information and its neighboring layers to decide how to decompose and generate instructions for them, in a cooperative mode, an independent mode, or a combined cooperative/independent mode, as discussed above, and further, allows for user input to select whether operations will be pipelined across the various MV accelerator circuits115and/or MM accelerator circuits100.
The choice of which modes (cooperative mode, an independent mode, or a combined cooperative/independent mode, along with a pipelined mode) to use is user selectable and may be defined by the layer parameters, for example and without limitation, and this step is used to generate the various mode control words described above (when the various instructions are generated in step580, described below). In a pipelined mode, a plurality of MV accelerator circuits115and/or MM accelerator circuits100are utilized, essentially in a chain one after the other, to perform a series or sequence of computation, generally without the data (e.g., maps data) being stored to the memory circuit25, but either provided directly to the next MV accelerator circuit115and/or MM accelerator circuit100or stored in the maps buffer105. As another variation, the work load may be distributed, such as by having each MV accelerator circuit115and/or MM accelerator circuit100work on a different portion of an image, for example and without limitation. Also as an example of a mode choice and without limitation, if the kernel size provides enough computational cycles and its channel size is a multiple of 16 then use a cooperative mode, and otherwise use an independent mode, or a combined cooperative/independent mode. The compilation method500will then determine the maps data size for transfer to the maps buffer105and the kernel data size for transfer to the kernel (weights) buffer125, step525, as discussed in greater detail below, partitioning the input maps data and kernel (weights) data and further mapping the dataflow into the accelerator circuit architecture. Alternatively, this partitioning may be performed after the parsing, also as discussed below. The compilation method500will then order the various computations and data transfers, effectively moving the computation to the data, where possible, to reduce or minimize data transfers to and from the memory circuit25, to optimize memory bandwidth or usage, illustrated as steps530,535,540, and545. A compute order choice is based primarily on whether to reuse kernel data or reuse maps data, or both. Kernel data and maps data tile sizes are defined by their respective buffer sizes, with the number of tiles needed being the ratio between the total size and the tile size. 
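As an informal illustration of the tile-count calculation and data-movement estimate described here and in the ordering examples that follow, the following Python sketch compares the two basic reuse strategies (keep a maps tile resident and re-load kernel tiles, or keep a kernel tile resident and re-load maps tiles) and picks the one that moves less data. The formulas and names are assumptions made only for illustration, not the representative compiler's actual cost model.

import math

def num_tiles(total_bytes, buffer_bytes):
    # The number of buffer-sized tiles is the ratio of the total size to the tile size.
    return math.ceil(total_bytes / buffer_bytes)

def choose_ordering(maps_bytes, kernel_bytes, maps_buf_bytes, kernel_buf_bytes):
    map_tiles = num_tiles(maps_bytes, maps_buf_bytes)
    kernel_tiles = num_tiles(kernel_bytes, kernel_buf_bytes)
    # Maps-stationary: maps loaded once, kernels re-loaded once per maps tile.
    maps_stationary = maps_bytes + kernel_bytes * map_tiles
    # Kernel-stationary: kernels loaded once, maps re-loaded once per kernel tile.
    kernel_stationary = kernel_bytes + maps_bytes * kernel_tiles
    return min((maps_stationary, "maps-stationary"),
               (kernel_stationary, "kernel-stationary"))

# Example: 8 MB of maps, 0.5 MB of kernels, 1 MB buffers -> kernel-stationary moves less.
print(choose_ordering(8_000_000, 500_000, 1_000_000, 1_000_000))
# (8500000, 'kernel-stationary')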
An estimate of the data to be sent is calculated, and the option that requires the least amount of data transferred is chosen.FIG.14illustrates a first example of compute steps and memory25usage or bandwidth requirements without reordering of data movements and computations.FIG.15illustrates a second example of compute steps and memory25usage or bandwidth requirements without reordering of kernel data movement but with reordering of maps data movement and computations.FIG.16illustrates a third example of compute steps and memory25usage or bandwidth requirements without reordering of maps data movement but with reordering of kernel data movement and computations.FIG.17illustrates a fourth example of compute steps and memory25usage or bandwidth requirements with reordering of maps data movement, kernel data movement and computations.FIGS.14and15illustrate maps data m(0)602, maps data m(1)604, maps data m(2)606, and maps data m(3)608, and kernel data k(0)610, kernel data k(1)612, and kernel data (k2)614stored in memory circuit25, which are to be transferred from the memory circuit25to the inference engine circuit architecture50to perform the computation, with instructions616,618,620,622for representative computations also illustrated inFIGS.14-17. The data movement and computation steps are illustrated as objects, such as “load m(0)” (a load object) and compute m(0)×k(0) (a compute object), for example and without limitation, and serve to illustrate differences in memory25bandwidth requirements with and without certain kinds of reordering of data movement and computations. The compilation method500will first determine the memory25usage or bandwidth requirements without reordering of data movements and computations, step530, e.g., a “naïve” ordering based upon the list of layer or model objects previously created in step510, such as illustrated inFIG.14. As illustrated inFIG.14, maps data m(0) has been loaded and remains stationary (i.e., remains loaded) for a predetermined number of computations (load m(0)624), followed by a loading of kernel data (load k(0)626) and a computation of m(0)×k(0)628, followed by a loading of kernel data (load k(1)630) and a computation of m(0)×k(1)632, followed by a loading of kernel data (load k(2)634) and a computation of m(0)×k(2)636, storing the output o(0) (store o(0)638), then loading the next maps data m(1) (load m(1)640) which remains stationary (i.e., remains loaded) for a predetermined number of computations, followed by a repeated loading of kernel data (load k(0)642) and a computation of m(1)×k(0)644, followed by a repeated loading of kernel data (load k(1)646) and a computation of m(1)×k(1)648, followed by a repeated loading of kernel data (load k(2)650) and a computation of m(1)×k(2)652, for an estimated usage or bandwidth of 6.34 GB/s in this example. The compilation method500will then determine the memory25usage or bandwidth requirements without reordering of kernel data movement but with reordering of maps data movement and computations to accommodate the particular or selected kernel data which is currently loaded and does not need to be reloaded, step535, e.g., a “kernel stationary” ordering such as illustrated inFIG.15. 
As illustrated inFIG.15, kernel data k(0) has been loaded and remains stationary (i.e., remains loaded) for a predetermined number of computations (load k(0)654), followed by successive loading of maps data and computations of m(0)×k(0), m(1)×k(0), m(2)×k(0), m(3)×k(0), illustrated as a loading of maps data m(0) (load m(0)656) and a computation of m(0)×k(0)658, followed by a loading of maps data m(1) (load m(1)660) and a computation of m(1)×k(0)662, followed by a loading of maps data m(2) (load m(2)664) and a computation of m(2)×k(0)666, followed by a loading of maps data m(3) (load m(3)668) and a computation of m(3)×k(0)670, then storing the output o(0) (store o(0)672), loading the next kernel data k(1) (load k(1)674) which remains stationary (i.e., remains loaded) for a predetermined number of computations, followed by successive loading of maps data and computations of m(0)×k(1), m(1)×k(1), m(2)×k(1), m(3)×k(1), illustrated as a repeated loading of maps data m(0) (load m(0)676) and a computation of m(0)×k(1)678, followed by a repeated loading of maps data m(1) (load m(1)680) and a computation of m(1)×k(1)682, followed by a repeated loading of maps data m(2) (load m(2)684) and a computation of m(2)×k(1)686, followed by a repeated loading of maps data m(3) (load m(3)688) and a computation of m(3)×k(1)690, for an estimated usage or bandwidth of 2.45 GB/s in this example. In comparison with the “naïve” ordering, this reordering in which the kernel data remains loaded or stationary, with a reordering of maps data movement and corresponding computations to accommodate the particular or selected kernel data which is currently loaded and does not need to be reloaded, has resulted in appreciable improvement in memory bandwidth requirements, reducing the estimated bandwidth from 6.34 GB/s to 2.45 GB/s in these examples. The compilation method500will then determine the memory25usage or bandwidth requirements without reordering of maps data movement but with reordering of kernel data movement and computations to accommodate the particular or selected maps data which is currently loaded and does not need to be reloaded, step540, e.g., a “kernel flip” ordering such as illustrated inFIG.16. As illustrated inFIG.16, maps data m(0) has been loaded and remains stationary (i.e., remains loaded) for a predetermined number of computations (load m(0)692), followed by successive loading of kernel data and computations of m(0)×k(0), m(0)×k(1), m(0)×k(2), illustrated as a loading of kernel data (load k(0)694) and a computation of m(0)×k(0)696, followed by a loading of kernel data (load k(1)698) and a computation of m(0)×k(1)702, followed by a loading of kernel data (load k(2)704) and a computation of m(0)×k(2)706, storing the output o(0) (store o(0)708), then loading the next maps data m(1) (load m(1)710) which remains stationary (i.e., remains loaded) for a predetermined number of computations, followed by successive loading of kernel data in the reverse order (k(2) followed by k(1) followed by k(0) and computations of m(1)×k(2), m(1)×k(1), m(1)×k(0), illustrated as a repeated loading of kernel data (load k(2)712) and a computation of m(1)×k(2)714, followed by a repeated loading of kernel data (load k(1)716) and a computation of m(1)×k(1)718, followed by a repeated loading of kernel data (load k(0)720) and a computation of m(1)×k(0)722, for an estimated bandwidth of 3.33 GB/s in this example. 
It should be noted, however, that as discussed in greater detail below, from the first k(2) computation, the k(2) data has already been loaded (load k(2)704) and remains in the kernel (weights) buffer125, as does the k(1) data (load k(1)698) when double-buffering is implemented, such that the second load k(2) and load k(1) steps (load k(2)712and load k(1)716) may be eliminated, further saving memory25usage or bandwidth. In comparison with the “naïve” ordering, this reordering in which the maps data remains loaded or stationary, with a reordering of kernel data movement and corresponding computations to accommodate the particular or selected maps data which is currently loaded and does not need to be reloaded, has resulted in appreciable improvement in memory bandwidth requirements, reducing the estimated bandwidth from 6.34 GB/s to 3.33 GB/s in these examples, with additional memory25bandwidth savings occurring from the elimination of redundant kernel loading steps, as the currently loaded kernel data may be reused for the next several computations following the loading of the next maps data. The compilation method500will then determine the memory25usage or bandwidth requirements with reordering of maps data movement, kernel data movement and computations to accommodate the particular or selected maps and/or kernel data which is currently loaded and does not need to be reloaded, step545, e.g., a “mixed” ordering such as illustrated inFIG.17. As illustrated inFIG.17, maps data m(0) has been loaded and remains stationary (i.e., remains loaded) for a predetermined number of computations (load m(0)724), followed by successive loading of kernel and maps data and computations of m(0)×k(0), m(1)×k(0), m(0)×k(1), m(1)×k(1), illustrated as a loading of kernel data (load k(0)726) and a computation of m(0)×k(0)728, followed by a loading of maps data (load m(1)730) and a computation of m(1)×k(0)732, followed by a loading of kernel data (load k(1)734) and two successive computations, a computation of m(0)×k(1)736and a computation of m(1)×k(1)738, followed by a loading of kernel data (load k(2)740) and a computation of m(0)×k(2)742, storing a first output o(0) (store o(0)744), followed by a computation of m(1)×k(2)746and storing a second output o(1) (store o(1)748), then loading the next maps data m(2) (load m(2)750) which remains stationary (i.e., remains loaded) for a predetermined number of computations, followed by a computation of m(2)×k(2)752(without reloading of any kernel data), followed by loading of the next maps data m(3) (load m(3)754) which remains stationary (i.e., remains loaded) for a predetermined number of computations, followed by three successive computations, a computation of m(3)×k(2)756, a computation of m(2)×k(1)758and a computation of m(3)×k(1)760, a repeated loading of kernel data k(0) (load k(0)762) followed by computation of both m(2)×k(0)764and m(3)×k(0)766, and so on, for an estimated usage or bandwidth of 2.38 GB/s in this example. It also should be noted that additional or repeated loading steps have been eliminated when double-buffering is implemented. It should be noted that one of the advantages of the representative compilation method500is that it also provides for storage of intermediate results, typically in the maps buffer105, which further improves performance by decreasing usage of the memory circuit25(decreases memory bandwidth requirements). 
This is illustrated with the storing a first output o(0) (store o(0)744) and storing a second output o(1) (store o(1)748) inFIG.17. In comparison with the previously illustrated orderings, this reordering of maps data movement, kernel data movement and computations to accommodate the particular or selected maps and/or kernel data which is currently loaded and does not need to be reloaded, has resulted in appreciable improvement in memory25usage or bandwidth requirements, reducing the estimated bandwidth from 6.34 GB/s to 2.38 GB/s in these examples, with additional memory25bandwidth savings occurring from the elimination of redundant kernel and maps loading steps, as the currently loaded kernel and maps data may be reused for the next several computations following the loading of the next maps or kernel data, for example and without limitation. Essentially, the compilation method500has moved the computation to the currently loaded or available data, for both kernel data and maps data, rather than the more traditional movement of the data to the computation. This may also involve or result in network layer fusion as well, as various computations of different layers may be utilizing some of the same maps or kernel data as previous layers, and those computations from one layer may be moved or fused into another layer, such as a previous layer. For example, inFIG.18, the loading of the next maps data m(3) (load m(3)754), followed by three successive computations, a computation of m(3)×k(2)756, a computation of m(2)×k(1)758and a computation of m(3)×k(1)760, may be the result of a layer fusion in which the kernel data (here, k(1) and k(2) are held in place in the kernel (weights) buffer125and reused, bringing these three computations to this static kernel data, rather than leaving the three computations in their own subsequent layer and reloading this kernel data (i.e., moving the data to the computation) in that subsequent layer. For example, accelerators may be limited mostly by their off-chip memory bandwidth. The required bandwidth for a layer is a ratio between total amount of data transferred by the expected execution time. Loop rearrangement is a method that reduces the total amount of data movement by exploiting data reuse, which leads to memory bandwidth savings. Some CONV layers have large kernels, whereas others have large maps, but usually neither completely fits into the kernel (weights) buffer125and/or maps buffer105. Maps and kernels need to be partitioned and processed in buffer-sized tiles. A map tile needs to go through each kernel tile, leading to repeated kernel loads when the next map tile is loaded. Alternatively, a kernel tile needs to be processed with every map tile, resulting in repeated map loads for the following kernel tile. The total amount of data moved is different depending on kernel/map load repetition for a particular CONV layer. An advantage of the representative compiler400,500is that the compiler400,500estimates the amount of data to be transferred for both configurations and chooses the one that sends less data. Following these memory circuit25usage or bandwidth estimation steps, the compilation method500then selects the ordering and data movement mode, and corresponding ordering of operations, which reduces or minimizes memory25usage or bandwidth requirements, step550, and the parsing portion of the compilation method is completed. 
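As a hedged illustration of "moving the computation to the currently loaded data," the following Python sketch implements a simplified greedy scheduler: at each step it prefers a product m(i)×k(j) whose operands are already resident in small maps and kernel buffers, and it issues loads only when nothing is computable with resident data. The buffer capacities, eviction policy and work list are assumptions for illustration and do not reproduce the actual ordering chosen by the compiler400,500.

    # A simplified, hypothetical greedy scheduler illustrating the "mixed" ordering idea.
    from collections import deque

    def schedule_mixed(n_maps, n_kernels, maps_capacity=2, kernels_capacity=2):
        pending = {(m, k) for m in range(n_maps) for k in range(n_kernels)}
        maps_resident, kernels_resident = deque(), deque()
        trace = []

        def load(kind, idx, resident, capacity):
            if idx in resident:
                return                           # already loaded; nothing to do
            if len(resident) == capacity:
                resident.popleft()               # evict the oldest tile (simplistic policy)
            resident.append(idx)
            trace.append(f"load {kind}({idx})")

        while pending:
            ready = [(m, k) for (m, k) in pending
                     if m in maps_resident and k in kernels_resident]
            if ready:
                m, k = min(ready)
                pending.remove((m, k))
                trace.append(f"compute m({m}) x k({k})")
                continue
            # Nothing computable with resident data: load operands of some pending product.
            m, k = min(pending)
            load("m", m, maps_resident, maps_capacity)
            load("k", k, kernels_resident, kernels_capacity)
        return trace

    if __name__ == "__main__":
        for step in schedule_mixed(n_maps=4, n_kernels=3):
            print(step)

The printed trace interleaves map and kernel loads with computations that reuse whatever is already resident, which is the essence of the mixed ordering described above, though the real compiler also weighs output storage and double-buffering.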
After the model parsing task completes, if not previously performed in step525, the compilation method500partitions the input maps data and kernel (weights) data and further maps the dataflow into the hardware's architecture. The inference engine circuit architecture50and other DNN accelerators are composed of an on-chip memory buffer (kernel (weights) buffer125and maps buffer105) to store data, a group of processing elements such as MM accelerator circuits100, and a control core275of a MM processor circuit200. This leads to three main operations: load, compute and store. In the compilation method500, a sequence of load, compute and store is grouped into a compute step. Each step consumes part of the layer's input and produces part of the layer's output. The compilation method500creates a list of compute steps based on the layer parameters. The limits imposed by buffer size and layer parameters are first calculated (step525) before creating the compute list. Based on these limits, load objects (“LO”) are created such that a balance between input data and output data coexists in the same buffer. LO sends data from external memory25into the maps buffer105and kernel (weights) buffer125. As the inference engine circuit architecture50has separate buffers for weights (kernel (weights) buffer125), the compilation method500aggregates as many weights as possible that fit in kernel (weights) buffer125. Double buffering is accounted for during LO creation, such that a compute step will pre-fetch data for the following compute step, eliminating latency and maximizing available memory25bandwidth. After LO creation, compute objects (CO) are generated based on the data available in the maps buffer105and the kernel (weights) buffer125. Accordingly, in step555, load objects and compute objects are generated (which will be further ordered based on data pre-fetching, in addition to ordering based on the data movement and memory bandwidth optimization previously discussed). For a CONV, the minimum input necessary to produce an output is kx×ky×kp. Maps data is arranged with planes first, column and row last (p,x,y). To ensure data is contiguous and to avoid issuing multiple LOAD (LD) instructions, ix×ky×kpis needed to create one output row. The division order is rows first, columns and planes last and a greedy approach tries to create as many output rows it can. If input rows and output row doesn't fit into the maps buffer105, then other strategies are used. The input rows are divided into parts, which requires multiple LOAD (LD) instructions to put the maps data into the maps buffer105. In a cooperative mode, parts of the output row can be sent to different maps buffers105of different MM accelerator circuits100and/or MV accelerator circuits115, which requires multiple TMOV instructions. Another approach is to divide the planes into parts and the MM accelerator circuits100and/or MV accelerator circuits115would create partial results in a compute step. In ResNet models, it is common to have a CONV followed by an ADD. In this case, the maps buffer105will contain the CONV's input, output and the ADD's input, and allows for layer fusion. This fused layer CONV+ADD needs to have both its inputs in different maps buffer105regions so that both inputs can be accessed simultaneously. CO contains information about the vector compute operations necessary for producing a number of output values. This encompasses up to three loops: stride on y-axis, x-axis and accumulate. 
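A minimal sketch, assuming hypothetical Python data structures rather than the actual internal representation of the compiler400,500, of the load, compute and store objects and their grouping into a compute step, together with a simple packing of as many kernels as fit into the kernel (weights) buffer. The class names, fields and buffer limit are illustrative assumptions; the accumulate loop named in the compute object is described next.

    # Hypothetical data structures sketching load / compute / store objects and a compute step.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LoadObject:          # LO: external memory -> maps or kernel (weights) buffer
        buffer: str            # "maps" or "weights"
        offset: int
        nbytes: int

    @dataclass
    class ComputeObject:       # CO: nested loops of vector MAC/MAX/COPY instructions
        kind: str              # "MAC", "MAX" or "COPY"
        repeat: int            # loop boundary (repeat variable)
        addr_step: int         # per-iteration data access address increment

    @dataclass
    class StoreObject:         # SO: maps buffer -> external memory
        offset: int
        nbytes: int

    @dataclass
    class ComputeStep:         # one load / compute / store group
        loads: List[LoadObject] = field(default_factory=list)
        computes: List[ComputeObject] = field(default_factory=list)
        stores: List[StoreObject] = field(default_factory=list)

    def pack_kernels(kernel_sizes, weights_buffer_bytes):
        """Aggregate as many kernels as fit into the kernel (weights) buffer for one LO."""
        packed, used = [], 0
        for size in kernel_sizes:
            if used + size > weights_buffer_bytes:
                break
            packed.append(size)
            used += size
        return packed, used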
The accumulate loop issues multiple instructions that accumulate the results before producing an output pixel. This is because not all data needed to produce an output value is contiguous. CO will be translated into nested loops of vector instructions to perform multiply and accumulate or compare. The loop boundaries are a repeat variable and the data access address is incremented by an offset variable. CO also has an extension with variables for vector register load, which implements residual add and bias addition. There are three types of CO: MAC, MAX and COPY. MAC CO generates instructions to multiply and accumulate values from maps buffer105and kernel (weights) buffer125, and it conditionally creates VMOV instructions. MAX CO generates instructions to compare values in maps buffer105. COPY uses self-comparison (MAX instructions) to copy maps buffer105values to a different location in maps buffer105. In step560, store objects (SO) are created to return the output in maps buffer105to external memory25, so that the processor75can access the results, and a list of compute steps is generated, step565. Compute step creation is shown inFIG.13, part (C), as compute steps611. In the example, assume input data to CONV is in m0, which is also the input to a residual add in a following RESADD layer; m1 is another memory location that has the output of the previous CONV and it is the input of the CONV part of the RESADD; w0 and w1 are other kernel (weights) buffer125locations for weights. Precise address offsets were omitted inFIG.13. Every CO uses data from a LO or previous CO and creates data for a SO or for a following CO. The list of compute steps (611) needs to guarantee that all LOs needed by a CO occur before it. All results of a CO are stored with a following SO or consumed by another CO. Following these rules, in step570, the compilation method500provides computational ordering of the compute steps (each of which typically comprises one or more LO, CO, SO, as illustrated in part (C) ofFIG.13), pipelining of the operations across the MV accelerator circuits115and/or MM accelerator circuits100(when selected as a parameter by a user), and optimizes across all compute steps using data pre-fetching, sequencing or advancing load object operations, moving and grouping compute steps to use currently loaded data, and eliminating redundancies. For example, LOs to different maps buffer105or kernel (weights) buffer125regions are moved to a previous compute step to allow data pre-fetching. A LO in the next layer is moved to a previous layer if that doesn't create any true data dependency, providing additional layer fusion. Compute steps with LOs accessing the same data are grouped together in a sequence to improve data reuse. This may cause some of the LOs to become redundant, which can be eliminated as previously discussed. LOs that load data that is already present in maps buffer105or kernel (weights) buffer125or LOs that are not used by any CO are removed. LOs with the same address offset are merged into one LO with a loop variable. The computational ordering (LO, CO, SO) also accounts for computational latency, step575, adjusting the ordering as may be necessary or desirable. For example, a step having a certain computational latency may be moved to earlier in a sequence of operations to avoid stalling the MV accelerator circuits115while waiting on data from such a computation.
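Two of these optimizations may be sketched as follows: hoisting each step's loads into the preceding step so data is pre-fetched, and dropping loads whose data is already resident in the target buffer. Steps and loads are represented here as plain Python dicts and tuples, and the assumption that resident data is never evicted in between is a simplification for illustration.

    # A simplified, self-contained sketch of two compute-step optimizations described above.

    def hoist_loads_for_prefetch(steps):
        """steps: list of {'loads': [(buffer, offset), ...], 'computes': [...], 'stores': [...]}."""
        for i in range(1, len(steps)):
            steps[i - 1]["loads"].extend(steps[i]["loads"])   # pre-fetch in the previous step
            steps[i]["loads"] = []
        return steps

    def drop_redundant_loads(steps):
        resident = set()
        for step in steps:
            kept = []
            for load in step["loads"]:
                if load not in resident:        # data not yet present in the target buffer
                    kept.append(load)
                    resident.add(load)          # assumes it is not evicted later
            step["loads"] = kept
        return steps

    steps = [{"loads": [("maps", 0), ("weights", 0)], "computes": ["m(0)xk(0)"], "stores": []},
             {"loads": [("maps", 0), ("weights", 4096)], "computes": ["m(0)xk(1)"], "stores": ["o(0)"]}]
    print(drop_redundant_loads(hoist_loads_for_prefetch(steps)))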
FIG.18illustrates a fifth example of reordering of maps data movement, kernel data movement and computations with data pre-fetching (and other optimizations, such as with redundancies having been removed). In this fifth example, the compilation method500has reordered the maps data movement, kernel data movement and computations of the fourth example illustrated inFIG.17, using pre-fetching of maps and kernel data in advance of their use in computations. As illustrated inFIG.18, prior to any computations, maps data m(0) has been loaded (load m(0)768), kernel data has been loaded (load k(0)770), maps data has been loaded (load m(1)772), and kernel data has been loaded (load k(1)774), each of which remains stationary (i.e., remains loaded) for a predetermined number of computations. This is followed by successive computations of m(0)×k(0)776and m(1)×k(0)778, loading of kernel data (load k(2)780) as a pre-fetching step, followed by successive computations of m(0)×k(1)782, m(1)×k(1)784and m(0)×k(2)786(with kernel data k(2) having been pre-fetched), storing a first output o(0) (store o(0)788), followed by loading of maps data (load m(2)790) as a pre-fetching step, computation of m(1)×k(2)792and storing a second output o(1) (store o(1)794), loading of maps data (load m(3)796) as a pre-fetching step, followed by three successive computations, a computation of m(2)×k(2)798(with maps data m(2) having been pre-fetched), a computation of m(3)×k(2)802(with maps data m(3) having been pre-fetched), and a computation of m(2)×k(1)804, followed by a repeated loading of kernel data (load k(0)806) as a pre-fetching step, and followed by three successive computations, a computation of m(3)×k(1)808, a computation of m(2)×k(0)810(with kernel data k(0) having been pre-fetched), and a computation of m(3)×k(0)812, and so on. It also should be noted that additional or repeated loading steps have been eliminated when double-buffering is implemented. Once a list of compute steps is created, ordered and pipelined, a code generation phase converts each load, compute and store object into a list of instructions or other executable code for the inference engine circuit architecture50or other accelerator, step580. Most of the computational structure was developed as a result from compute step creation and the ordering previously discussed. This code generation phase provides for instruction level optimizations, such as register assignment, branch delay slot filling, loop creation and unrolling. Each object (LO, CO, SO) has a corresponding function that creates corresponding instructions or other executable code. An example of generation of instructions or other executable code for a compute step is shown in part (D) ofFIG.13. In a representative embodiment, instructions are grouped into basic blocks (“BB”). A list of BB is created per MM accelerator circuit100. Each BB runs in sequence; thus the destination of any control flow instruction cannot be at any other BB. This way makes scanning instructions for potential optimizations and error checking bounded within a BB, rather than all or a fixed number of instructions. In a representative embodiment, instruction cache244is separated into two banks of512instructions. For each bank a load is needed to load instructions to other bank. 
In step585, the compiler400,500merges BBs together to form instruction banks (groups of512instructions), i.e., merges or groups all the instructions into these banks (e.g., sections of512), while ensuring that there is not a branch from one instruction bank to another. Some instructions that are not in a loop or if else condition from the following BB are moved to the instruction bank so that the instruction cache244is better utilized. Absence of branches across instruction banks is ensured by the BBs. As part of step585, as an option, at the beginning of each instruction bank, a load for the following bank is inserted, and at the end, a jump to a next bank is inserted, to align the section of useful instructions to512and to provide for movement to a next instruction bank. In MAC CO, the accumulate loop issues multiple instructions that accumulate the results before producing an output pixel. This is because not all data needed to produce an output value is contiguous. In the case of a CONV with kx and ky equal to three, three MAC instructions with three different addresses in the maps buffer105and kernel (weights) buffer125are needed. CO creates the accumulate loop that issues a MAC and increments each address by a constant. The MAC CO code generation function unrolls the accumulate loops if they are small enough. It also adds VMOVs in the case of CONV+ADD. If two consecutive output values need to be compared, as in CONV+MFM layers in LightCNN, then two sets of MAC instructions are created in sequence. A loop over all kernels in kernel (weights) buffer125, a y-loop and an x-loop are conditionally created, as part of this code generation. MAX CO and COPY CO also create those loops if needed. In step590of the compilation method, SO conditionally creates synchronize (“sync”) instructions to avoid data corruption. Load, vector compute and store instructions have variable latency and can be issued in parallel. In cases when there is a latency mismatch, a sync instruction is needed to avoid vector data hazards. The inference engine50has multiple compute clusters, each with an independent control core275, maps buffer105and kernel (weights) buffer125. In a representative embodiment, a single sync instruction can be used to synchronize all execution across the MV accelerator circuits115and/or MM accelerator circuits100. For example, if a MV accelerator circuit115and/or MM accelerator circuit100finishes producing half of the outputs in a CONV, it will wait at the sync instruction for the other MV accelerator circuits115and/or MM accelerator circuits100to also reach a barrier and complete the CONV layer before going to a second layer. In a representative embodiment, another alternative within the scope of the disclosure is to send different inputs for all MV accelerator circuits115and/or MM accelerator circuits100, in which case a barrier is not needed.
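Returning to the instruction bank packing of step585, the following sketch illustrates one way basic blocks might be packed into fixed-size banks with a slot reserved at the start of each bank for a load of the following bank and a slot at the end for a jump to the next bank. The bank size of512follows the text, while the NOP padding and the LOAD_BANK/JUMP_BANK mnemonics are hypothetical placeholders, not actual instructions of the inference engine circuit architecture50.

    # Illustrative packing of basic blocks (BBs) into fixed-size instruction banks.
    BANK_SIZE = 512

    def pack_banks(basic_blocks, bank_size=BANK_SIZE):
        """basic_blocks: list of lists of instruction strings; returns a list of banks."""
        usable = bank_size - 2                  # reserve one slot for the bank load, one for the jump
        banks, current = [], []
        for bb in basic_blocks:
            if len(bb) > usable:
                raise ValueError("basic block larger than an instruction bank")
            if len(current) + len(bb) > usable:
                banks.append(current)           # close the bank; no branch may cross it
                current = []
            current.extend(bb)
        if current:
            banks.append(current)
        finalized = []
        for i, body in enumerate(banks):
            pad = ["NOP"] * (usable - len(body))         # hypothetical alignment padding
            finalized.append([f"LOAD_BANK {i + 1}"]      # load of the following bank
                             + body + pad
                             + [f"JUMP_BANK {i + 1}"])   # jump to the next bank
        return finalized

In practice the final bank's load and jump would be omitted or redirected; the padding merely aligns the section of useful instructions to the bank size, as described above.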
As instructions are created, they are checked for inconsistencies or other errors, step595, such as inconsistencies in their arguments, e.g., an immediate being above a certain number of bits, a register value having overflowed, or special registers being used with instructions that do not support them, etc. As part of step595(or as a separate step), as an option, instructions are also labeled to determine some properties of the instruction, such as whether they cause write after write (“WAW”) or read after write (“RAW”) hazards on the scalar registers, whether they are in a branch delay slot, are redundant, are in nested loops, or have an immediate that will be modified (e.g., for address reallocation). As part of step595(or as a separate step), as an option, after a BB creation is finished, instruction optimizations are applied to the BB. For example, tensor or vector instructions (MAC/MAX) take a variable number of cycles to produce a result. Within these cycles, there are other operations occurring, such as loop control, conditional branches, buffer address increments and load instructions, for example and without limitation. If those operations were to take more cycles on average than such MAC/MAX latency, then the MV accelerator circuits115and/or MM accelerator circuits100could stall. As another example, read after write (RAW) may occur when an instruction reads from a register that was just written to in less than a predetermined number of cycles, such as a 4 cycle gap. In the case of RAW, the instruction will stall up to 4 cycles to get access to the required register.FIG.19illustrates an example of optimization or transformation of instructions or other executable code to account for execution latencies. A common situation is shown inFIG.19, where some instructions set some registers to have the maps buffer105and kernel (weights) buffer125addresses for a MAC/MAX. To avoid RAW between the register sets and the MAC/MAX instructions, some instructions for setting addresses are grouped together, using different registers, such that the following set of MAC/MAX instructions will not be stalled due to RAW. Also as illustrated inFIG.19, pre-loading of addresses into different registers solves this issue, provided there are enough registers. This instruction level transformation is useful for CONV/TCONV with a comparatively small kernel size, which has low MAC latency, or max-pool layers. For example, AlexNet's second max-pool layer reduced the execution time from 0.53 to 0.31 ms, and a TCONV with 3×3 kernel, 2×2 stride, 32×32×64 input, op=64 reduced the execution time from 2.202 to 1.498 ms. An array determines whether a register is currently available, and this further determines if instructions can or cannot use a particular register at that time. It also determines how far an instruction can be moved without affecting other instructions. Redundant instructions or dead code are eliminated. Branch delay slot filling follows a similar approach to RAW, in which potential independent scalar instructions inside a loop are moved into an empty delay slot. Following the error checking and optimizations of the instructions or other executable code in step595, the compilation method500may end, return step600.FIG.20illustrates an example of instructions or other executable code.
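The instruction level transformation illustrated inFIG.19may be modeled with a toy example: each instruction is a tuple of an operation, a destination register and source registers; a write-to-read gap of 4 cycles is assumed as in the text; and the stall count is compared for an interleaved order versus the grouped order that pre-loads addresses into different registers. The instruction tuples and the simple stall model are illustrative assumptions only.

    # A toy model of the RAW-stall issue: each instruction is (op, dest_reg, src_regs).
    RAW_GAP = 4   # assumed write-to-read latency, per the 4 cycle gap described above

    def count_stall_cycles(program, raw_gap=RAW_GAP):
        written_at = {}
        cycle = stalls = 0
        for op, dest, srcs in program:
            for src in srcs:
                if src in written_at:
                    wait = written_at[src] + raw_gap - cycle
                    if wait > 0:                 # RAW: stall until the register is ready
                        stalls += wait
                        cycle += wait
            if dest is not None:
                written_at[dest] = cycle
            cycle += 1
        return stalls

    interleaved = [("SET", "r1", []), ("MAC", None, ["r1"]),
                   ("SET", "r2", []), ("MAC", None, ["r2"])]
    grouped     = [("SET", "r1", []), ("SET", "r2", []),
                   ("MAC", None, ["r1"]), ("MAC", None, ["r2"])]

    print(count_stall_cycles(interleaved), count_stall_cycles(grouped))  # prints 6 and 2

Under these assumptions the grouped order stalls for fewer cycles, which mirrors the register pre-loading transformation shown inFIG.19.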
After code generation, weights data is arranged to make kernel loads contiguous. For example, in independent mode, each MAC190processes a different kernel, so each kernel (weights) buffer125of each MV accelerator circuit115and/or VV accelerator circuit150may have a group of 16 kernels, for example and without limitation. Each MV accelerator circuit115and/or MM accelerator circuit100has a group of 64 kernels, assuming kernel loads are in broadcast mode. If two groups of kernels fit in the kernel (weights) buffer125, then kernels 0 to 15 and 64 to 79 are arranged in sequence so that one load is needed. Bias values are attached at the beginning of each kernel. The memory circuit25contains all the data needed for execution. The memory layout reserves memory locations for temporary, intermediate results for layers, input, output, weights and instructions. The inference engine circuit architecture50accesses those locations to run. Memory layout, arranged weight data and instructions are saved in a file, which can be read by a decoding program to bypass recompilation of the same DNN model. Instructions that access memory25are labeled in the code generation phase, and a reallocation table is created and saved. It is possible to instantiate the inference engine circuit architecture50on different FPGA cards with one host processor75. In this case, a separate inference engine circuit architecture50object is created for each FPGA card. Different FPGAs can run different models or different inputs. The inference engine circuit architecture50may provide some configuration registers that enable an initial load instruction to populate the instruction cache244with the first set of instructions. Another register can be used to count the amount of data sent to and received from memory25. Software can be used to poll the output counter register to check whether processing has finished or not. Additional variations for the compiler system400and compilation method500are also available and may be included, as one or more options. A "quantize mode" is an optional variation or addition to the compilation method500and may be performed by the compiler system400. Data values in neural network models are typically quantized into fixed point representations to save or decrease the number of bytes transferred. The location of the fixed point is chosen ahead-of-time by the compiler. The fixed point location can be different for each layer in the model or different sections of the model to reduce degradation of accuracy due to floating to fixed point quantization. The compiler system400chooses the fixed point for weight values based on their distribution. The compiler system400uses one or more inputs of the model to choose the fixed point for input and output of each layer. Each layer output's fixed point location should match the following layer input's fixed point location. For each input, the compiler system400executes the model in floating point using the CPU, and keeps track of the best fixed point location for each layer. After reaching a stable point, the compiler system400saves the fixed point configuration of each layer to be used in future executions of that model. As an option, a quantize mode can be included in which the compiler system400uses some real inputs and measures the scale of the output of each layer in floating point mode. In this way the compiler system400can choose the right scale to use for each layer, enabling different quantization types for different layers (e.g., block floating point, where each layer is considered a block and has a fixed point for that layer).
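A hedged sketch of the per-layer fixed point selection just described: the number of integer bits is derived from the largest magnitude observed (for weights, from their distribution; for activations, from floating point runs on sample inputs), and the remaining bits of the word hold the fraction. The 16-bit word width, the use of a simple maximum statistic and the example layer statistics are assumptions for illustration, not the actual procedure of the compiler system400.

    # Illustrative per-layer fixed-point selection for a quantize mode.
    import math

    WORD_BITS = 16  # assumed fixed-point word width

    def fractional_bits(max_abs_value, word_bits=WORD_BITS):
        """Return the number of fractional bits for a signed fixed-point representation."""
        if max_abs_value <= 0:
            int_bits = 0
        else:
            int_bits = max(0, math.floor(math.log2(max_abs_value)) + 1)
        return word_bits - 1 - int_bits        # 1 bit reserved for the sign

    def choose_layer_formats(per_layer_max_abs):
        """per_layer_max_abs: {layer_name: max |value| seen in floating-point runs}."""
        return {layer: fractional_bits(m) for layer, m in per_layer_max_abs.items()}

    if __name__ == "__main__":
        observed = {"conv1": 3.7, "conv2": 0.9, "fc": 11.2}   # example statistics only
        print(choose_layer_formats(observed))   # {'conv1': 13, 'conv2': 15, 'fc': 11}

Consecutive layers would then have their output and input formats reconciled, as noted above, so that each layer output's fixed point location matches the following layer input's fixed point location.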
The compiler system400may also implement various different strategies to do the same job. For example, one choice that the compiler system400performs is the order in which loops are done, such as, for example: (1) loading a partition of the map data and then looping through different kernels to get the different output planes, i.e., the outer loop is the maps partitioning, the inner loop is the kernels partitioning (first original implementation); or (2) keeping the kernels in memory and looping through different maps partitions to get the different output parts for the same output planes, i.e., the outer loop is kernel partitioning, the inner loop is maps partitioning (kernel stationary mode). This choice can be made by an approximate calculation of the amount of data for loading maps partitions from memory25and kernels partitions from memory25, choosing the strategy that loads less data and is then, in theory, faster. As another option, the compiler system400can split the network into different layers and benchmark each layer with the two different strategies, and use the measured fastest choice for each layer, separately or independently of another layer. The compilation method500may include an optimizer loop to search which compilation options may be the best given the run-time profiling. The compilation method500can run each layer in a model separately or the entire model with a compilation option. It will store run-time measurements of that execution with the compilation options that are being evaluated. The compiler system400(compilation method500) then stores the options that gave the best results and applies them in future neural network executions. In kernel stationary mode, the output planes may need to be contiguous in memory, so with the kernel loop being the outer loop, and the output being stored in blocks, each outer loop iteration will store its output, which will be in output planes blocks. So the output will be organized as [output planes blocks] [height] [width] [planes inside planes block] or alternatively also [height] [output planes blocks] [width] [planes inside planes block]. This should be reordered correctly as [height] [width] [planes]. This can be done with an additional, separate layer. Additional strategies that can be included, as options, are:
(1) Saving directly as [height] [width] [planes] by performing a strided saving to memory25(instead of saving the entire generated block as is, it is divided into parts and the parts are scattered in memory25as required). This can be done when the parts, i.e., the number of output planes in each block, is a multiple of 64, with this being the minimum memory granularity.
(2) Saving directly as [height] [width] [planes] by rearranging data correctly in the MM accelerator circuit100buffers before storing. This requires loading an entire output row, filling the missing planes and storing that row back to memory, essentially interleaving the additional (rearranging) layer work with the calculations.
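A short NumPy sketch of the reordering described above, transforming an output laid out as [output planes blocks] [height] [width] [planes inside planes block] into [height] [width] [planes]; the dimensions are arbitrary example values and the function name is illustrative only.

    # Illustrative reordering of a kernel-stationary blocked output layout.
    import numpy as np

    def reorder_output(blocked, n_blocks, height, width, planes_per_block):
        """blocked: flat array stored as [blocks][height][width][planes_per_block]."""
        x = blocked.reshape(n_blocks, height, width, planes_per_block)
        x = np.transpose(x, (1, 2, 0, 3))          # -> [h][w][block][planes inside block]
        return x.reshape(height, width, n_blocks * planes_per_block)

    if __name__ == "__main__":
        blocks, h, w, p = 2, 4, 4, 64
        flat = np.arange(blocks * h * w * p, dtype=np.int32)
        out = reorder_output(flat, blocks, h, w, p)
        print(out.shape)   # (4, 4, 128)

The strided-saving and buffer-rearranging strategies listed above would avoid materializing this separate reordering step by scattering or rearranging the parts before they are written to memory25.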
A "progressive mode" may also be implemented as a variation of the "mixed mode" discussed above, in which either maps or kernel data may be maintained in their respective buffers105,125while the kernel or maps data may be loaded or reloaded, and which may be explained by an example. Suppose we have 6 input rows with a 3-row kernel, which will produce 4 output rows. One way to do this is:
Load rows 1-3 in bank 1;
Load rows 2-4 in bank 2;
Process bank 1 and output row 1;
Load rows 3-5 in bank 1;
Process bank 2 and output row 2;
Load rows 4-6 in bank 2;
Process bank 1 and output row 3;
Process bank 2 and output row 4.
In a "progressive mode", processing may instead be ordered as:
Load rows 1-3 in positions 1-3;
Load row 4 in position 4;
Process rows 1-3 (positions 1-3) and output row 1;
Load row 5 in position 1;
Process rows 2-4 (positions 2-4) and output row 2;
Load row 6 in position 2;
Process rows 3-5 (positions 3, 4, 1) and output row 3;
Process rows 4-6 (positions 4, 1, 2) and output row 4.
In the case of kernel stationary mode, where this loop will need to be repeated for different kernels, for the next kernel it will not restart from the beginning, but will start from the end in order not to reload something that is already in memory, so even kernel iterations will start outputting the first row, and odd kernel iterations will start outputting the last row, for example. In this progressive mode variation of the mixed mode, the data may be split or divided in a more fine-grained approach into rows in the maps buffer105or the kernel buffer125, and new rows of data may be loaded in the maps buffer105or the kernel buffer125while the MAC circuits190are executing using other rows of data, i.e., comparatively smaller blocks of data are being utilized and transferred, as a single row of data, rather than multiple rows, of the respective maps buffer105or the kernel buffer125. This mode may also be utilized in the comparisons previously discussed and illustrated.
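The progressive-mode schedule of this example can be reproduced by a small sketch that pre-fetches the next input row into the position freed by the sliding window while the current window is processed; the ring buffer of kernel rows plus one positions follows the example above, while the function name and string formatting are illustrative only.

    # Sketch reproducing the "progressive mode" schedule for a ring buffer of row positions.
    def progressive_schedule(n_input_rows, kernel_rows):
        positions = kernel_rows + 1                       # ring buffer of row slots
        n_output_rows = n_input_rows - kernel_rows + 1
        schedule = [f"Load rows 1-{kernel_rows} in positions 1-{kernel_rows}"]
        for out_row in range(1, n_output_rows + 1):
            prefetch = kernel_rows + out_row              # next input row, if any
            if prefetch <= n_input_rows:
                pos = (prefetch - 1) % positions + 1      # slot freed by the sliding window
                schedule.append(f"Load row {prefetch} in position {pos}")
            rows = f"{out_row}-{out_row + kernel_rows - 1}"
            pos_list = ", ".join(str((r - 1) % positions + 1)
                                 for r in range(out_row, out_row + kernel_rows))
            schedule.append(f"Process rows {rows} (positions {pos_list}) and output row {out_row}")
        return schedule

    if __name__ == "__main__":
        for line in progressive_schedule(n_input_rows=6, kernel_rows=3):
            print(line)

Running this with 6 input rows and a 3-row kernel prints the same load and process order listed above, including the wrap-around of positions for output rows 3 and 4.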
In a representative embodiment, as an option, all MAC circuits190in a VV accelerator circuit150execute the same MAC instruction in lockstep. Different computation patterns (stencils) may cause some of the MAC circuits190to be under-utilized. CONV layers have a different computation pattern in their padding regions. If padding zeros are not sent into the maps buffer105, then the computation window size is smaller in the corners of the input. In CONV layers with padding/corner cases run in independent mode, the input of the top corner case is not considered as a separate compute step and the MM accelerator circuit100can be disabled during processing of the top corner case. This saves a load compared to the naïve approach of considering the top corner as a separate compute step. In a representative embodiment, as an option, the compiler system400can divide the work across MM accelerator circuits100or MV accelerator circuits115along columns or rows, depending on the amount of data in the maps buffer105. As an example using 4 MM accelerator circuits100, if the maps buffer105contains data to produce less than 4 rows, then each row can be divided across the MM accelerator circuits100. For example, using 4 MM accelerator circuits100for each of the 3 rows is better than using 3 MM accelerator circuits100for 3 rows. When there is no padding, the distinction between rows and columns for workload distribution is not needed. A CONV k=1×1 with o=6×6 can be represented as o=36×1 and each MM accelerator circuit100produces 9 pixels. The column and row division trade-off comes from different computation patterns. Going back to the 3 rows example, now there is padding in the right-most and left-most pixels of each row. Using 3 MM accelerator circuits100for all 3 rows is not worse than using 1 MM accelerator circuit100for the corner cases of each row. This is more evident in TCONV layers with stride greater than 1, in which adjacent output pixels may require a different computation pattern. The compiler system400(compilation method500) approaches this in a top-down fashion. It starts with the largest possible groups of output pixels, which are then broken into smaller groups in the hope of creating a better distribution. Kernel indexes are the computation pattern to be matched, since weights are shared across VV accelerator circuits150. An output pixel is associated with a sequence of accessed kernel indexes. A group is a sequence of adjacent output pixels. Different output pixels in a group combine their kernel indexes to create a larger kernel index sequence. A set data structure is used to identify unique sequences of kernel indexes that can be distributed across MV accelerator circuits115. Transpose convolution has been changed by creating a memory25structure that will contain all the operations to do:
1. A transposed convolution is made of a list of output rows blocks to generate.
2. Each output rows block is made of several rows and a common set of parameters for that block, these parameters being the number of such blocks, the numbers of MV accelerator circuits115to use and a set of offsets for MV accelerator circuits115and the output.
3. Each row is made of several row parts.
4. Each row part is made of several pixels and a common set of parameters for that part, these parameters being the number of such parts, the numbers of MV accelerator circuits115to use and a set of offsets for MV accelerator circuits115and the output.
5. Each pixel contains the offsets into maps and kernels for generating that pixel.
FIG.21is a set of bar graphs illustrating (A) bandwidth measurements, (B) performance measurements, and (C) efficiency measurements for the inference engine circuit architecture50for different DNN models with instructions or other executable code generated by the compilation method500. The required bandwidth is illustrated with striped bars and the measured bandwidth is the solid bar. The inference engine circuit architecture50used in this work had 512 KB of kernel (weights) buffer125and 256 KB of maps buffer105, 4 KB of instruction cache244and 256 MACs190per MM accelerator circuit100. A representative inference engine circuit architecture50operating at 187 MHz was implemented using an AC510 having an HMC memory and a Xilinx KU060 FPGA. The performance achieved was measured for some DNN models as shown inFIG.21. The execution time did not account for linear layers. The input size selected generally was 3×224×224, while for LightCNN9 the input size was 1×128×128, for Inception-v3 the input size was 299×299×3, and for Linknet and styletransfer the input sizes were 256×256×3. Using an EX750 backplane, multiple AC510 cards were added. The measurements were run on 1 FPGA (1f) or 2 FPGAs (2f), using 1 input image (1i) or 2 images (2i) and using one MM accelerator circuit100(1c) or two MM accelerator circuits100(2c). For example, 1f1i2c means that 1 image was distributed into two MM accelerator circuits100within one FPGA, and 1f2i2c means that 1 image was processed by each MM accelerator circuit100on one FPGA. Efficiency is calculated as the ratio between measured execution time and expected execution time at peak performance. Memory bandwidth takes into account input, weights, output and instructions that are moved into/from the inference engine circuit architecture50. The maximum bandwidth that was achieved on one FPGA was 7 GB/s.
The bandwidth required for each layer is plotted in striped bars. The inference engine circuit architecture50was able to scale its performance across MM accelerator circuits100and FPGA cards. The 1f2i2c bandwidth requirement is higher because it had to send 2× the input data using the bandwidth provided by 1 FPGA. In 2f2i1c, 2 FPGAs provide more bandwidth, and thus it shows higher efficiency. 1f1i2c shows a 2× performance boost, as expected from using 2× more MACs on the same number of operations. The measured power consumption of one AC510 FPGA was 24 W, compared with 14 W for the Tegra TX1 and 154 W for the Titan-X. On AlexNet without linear layers, the performance per power consumed achieved in one AC510 FPGA was 3.8 Gops/W. Numerous advantages of the representative compiler400,500are readily apparent. The compiler400,500significantly reduces memory usage or bandwidth and increases data reuse, resulting in significant performance and efficiency of accelerator circuits such as the inference engine circuit architecture50. The compiler400,500merges or fuses neural network layers, including merger or fusion of different functions or operations into network layers, and provides for rearranging instructions or other executable code, all for additional performance and efficiency of accelerator circuits such as the inference engine circuit architecture50. The compiler400,500also provides for selection of various operating modes, such as cooperative modes, independent modes, mixed cooperative and independent modes, in addition to selection of pipelining modes. Representative embodiments provide a highly intelligent solution to the complexities of compiling potentially millions of lines of code for a DNN. The bandwidth determinations alone, for innumerable nodes across innumerable layers of a DNN, including all of the potential millions upon millions of combinations and permutations of data movement and computation movement, plus merging of layers, simply cannot be performed both accurately and within a practical or reasonable period of time without the representative automated compilation method. The representative embodiments automate the compilation process to produce a tangible result, namely, reducing memory usage or bandwidth and increasing data reuse, resulting in significant performance and efficiency of accelerator circuits such as the inference engine circuit architecture50. As a further result, the representative embodiments improve the functioning of accelerator circuits, eliminating the prior art computational bottleneck of data transfers from memory circuits, decreasing the memory requirements, and further serving to decrease the load of the various system components. This improvement of the performance and efficiency of accelerator circuits such as the inference engine circuit architecture50further allows deployment of such accelerator circuits in new environments, especially those requiring immediate or other time-sensitive results, such as for artificial intelligence applications, including image recognition, autonomous vehicles, self-driving cars, for example and without limitation. As used herein, a "processor"75or control core275may be any type of processor or controller, and may be embodied as one or more processor(s)75,275configured, designed, programmed or otherwise adapted to perform the functionality discussed herein.
As the term processor or controller is used herein, a processor75may include use of a single integrated circuit (“IC”), or may include use of a plurality of integrated circuits or other components connected, arranged or grouped together, such as controllers, microprocessors, digital signal processors (“DSPs”), array processors, graphics or image processors, parallel processors, multiple core processors, custom ICs, application specific integrated circuits (“ASICs”), field programmable gate arrays (“FPGAs”), adaptive computing ICs, associated memory (such as RAM, DRAM and ROM), and other ICs and components, whether analog or digital. As a consequence, as used herein, the term processor or controller should be understood to equivalently mean and include a single IC, or arrangement of custom ICs, ASICs, processors, microprocessors, controllers, FPGAs, adaptive computing ICs, or some other grouping of integrated circuits which perform the functions discussed herein, with associated memory, such as microprocessor memory or additional RAM, DRAM, SDRAM, SRAM, MRAM, ROM, FLASH, EPROM or EPROM. A processor75or control core275, with associated memory, may be adapted or configured (via programming, FPGA interconnection, or hard-wiring) to perform the methodology of the invention, as discussed herein. For example, the methodology may be programmed and stored, in a processor75or control core275with its associated memory (and/or memory25) and other equivalent components, as a set of program instructions or other code (or equivalent configuration or other program) for subsequent execution when the processor75or control core275is operative (i.e., powered on and functioning). Equivalently, when the processor75or control core275may implemented in whole or part as FPGAs, custom ICs and/or ASICs, the FPGAs, custom ICs or ASICs also may be designed, configured and/or hard-wired to implement the methodology of the invention. For example, the processor75or control core275may be implemented as an arrangement of analog and/or digital circuits, controllers, microprocessors, DSPs and/or ASICs, collectively referred to as a “processor” or “controller”, which are respectively hard-wired, programmed, designed, adapted or configured to implement the methodology of the invention, including possibly in conjunction with a memory25. 
The memory circuit25,25A, maps buffer105, kernel buffer125, and other registers or memory herein, which may include a data repository (or database), may be embodied in any number of forms, including within any computer or other machine-readable data storage medium, memory device or other storage or communication device for storage or communication of information, currently known or which becomes available in the future, including, but not limited to, a memory integrated circuit (“IC”) (for memory25), or memory portion of an integrated circuit (such as the resident memory within a processor75or control core275or processor IC, or such as maps buffer105, kernel buffer125, and other registers or memory herein), whether volatile or non-volatile, whether removable or non-removable, including without limitation RAM, FLASH, DRAM, SDRAM, SRAM, MRAM, FeRAM, ROM, EPROM or EPROM, or any other form of memory device, such as a magnetic hard drive, an optical drive, a magnetic disk or tape drive, a hard disk drive, other machine-readable storage or memory media such as a floppy disk, a CDROM, a CD-RW, digital versatile disk (DVD) or other optical memory, or any other type of memory, storage medium, or data storage apparatus or circuit, which is known or which becomes known, depending upon the selected embodiment. The memory circuit25,25A, maps buffer105, kernel buffer125, and other registers or memory herein may be adapted to store various look up tables, parameters, coefficients, other information and data, programs or instructions, and other types of tables such as database tables. As indicated above, the processor75or control core275is hard-wired or programmed, using software and data structures of the invention, for example, to perform the methodology of the present invention. As a consequence, the system and related methods of the present invention, including the various instructions of a configuration memory, may be embodied as software which provides such programming or other instructions, such as a set of instructions and/or metadata embodied within a non-transitory computer readable medium, discussed above. In addition, metadata may also be utilized to define the various data structures of a look up table or a database. Such software may be in the form of source or object code, by way of example and without limitation. Source code further may be compiled into some form of instructions or object code (including assembly language instructions or configuration information). The software, source code or metadata of the present invention may be embodied as any type of code, such as C, C++, Matlab, SystemC, LISA, XML, Java, Brew, SQL and its variations (e.g., SQL 99 or proprietary versions of SQL), DB2, Oracle, or any other type of programming language which performs the functionality discussed herein, including various hardware definition or hardware modeling languages (e.g., Verilog, VHDL, RTL) and resulting database files (e.g., GDSII). As a consequence, a “construct”, “program construct”, “software construct” or “software”, as used equivalently herein, means and refers to any programming language, of any kind, with any syntax or signatures, which provides or can be interpreted to provide the associated functionality or methodology specified (when instantiated or loaded into a processor or computer and executed, including the processor75or control core275, for example). 
The software, metadata, or other source code of the present invention and any resulting bit file (object code, database, or look up table) may be embodied within any tangible, non-transitory storage medium, such as any of the computer or other machine-readable data storage media, as computer-readable instructions, data structures, program modules or other data, such as discussed above with respect to the memory circuit25, maps buffer105, kernel buffer125, and other registers or memory herein, e.g., a floppy disk, a CDROM, a CD-RW, a DVD, a magnetic hard drive, an optical drive, or any other type of data storage apparatus or medium, as mentioned above. The communication interface45,45A is utilized for appropriate connection to a relevant channel, network or bus; for example, the communication interface45,45A may provide impedance matching, drivers and other functions for a wireline or wireless interface, may provide demodulation and analog to digital conversion for a wireless interface, and may provide a physical interface, respectively, for the processor75or control core275and/or memory circuit25,25A, with other devices. In general, the communication interface45,45A is used to receive and transmit data, depending upon the selected embodiment, such as program instructions, parameters, configuration information, control messages, data and other pertinent information. The communication interface45,45A may be implemented as known or may become known in the art, to provide data communication to and from the inference engine circuit architecture50and any type of network or external device, such as wireless, optical, or wireline, and using any applicable standard (e.g., one of the various PCI, USB, RJ 45, Ethernet (Fast Ethernet, Gigabit Ethernet, 300ase-TX, 300ase-FX, etc.), IEEE 802.11, Bluetooth, WCDMA, WiFi, GSM, GPRS, EDGE, 3G and the other standards and systems mentioned above, for example and without limitation), and may include impedance matching capability, voltage translation for a low voltage processor to interface with a higher voltage control bus, wireline or wireless transceivers, and various switching mechanisms (e.g., transistors) to turn various lines or connectors on or off in response to signaling from processor75or control core275. In addition, the communication interface45,45A may also be configured and/or adapted to receive and/or transmit signals externally to the inference engine circuit architecture50and/or compiler400, such as through hard-wiring or RF or infrared signaling, for example, to receive information in real-time for output on a display, for example. The communication interface45,45A may provide connection to any type of bus or network structure or medium, using any selected architecture. By way of example and without limitation, such architectures include Industry Standard Architecture (ISA) bus, Enhanced ISA (EISA) bus, Micro Channel Architecture (MCA) bus, Peripheral Component Interconnect (PCI) bus, SAN bus, or any other communication or signaling medium, such as Ethernet, ISDN, T1, satellite, wireless, and so on. The present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated. In this respect, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of components set forth above and below, illustrated in the drawings, or as described in the examples. 
Systems, methods and apparatuses consistent with the present invention are capable of other embodiments and of being practiced and carried out in various ways. Although the invention has been described with respect to specific embodiments thereof, these embodiments are merely illustrative and not restrictive of the invention. In the description herein, numerous specific details are provided, such as examples of electronic components, electronic and structural connections, materials, and structural variations, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, components, materials, parts, etc. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention. In addition, the various Figures are not drawn to scale and should not be regarded as limiting. Reference throughout this specification to “one embodiment”, “an embodiment”, or a specific “embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention and not necessarily in all embodiments, and further, are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any specific embodiment of the present invention may be combined in any suitable manner and in any suitable combination with one or more other embodiments, including the use of selected features without corresponding use of other features. In addition, many modifications may be made to adapt a particular application, situation or material to the essential scope and spirit of the present invention. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered part of the spirit and scope of the present invention. For the recitation of numeric ranges herein, each intervening number there between with the same degree of precision is explicitly contemplated. For example, for the range of 6-9, the numbers 7 and 8 are contemplated in addition to 6 and 9, and for the range 6.0-7.0, the number 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, and 7.0 are explicitly contemplated. In addition, every intervening sub-range within range is contemplated, in any combination, and is within the scope of the disclosure. For example, for the range of 5-10, the sub-ranges 5-6, 5-7, 5-8, 5-9, 6-7, 6-8, 6-9, 6-10, 7-8, 7-9, 7-10, 8-9, 8-10, and 9-10 are contemplated and within the scope of the disclosed range. It will also be appreciated that one or more of the elements depicted in the Figures can also be implemented in a more separate or integrated manner, or even removed or rendered inoperable in certain cases, as may be useful in accordance with a particular application. Integrally formed combinations of components are also within the scope of the invention, particularly for embodiments in which a separation or combination of discrete components is unclear or indiscernible. 
In addition, use of the term “coupled” herein, including in its various forms such as “coupling” or “couplable”, means and includes any direct or indirect electrical, structural or magnetic coupling, connection or attachment, or adaptation or capability for such a direct or indirect electrical, structural or magnetic coupling, connection or attachment, including integrally formed components and components which are coupled via or through another component. With respect to signals, we refer herein to parameters that “represent” a given metric or are “representative” of a given metric, where a metric is a measure of a state of at least part of the regulator or its inputs or outputs. A parameter is considered to represent a metric if it is related to the metric directly enough that regulating the parameter will satisfactorily regulate the metric. A parameter may be considered to be an acceptable representation of a metric if it represents a multiple or fraction of the metric. Furthermore, any signal arrows in the drawings/Figures should be considered only exemplary, and not limiting, unless otherwise specifically noted. Combinations of components of steps will also be considered within the scope of the present invention, particularly where the ability to separate or combine is unclear or foreseeable. The disjunctive term “or”, as used herein and throughout the claims that follow, is generally intended to mean “and/or”, having both conjunctive and disjunctive meanings (and is not confined to an “exclusive or” meaning), unless otherwise indicated. As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Also as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. The foregoing description of illustrated embodiments of the present invention, including what is described in the summary or in the abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein. From the foregoing, it will be observed that numerous variations, modifications and substitutions are intended and may be effected without departing from the spirit and scope of the novel concept of the invention. It is to be understood that no limitation with respect to the specific methods and apparatus illustrated herein is intended or should be inferred. It is, of course, intended to cover by the appended claims all such modifications as fall within the scope of the claims.
136,298
11861338
DETAILED DESCRIPTION FIG.1illustrates a system100configured to control configurations of deployments of sets of enterprise software applications135to users. For consumer software, an individual installation of a specific version of a software application, installed on one or more particular computing devices, may be common, but this model and/or mechanism of distribution and/or installation may not work well, or may not be adequate and/or flexible enough for sets of enterprise software applications135. Enterprise software applications135may be distributed among enterprises, corporate clients, and/or other groups of employees or other people interacting and/or working together. As used herein, a corporate client may refer to a group of people working together and/or sharing some responsibilities and/or goals as a group. For example, a corporate client may refer to a corporation, a company, a business, an enterprise, a government entity, a partnership, an organization, and/or another group of people working together and/or sharing some responsibilities and/or goals as a group. In some implementations, a corporate client may include and/or form a legal entity, or be associated with a legal entity. As used herein, an instance of an enterprise software application may simply be referred to as an enterprise software application (or software application). Enterprise software applications135may include executable code of (machine-readable) instructions that form a program. In some implementations, executable code and/or instructions may be executed by a processor to perform one or more particular features, tasks, and/or functionality. As used here, a processor is a machine and not a person. In some implementations, execution by a processor may include execution by a machine that is assisted, helped, controlled, managed, and/or otherwise jointly operated by a person. Enterprise software applications135may include a first software application135a, a second software application135b, a third software application135c, a fourth software application135d, and so forth. In some implementations, multiple enterprise software applications may be interconnected and/or otherwise combined to form more elaborate software applications or perform more elaborate functions than the individual software applications. For example, in some implementations, multiple software applications may be combined to form one or more pipelines of software applications. For example, in a software pipeline, the output and/or result produced and/or generated by first software application135amay subsequently be used as input and/or source for second software application135b, and so forth. Referring toFIG.1, in some implementations, system100may include one or more servers102, deployment server(s)134, client computing platform(s)104, user interface(s)128, and/or external resources132. Server(s)102may be configured to communicate with one or more client computing platforms104or deployment servers134according to a client/server architecture and/or other architectures. Client computing platform(s)104may be configured to communicate with other client computing platforms via server(s)102and/or according to a peer-to-peer architecture and/or other architectures. In some implementations, users127may access system100via client computing platform(s)104and/or user interface(s)128. In some implementations, users127may access system100via user interfaces128. 
Users127may include a first user, a second user, a third user, a fourth user, and/or other users. One or more of users127may be administrative users, such as a first administrative user, a second administrative user, a third administrative user, and so forth. An administrative user may deploy a particular set of enterprise software applications135(also referred to as a “suite”) on one or more deployment servers134. By virtue of the systems and methods described in this disclosure, the administrative user may configure such a deployment, modify the configuration of a deployment, and/or perform other tasks related to the use of a deployment or a deployment server134. In some implementations, one or more sets of users may be organized under one or more corporate clients. For example, a first set of users may be organized under a first corporate client, e.g., as the employees of the first corporate client. In some implementations, one or more sets of users may be organized under one or more organizational subdivisions of an individual corporate client. For example, a second set of users may be organized under a first subdivision of the first corporate client. As used herein, organizational subdivisions may be (based on) groups of employees (e.g., a research group, or the junior associates), departments (e.g., a compliance department), locations (e.g., the San Francisco office), and/or other entities within corporate clients or legal entities. In some implementations, an administrative user may be associated with one or more corporate clients and/or one or more organizational subdivisions of a corporate client. In some implementations, a particular deployment of a suite may be specific to a particular corporate client, a particular organization subdivision of an individual corporate client, or to a group of people. In some implementations, individual ones of users127may be associated with individual client computing platforms104. For example, a first user may be associated with a first client computing platform104, a second user may be associated with a second client computing platform104, and so forth. In some implementations, individual user interfaces128may be associated with individual client computing platforms104. For example, a first user interface128may be associated with a first client computing platform104, a second user interface128may be associated with a second client computing platform104, and so forth. Server(s)102may be configured by machine-readable instructions106. Machine-readable instructions106may include one or more instruction components. The instruction components may include computer program components. The instruction components may include one or more of a storage component108, a deployment component110, a patch component112, a modification component114, a notification component116, a monitoring component118, a presentation component120, and/or other instruction components. Storage component108may be configured to electronically store information. In some implementations, storage component108may be configured to electronically store information in electronic storage130. Stored information may include one or more sets of software applications135, including but not limited to a particular set of enterprise software applications135. Stored information may include executable code of software applications. Stored information may include binary code to install software applications. Stored information may include executable code to install software applications. 
Stored information may include installed software applications that are executable by users127. By way of non-limiting example, the software applications may include one or more of first software application135a, second software application135b, third software application135c, fourth software application135d, and so forth. In some implementations, the software applications may be organized in different sets and/or subsets, which may in some cases overlap, and in some cases be mutually exclusive. In some implementations, particular sets of interconnected individual software applications may form software pipelines. In some implementations, sets of interoperating individual software applications may form software pipelines. In some implementations, the stored information may include one or more configuration databases137, including but not limited to a particular configuration database137a. Configuration database137amay include a set of deployment-specific configuration settings and corresponding setting values that define a deployment on a particular deployment server134of set of enterprise software applications135. As used herein, the term “deployment-specific” may refer to a particular deployment (of software applications) on a particular deployment server134. The set of deployment-specific configuration settings include one or more of (a) connection parameters to control connections between configuration database137a, set of enterprise software applications135, and/or particular deployment server134, (b) environment variables, (c) resource parameters to control available computational resources and available storage resources, (d) one or more infrastructure parameters that control one or more of a filesystem, one or more databases, and a cluster of particular deployment server134, and/or other deployment-specific configuration settings. In some implementations, the set of deployment-specific configuration settings includes one or more parameters that control individual software applications (e.g., which version is currently the default version, or which version is to be used in a particular type of software pipeline) and/or individual software pipelines (e.g., which particular versions of software applications to include or combine in a particular software pipeline). In some implementations, the set of deployment-specific configuration settings includes one or more parameters that control a Kubernetes-based platform (not depicted inFIG.1), or other container orchestration platforms. By way of non-limiting example, one or more of these parameters may be related to mounting a file system onto a Kubernetes cluster, e.g., through a volumeMount configuration block. Kubernetes supports different types of volumes for storage, including but not limited to ephemeral volumes, persistent volumes, and/or other types of volumes. By way of non-limiting example, one or more of these parameters may be related to Central Processing Unit (CPU) or memory allocation to a container running in a particular Kubernetes cluster, or to the number of replicas of a particular container to run in the particular Kubernetes cluster. Deployment component110may be configured to effectuate the deployment of set of enterprise software applications135on one or more deployment servers134(e.g., on a first deployment server134). In some implementations, deployment component110may deploy set of enterprise software applications135on a particular deployment server134. 
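Before turning to how deployment component110uses these settings, the following is a minimal, hedged sketch of what a configuration database along the lines of configuration database137amight contain. The setting names, values, and helper function are hypothetical illustrations of the categories discussed above (connection parameters, environment variables, resource parameters, infrastructure parameters, and container-orchestration parameters); they are not taken from any particular implementation.

```python
# Hedged sketch only: hypothetical setting names and values illustrating the
# categories of deployment-specific configuration settings described above.
configuration_database_137a = {
    # (a) connection parameters between the configuration database, the set of
    #     enterprise software applications, and the deployment server
    "connection.config_db_url": "postgres://config-db.internal:5432/deployments",
    # (b) environment variables
    "env.CLOUD_SERVICES_URL": "https://cloud-services.example.com",
    # (c) resource parameters (available compute and storage)
    "resources.cpu_limit": "4",
    "resources.memory_limit": "16Gi",
    "resources.storage_quota": "500Gi",
    # (d) infrastructure parameters (filesystem, databases, cluster)
    "infrastructure.filesystem.mount_path": "/mnt/shared",
    "infrastructure.cluster.name": "deployment-cluster-1",
    # container-orchestration parameters, e.g., for a Kubernetes-based platform
    "kubernetes.volume_mount.name": "shared-data",
    "kubernetes.container.cpu_request": "500m",
    "kubernetes.container.replicas": "3",
}


def setting_value(settings: dict, name: str, default=None):
    """Return the value of a deployment-specific configuration setting."""
    return settings.get(name, default)
```

The flat key namespace in this sketch merely stands in for whatever schema a given configuration database uses.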
In some implementations, deployment may include storing and/or installing software applications such that users can access and/or execute the software applications on their client computing platforms104(in other words, the particular deployment server134is accessible by client computing platforms104that are associated with the users). Deployment may include installing, setting up, and/or configuring the particular deployment server134such that client computing platforms104execute the software applications through the particular deployment server134(e.g., the front-end and/or user interaction for a particular software application may be executed on a client computing platform104, while the back-end and/or resource-intensive operations (e.g., in terms of one or more of memory or storage usage, computation, bandwidth, file handles, network sockets, etc.) may be executed on the particular deployment server134). The users may interact with the software applications through user interfaces128associated with client computing platforms104. Deployment by deployment component110may be based on a set of deployment-specific configuration settings and corresponding setting values, e.g., as included in configuration database137a. A particular deployment on particular deployment server134may be in accordance with the set of deployment-specific configuration settings and corresponding setting values that are included in configuration database137a. In some implementations, the configuration settings that control operation of a particular deployment are part of that deployment, and may be not only included in the deployment, but accessible and modifiable as well. Note that set of enterprise software applications135may include multiple versions of the same software application. By way of non-limiting example,FIG.4depicts multiple exemplary software pipelines40including multiple software applications, as may be used by system100. As depicted, software pipelines40include software applications41a,41b,41c,41d,41e,41f,41g,41h,41i,41J,41K, and41L, labeled “A1”, “A2”, “A3”, “B1”, “B2”, “B3”, “C1”, “C2”, “C3”, “D1”, “D2”, and “D3” as shown, respectively. Software application41a(labeled “A1”), software application41b(labeled “A2”), and software application41c(labeled “A3”) may be different versions of the same software application, such that A1is the oldest version, A2is newer than A1, and A3is newer than A2. Similarly, software applications41d,41e,41fmay be different versions of a different software application, software applications41g,41h,41imay be different versions of yet another software application, and software applications41J,41K,41L may be different versions of a fourth software application. Accordingly,FIG.4may depict 81 distinct software pipelines (or at least possible software pipelines). In some implementations, a set of software pipelines as depicted may be included in the same stored executable code, so that all distinct software pipelines are available at the same time, to multiple users, without requiring installations or re-installations of any software applications. In some implementations, a single deployment of the stored executable code supports execution of all distinct software pipelines at the same time. For example, the same user may launch different pipelines at the same time (say, a first software pipeline and a second software pipeline) such that output generated by each of the different pipelines is presented to the same user at the same time. 
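The combinatorial point made forFIG.4can be illustrated with a short sketch: with three stored versions of each of four applications, every choice of one version per application yields a distinct pipeline, for 3^4 = 81 possibilities. The version labels below mirror the A1 through D3 labels ofFIG.4and are otherwise illustrative.

```python
from itertools import product

# Three stored versions of each of the four applications, mirroring the
# "A1".."D3" labels used for FIG. 4 (labels here are illustrative only).
versions = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3"],
    "D": ["D1", "D2", "D3"],
}

# Each choice of one version per application is a distinct (possible) pipeline
# in which the output of A feeds B, B feeds C, and C feeds D.
pipelines = list(product(versions["A"], versions["B"], versions["C"], versions["D"]))

print(len(pipelines))  # 81
print(pipelines[0])    # ('A1', 'B1', 'C1', 'D1')
```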
Referring toFIG.1, patch component112may be configured to obtain and/or otherwise retrieve one or more databases, including but not limited to configuration databases, modification databases, and/or other databases. As used herein, a modification database may be referred to as a patch or as a “configuration-modification database”. For example, as depicted inFIG.1, configuration databases137may include, by way of non-limiting example, a first modification database137b, a second modification database137c, and/or other configuration databases. In some implementations, patch component112may obtain multiple modification databases (e.g., first modification database137band second modification database137c). Individual ones of the multiple modification databases include one or more modification-specific configuration settings and one or more corresponding setting values. The multiple modification databases may be organized in a particular order. For example, according to the particular order, first modification database137bmay be ahead of second modification database137c(for modifications by modification component114). Modification component114may be configured to modify deployments of sets of software applications, including but not limited to set of enterprise software applications135. In some implementations, modification component114may modify one or more deployment servers134(e.g., a particular deployment server134). For example, modification component114may add a deployment-specific configuration setting (and set it to a corresponding setting value) that was previously unknown and/or otherwise not set or used for a particular deployment. For example, modification component114may modify a deployment-specific configuration setting that was previously set and/or otherwise used by deployment component110for a particular deployment. By way of non-limiting example, assume a particular deployment (as deployed by deployment component110) uses (i) a first connection parameter that controls connections between set of enterprise software applications135and particular deployment server134, (ii) a first environment variable for using cloud-based services, (iii) a first resource parameter that controls storage resources available to set of enterprise software applications135, and (iv) a first infrastructure parameter that controls a particular filesystem available to set of enterprise software applications135. In some implementations, modification component114may modify one or more of these four parameters and/or variables for the particular deployment. In some implementations, modification component114may modify all of these four parameters and/or variables for the particular deployment. Modification component114may be configured to use individual ones of multiple modification databases according to a particular order, i.e., the particular order in which the multiple modification databases are organized as described in relation to patch component112. For example, modification component114may modify a deployment (e.g., as deployed by deployment component110) based on configuration database137aby first making modifications based on first modification database137b, followed by making modifications based on second modification database137c. The particular order of the multiple modification databases is maintained by modification component114. 
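The ordered use of modification databases just described can be illustrated with a minimal sketch; the worked URL example in the following paragraphs follows the same pattern. The function and setting names here are hypothetical, and each modification database is reduced, for illustration only, to a mapping of setting names to values.

```python
def materialize(base_configuration: dict, modification_databases: list) -> dict:
    """Apply modification databases to a base configuration in their particular order.

    Later modification databases override earlier ones for any setting they both
    contain, so the most recently applied value for a setting wins.
    """
    materialized = dict(base_configuration)
    for modifications in modification_databases:  # order is preserved
        materialized.update(modifications)
    return materialized


# Hypothetical values in the spirit of the URL example discussed below.
base = {"env.CLOUD_SERVICES_URL": "https://first.example.com"}
first_patch = {"env.CLOUD_SERVICES_URL": "https://second.example.com"}
second_patch = {"env.CLOUD_SERVICES_URL": "https://third.example.com"}

state = materialize(base, [first_patch, second_patch])
assert state["env.CLOUD_SERVICES_URL"] == "https://third.example.com"

# A later modification database that restores the previous value effectively
# rolls back the most recent change without redeploying anything.
rollback = {"env.CLOUD_SERVICES_URL": "https://second.example.com"}
state = materialize(base, [first_patch, second_patch, rollback])
assert state["env.CLOUD_SERVICES_URL"] == "https://second.example.com"
```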
For example, assume configuration database137aincludes a first environment variable for using cloud-based services, which is set according to its corresponding setting value (e.g., a first Uniform Resource Locator or URL) to link to a first particular cloud-based server. Assume first modification database137bincludes the same first environment variable for using cloud-based services, but with a second setting value of a second URL. Assume second modification database137cincludes the same first environment variable for using cloud-based services, but with a third setting value of a third URL. By making modifications in the particular order as described, the first environment variable will be set to the third URL after these available modifications are finalized. In some implementations, modifications by modification component114may be made according to the particular order such that individual modification-specific configuration settings included in first modification database137bare modified ahead of individual modification-specific configuration settings included in second modification database137c. In some implementations, a new deployment-specific configuration setting in a modification database (e.g., in first modification database137b) may be added to a deployment. By modifying or adding deployment-specific configuration settings, modification component114may create (the state of) a new configuration database that controls and/or defines the current configuration of a particular deployment on particular deployment server134(i.e., this controls the operations of the particular deployment). In other words, modification component114creates the state of the current configuration of a particular deployment on particular deployment server134. This new configuration database or this state may be referred to as the “materialized configuration table” or the “final state of the control plane”. In some implementations, modifications by modification component114may be made such that the particular deployment on the particular deployment server134continues to be accessible by client computing platforms104. In some implementations, modifications by modification component114may be made without taking down the deployment or redeploying set of enterprise software applications135on the particular deployment server134. Alternatively, and/or simultaneously, modifications by modification component114may be made without restarting, resetting, or rebooting the particular deployment server134. In some cases, only affected software applications may need to be restarted. In some implementations, modification component114may be configured to allow the most recent modification of a particular deployment-specific configuration setting to be undone, or “rolled-back”. For example, modification component114may modify the same first environment variable for using cloud-based services (as described above) through a third modification database, and undo the most recent change. Accordingly, the first environment variable will be set to the second URL after the modifications included in the third modification database are finalized. This mechanism may be referred to as “preserving” a rollback for the first environment variable. Notification component116may be configured to generate, transfer, and/or present notifications to users127. For example, notification component116may present a notification (or otherwise notify) an administrative user regarding a particular deployment on a particular deployment server134. 
For example, notification component116may present a notification (or otherwise notify) an administrative user regarding modifications of a particular deployment based on one or more modification databases. In some implementations, notifications may be triggered by and/or based on operations of other components of system100, including but not limited to monitoring component118. Monitoring component118may be configured to monitor deployment servers134, e.g., while being used by users127. Monitoring component118may monitor usage of a particular deployment, including but not limited to rates of usage of different resources, such as memory, storage, computation, bandwidth, file handles, network sockets, etc. In some implementations, monitoring component118may determine whether a particular usage (or rate of usage) is outside of a preferred range for a particular resource. In some implementations, determinations by monitoring component118may trigger and/or otherwise form the basis for a notification by notification component116. Presentation component120may be configured to present user interfaces128to users127, through their client computing platforms104. In some implementations, users127may access and/or otherwise use set of enterprise software applications135through user interfaces128. For example, a particular deployment server134may be accessible by client computing platforms104through a particular URL. In some implementations, all or most of a particular software application may be executed on client computing platforms104(including, at least, the front-end). In some implementations, all or most of a particular software application may be executed on particular deployment server134(including, at least, the back-end). Users127may interact with set of enterprise software applications135through user interfaces128. By way of non-limiting example,FIG.3Adepicts a set30aof exemplary software pipelines, such that each as depicted includes versions of four applications (labeled “Application A”, “Application B”, “Application C”, and “Application D”, which, in some implementations, may correspond to first software application135a, second software application135b, third software application135c, and fourth software application135das depicted inFIG.1). The columns inFIG.3Adepict different applications, and the rows depict different versions of those applications. The current version may be indicated by a number “n”. As depicted, a first software pipeline31aincludes multiple software applications, in particular versions “n−2” of Application A, Application B, Application C, and Application D. In some implementations, first software pipeline31amay be referred to by its components, for example as follows: A(n−2)B(n−2)C(n−2)D(n−2). In some implementations, first software pipeline31amay collectively be referred to by some indicator and/or name (e.g., a release name). For example, first software pipeline31amay be referred to as Software Pipeline X. Variations may be named based on the differences with Software Pipeline X. For example, a variation of Software Pipeline X in which version n−1 of Application A is used could be referred to as Software Pipeline X−A(n−1). A second software pipeline31bmay include different versions of these software applications, in particular A(n−1)B(n)C(n−1)D(n−2). In some implementations, second software pipeline31bmay collectively be referred to by some indicator and/or name (e.g., a release name). For example, second software pipeline31bmay be referred to as Software Pipeline Y. 
In some implementations, updated versions of Software Pipeline Y may be referred to by some indicator and/or name that references the differences with Software Pipeline Y. In some implementations, numbers such as “n”, “n−1”, “n−2” may include or refer to a particular date (e.g., the release date for that version of an application and/or software pipeline). Alternatively, and/or simultaneously, version indicators of software applications may increase over time, e.g., based on one or more points of origin. For example, the particular versions used for a named software pipeline (such as Software Pipeline X) may be referred to by an indicator or number based on that name, and may serve as a point of origin in the naming scheme. For example, Software Pipeline X may be defined to include A(x)B(x)C(x)D(x). In some implementations, a variation of Software Pipeline X in which version “n−1” of Application A is used could be referred to as Software Pipeline X−A(x+1), or as Software Pipeline X−A1. A third software pipeline31cmay include different versions of these software applications, in particular A(n)B(n)C(n+1)D(n). For example, version “n+1” may refer to a beta version of an application. If the software pipeline using the current versions is referred to as Software Pipeline Z, then third software pipeline31cmay be referred to as Software Pipeline Z-C1. By way of non-limiting example,FIG.3Bdepicts a set30bof exemplary software pipelines, such that each as depicted includes versions of four applications (labeled “Application A”, “Application B”, “Application C”, and “Application D”). The columns inFIG.3Bdepict different applications, and the rows depict different versions of those applications, such that newer versions are placed below older versions. In other words, time progresses as indicated on the left side ofFIG.3B, and the version of Application A used in a first software pipeline32ais older than the version of Application A used in a second software pipeline32b, which is older than the version of Application A used in a third software pipeline32c. The naming convention for versions of Application A can use one or more of release dates, incremental numbers, major release names, and/or other (alphanumerical) names. As depicted inFIG.3B, the naming convention for versions of Application A may be independent of (or even unrelated to) the naming conventions for versions of Application B, Application C, and/or Application D. For example, development of these software applications may be independent (e.g., by different corporate entities). In some implementations, software pipelines may use a naming convention as well, and this naming convention may be independent of the naming convention for individual software applications. For example, first software pipeline32amay be named “Rosebud”, second software pipeline32bmay be named “Nairobi”, and third software pipeline32cmay be named “Dragon”. In some implementations, variations of these software pipelines may be named based on the difference with a named software pipeline. Referring toFIG.1, presentation component120may be configured to present information to users127. Presented information may include output generated by software applications and/or software pipelines. In some implementations, information may be presented on client computing platforms104. In some implementations, information may be presented through user interfaces128. 
In some implementations, output generated by a first software pipeline may be presented to a first user at the same time that output generated by a second software pipeline (which may be different from the first software pipeline) is presented to a second user. In some implementations, server(s)102, deployment servers134, client computing platform(s)104, and/or external resources132may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via one or more networks13such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which server(s)102, client computing platform(s)104, and/or external resources132may be operatively linked via some other communication media. A given client computing platform104may include one or more processors configured to execute computer program components. The computer program components may be configured to enable an expert or user associated with the given client computing platform104to interface with system100and/or external resources132, and/or provide other functionality attributed herein to client computing platform(s)104. By way of non-limiting example, the given client computing platform104may include one or more of a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms. User interfaces128may be configured to facilitate interaction between users and system100and/or between users and client computing platforms104. For example, user interfaces128may provide an interface through which users may provide information to and/or receive information from system100. In some implementations, user interface128may include one or more of a display screen, touchscreen, monitor, a keyboard, buttons, switches, knobs, levers, mouse, microphones, sensors to capture voice commands, sensors to capture eye movement and/or body movement, sensors to capture hand and/or finger gestures, and/or other user interface devices configured to receive and/or convey user input. In some implementations, one or more user interfaces128may be included in one or more client computing platforms104. In some implementations, one or more user interfaces128may be included in system100. External resources132may include sources of information outside of system100, external entities participating with system100, and/or other resources. In some implementations, external resources132may include a provider of modification databases on which system100and/or its components may operate. In some implementations, external resources132may include a provider of documents, including but not limited to electronic source documents on which system100and/or its components may operate. In some implementations, some or all of the functionality attributed herein to external resources132may be provided by resources included in system100. Server(s)102may include electronic storage130, one or more processors124, and/or other components. Server(s)102may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of server(s)102inFIG.1is not intended to be limiting. 
Server(s)102may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s)102. For example, server(s)102may be implemented by a cloud of computing platforms operating together as server(s)102. In some implementations, some or all of the functionality attributed herein to server102and/or system100may be provided by resources included in one or more client computing platform(s)104. Electronic storage130may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage130may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server(s)102and/or removable storage that is removably connectable to server(s)102via, for example, a port (e.g., a Universal Serial Bus or USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage130may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., Electrically Erasable Programmable Read-Only Memory or EEPROM, Random Access Memory or RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage130may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage130may store software algorithms, information determined by processor(s)124, information received from server(s)102, information received from client computing platform(s)104, and/or other information that enables server(s)102to function as described herein. Processor(s)124may be configured to provide information processing capabilities in server(s)102. As such, processor(s)124may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s)124is shown inFIG.1as a single entity, this is for illustrative purposes only. In some implementations, processor(s)124may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s)124may represent processing functionality of a plurality of devices operating in coordination. Processor(s)124may be configured to execute components108,110,112,114,116,118, and/or120, and/or other components. Processor(s)124may be configured to execute components108,110,112,114,116,118, and/or120, and/or other components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s)124. As used herein, the term “component” may refer to any component or set of components that perform the functionality attributed to the component. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components. 
It should be appreciated that although components108,110,112,114,116,118, and/or120are illustrated inFIG.1as being implemented within a single processing unit, in implementations in which processor(s)124includes multiple processing units, one or more of components108,110,112,114,116,118, and/or120may be implemented remotely from the other components. The description of the functionality provided by the different components108,110,112,114,116,118, and/or120described below is for illustrative purposes, and is not intended to be limiting, as any of components108,110,112,114,116,118, and/or120may provide more or less functionality than is described. For example, one or more of components108,110,112,114,116,118, and/or120may be eliminated, and some or all of its functionality may be provided by other ones of components108,110,112,114,116,118, and/or120. As another example, processor(s)124may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components108,110,112,114,116,118, and/or120. FIG.2illustrates a method200of controlling configurations of deployments of sets of enterprise software applications to users, in accordance with one or more implementations. The operations of method200presented below are intended to be illustrative. In some implementations, method200may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method200are illustrated inFIG.2and described below is not intended to be limiting. In some implementations, method200may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method200in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method200. At an operation202, information for the set of enterprise software applications is stored in electronic storage. The set of enterprise software applications includes at least a first software application, a second software application, and a third software application. The information includes (i) executable code of the set of enterprise software applications, and (ii) a configuration database that includes a set of deployment-specific configuration settings and corresponding setting values that define a deployment on a first deployment server of the set of enterprise software applications. The set of deployment-specific configuration settings include one or more of (a) connection parameters to control connections between the configuration database and the set of enterprise software applications, (b) environment variables, and (c) resource parameters to control available computational resources and available storage resources. In some embodiments, operation202is performed by a storage component the same as or similar to storage component108(shown inFIG.1and described herein). 
At an operation204, the deployment of the set of enterprise software applications is effectuated on the first deployment server in accordance with the set of deployment-specific configuration settings and the corresponding setting values. Subsequent to the deployment, the first deployment server is accessible by the client computing platforms associated with the users. The first deployment server is configured, subsequent to the deployment, such that the client computing platforms execute the set of enterprise software applications through the first deployment server. In some embodiments, operation204is performed by a deployment component the same as or similar to deployment component110(shown inFIG.1and described herein). At an operation206, multiple modification databases are obtained, including a first modification database and a second modification database. Individual ones of the multiple modification databases include one or more modification-specific configuration settings and one or more corresponding setting values. The multiple modification databases are organized in a particular order such that the first modification database is ahead of the second modification database in the particular order. In some embodiments, operation206is performed by a patch component the same as or similar to patch component112(shown inFIG.1and described herein). At an operation208, the deployment of the set of software applications on the first deployment server is modified, for individual ones of the multiple modification databases according to the particular order, by modifying individual ones of the set of deployment-specific configuration settings that match individual ones of the one or more modification-specific settings to individual ones of the one or more corresponding setting values. Modifications of the individual ones of the set of deployment-specific configuration settings are made while the deployment on the first deployment server continues to be accessible by the client computing platforms. The modifications are made according to the particular order such that individual modification-specific configuration settings included in the first modification database are modified ahead of individual modification-specific configuration settings included in the second modification database. In some embodiments, operation208is performed by a modification component the same as or similar to modification component114(shown inFIG.1and described herein). Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
41,347
11861339
DETAILED DESCRIPTION FIG.1is an example setting100for deploying software applications using application catalogs. The software applications may be medical devices. As shown, the setting100includes an application system110in communication with an environment101through a network190. The network190may include a combination of private and public networks (e.g., the internet). The software application105may be a medical device and may be configured to operate within an environment101. The software applications105may be developed and deployed to the environment101by the application system110. Depending on the embodiment, the environment101may be implemented in part using one or more general purpose computing devices such as the computing device500illustrated with respect toFIG.5. In addition, some aspects of the environment101may be implemented using a cloud-based computing environment. The software application105may be associated with metadata107. The metadata107may describe the various properties and features of the medical device implemented by the software application105. The application system110includes several components including a catalog engine120, a deployment engine130, and an audit engine140. More or fewer components may be included. Each component is described further below. The catalog engine120may create and maintain what is referred to as an application catalog115for each application105. An application catalog is a collection of files or parameters that define desired states for the application105when deployed to the environment101. Examples of states include available resources (e.g., computational and storage resources), access to particular APIs or webservices, software or service version numbers, and permission settings. Shown inFIG.2is an example application catalog115that may be created and maintained by the catalog engine120for an application105. As shown, the application catalog115includes several parameters including application parameters205, service parameters210, API parameters215, feature parameters220, environmental parameters225, and infrastructure parameters230. More or fewer parameters may be supported. Depending on the embodiment, each set of parameters may be stored as a separate data structure or file. A suitable file is a YAML file. Other types of files may be supported. The application parameters205may include identifiers of the application105(or applications105) managed by the application system110. The application parameters205may include unique identifiers of each application105along with a reference to a record that describes the application105. Depending on the embodiment, the application parameters205may also include identifiers of the services that comprise the application105and version numbers associated with the services. The service parameters210may include parameters about each service used by the application105and identified in the application parameters205. Example service parameters210for a service may include, but are not limited to, a unique identifier of the service, an ingress path (e.g., URL) associated with the service, any interfaces or API paths used by the service, other services that the service may depend on or invoke, the network or network layer in which the service is deployed, any variables (e.g., environmental variables) or other metadata associated with the service, and information about how the service should be deployed in the environment101. Other information may be included. 
The API parameters215may include parameters about the various interfaces and APIs (e.g., web APIs) that are used by the services of the application105. The parameters for each API may include version numbers and a path that is used to access the resources exposed by each API. The feature parameters220may include parameters related to business and functional features. The parameters may include a unique identifier for each feature, the service or services that use or are associated with the particular feature, and an indicator of whether or not the feature is enabled or disabled with respect to a particular service. The environmental parameters225may include parameters related to the environment101that the application105will be deployed into. The environmental parameters225may include parameters such as the name of the cloud environment101that will execute the application105, indications of naming schemes and regional configurations for the environment101, and identifiers of the applications105that will execute in the environment101. Note that in some embodiments, the application105may be deployed to multiple environments101, and each environment101may be associated with its own environmental parameters225. The infrastructure parameters230may include parameters related to the infrastructure needed to deploy the application105in the environment101. These parameters may include definitions and names of storage configurations, definitions of databases, virtual machines, Kubernetes clusters, message queues, and other related information. The infrastructure parameters230may further include operating systems used by the application105, IP address ranges, and other information that may be used to configure the infrastructure for the environment. Returning toFIG.1, a user or administrator may use the catalog engine120to specify the states for each of the parameters of the application catalog115to use when deploying the application105. In some embodiments, the catalog engine120may further allow the user or administrator to add new services, features, and APIs to the application105. When adding a new service, initially the service may be restricted to operating in a non-production domain (e.g., a domain not associated with a production application105). The catalog engine120may track the testing of the service and whether or not the service has passed quality control and/or has been approved for deployment to an environment101. The catalog engine120may similarly track the testing and approval for added APIs and features. Information about the newly added services, APIs, and features may be stored by the catalog engine120as the application data125. The catalog engine120may further allow the user or administrator to update or change the states of any of the parameters associated with the application catalog115. A record of all of the changes to the application catalog115may be recorded by the catalog engine120with the application data125. The deployment engine130may deploy the application105to the environment101. 
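Before looking at how the deployment engine130uses the catalog, the following is one hedged illustration of how the parameter groups above might be represented and compared against an environment. The file contents, keys, and values are hypothetical (the text only says that each parameter group may be stored in a file such as a YAML file), and the comparison function is a naive stand-in for whatever checks a given deployment engine actually performs.

```python
import yaml  # PyYAML, since the text describes YAML as a suitable file format

# Hypothetical catalog content for two parameter groups; a real catalog engine
# would define its own keys, file layout, and values.
SERVICE_PARAMETERS = yaml.safe_load("""
services:
  - id: results-service
    version: 2.4.1
    ingress_path: /api/results
    depends_on: [auth-service]
    network_layer: internal
""")

FEATURE_PARAMETERS = yaml.safe_load("""
features:
  - id: anonymized-export
    services: [results-service]
    enabled: false
""")

application_catalog = {
    "service_parameters": SERVICE_PARAMETERS,
    "feature_parameters": FEATURE_PARAMETERS,
    # application, API, environmental, and infrastructure parameters would be
    # loaded the same way from their own files.
}


def mismatched_states(desired: dict, environment: dict) -> dict:
    """Return desired-state entries whose value differs in the environment.

    Both arguments are flat mappings of parameter names to states; this is a
    naive stand-in for the pre-deployment comparison described below.
    """
    return {name: value for name, value in desired.items() if environment.get(name) != value}
```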
In particular, the deployment engine130, before deploying the application105to the environment101, may perform one or more operations to ensure that the states of all of the parameters of the application catalog115(e.g., the application parameters205, the service parameters210, the API parameters215, the feature parameters220, the environmental parameters225, and the infrastructure parameters230) are reflected in the environment101(e.g., the cloud computing environment that will execute the application105). In some embodiments, the deployment engine130may deploy the application105using what is referred to herein as a blue green deployment. In blue green deployment, a new or updated version of the application105is deployed to a new environment101(i.e., “green”) while an older version of the application105remains operating in an older environment101(i.e., “blue”). Over time, traffic from the older version of the application105is gradually transferred to the new version of the application105. After all of the traffic has been transferred, the old application105and environment101can be pulled from production and/or can stand by in case the new application105and environment101need to be pulled from production. Depending on the embodiment, the operations performed by the deployment engine130may include, but are not limited to, ensuring that sufficient storage and computing resources are available to the environment101, ensuring that the correct operating system is installed, ensuring that the correct networking resources and/or protocols are available, ensuring that the correct versions of services and APIs have been installed, and ensuring that the correct features are enabled (or disabled) for each of the services used by the application105. The deployment engine130may ensure that the states of the parameters on the environment101match the parameters of the application catalog115before deploying the application105to the environment101. In the event that the environment101has some parameters with states that cannot be updated or made equal to the states of the corresponding parameters of the application catalog115, the deployment engine130may generate an error or may otherwise alert the user or administrator. The deployment engine130may further record proof or evidence that the parameters of the environment101were verified to be in the same state as the parameters of the application catalog115, or actions were taken to ensure that parameters of the environment101were verified to be in the same state as the parameters of the application catalog115. The proof or evidence may be stored by the deployment engine130with the application data125. As may be appreciated, defining the desired parameter states in an application catalog115, and automatically verifying that the parameters of the environment101are in the correct states prior to deploying the application105in the environment101, is an improvement over prior art systems for application deployment. Previously, before a medical device such as an application105could be deployed to an environment101such as a cloud-computing environment, an operator would typically go to the location of the environment101and use a printed manual to manually verify that each parameter of the environment101was in the correct state. 
For those parameters not in the correct state, the operator would have to manually configure the environment101to be in the correct state by, for example, manually installing or updating services, configuring network or other infrastructure settings, and activating or deactivating one or more features. Such manual installation is time consuming and error prone. Moreover, any changes made to the parameters of the application105by the software developers would require updating the manual or providing updated instructions to the developer, which adds further expense and time to application105development. Accordingly, the deployment of software applications105to environments101using an application catalog115is an improvement to the technical field of software development and an improvement to medical devices deployed using application catalogs115. The audit engine140may generate an audit log145that may be used to provide proof of compliance with one or more regulatory agencies. In some embodiments, the audit log145may include the history of changes that were made to the application catalog115during development of the application105, various services, APIs, and features that were added to the application105, and the testing or quality control status of each service, API, and feature. The audit log145may further include, for each application105deployment, proof that each of the parameters of the environment101were in the same state as the corresponding parameters in the application catalog115. FIG.3is an illustration of an example method for deploying medical devices such as applications using an application catalog. The method300may be implemented by the application system110. At305, a selection of an application to deploy in an environment is received. The selection of the application105may be received by the application system110from a user through the network190. The application105may be a medical device. The environment101may be a cloud-based computing environment. At310, an application catalog corresponding to the selected application is retrieved. The application catalog115may be retrieved by the catalog engine120. The application catalog115includes a plurality of parameters and each parameter is associated with a desired state. The parameters may include one or more of application parameters, service parameters, API parameters, feature parameters, environmental parameters, and infrastructure parameters. Other types of parameters may be supported. At315, parameter states of the environment are configured to match the states of the application catalog. The parameter states of the environment101may be configured by the deployment engine130. For example, the deployment engine130may determine the parameter states of the environment101, compare the determined parameter states with the parameter states of the application catalog, and may identify those parameter states of the environment101that do not match and need to be configured. The deployment engine130may configure the identified parameter states by installing or upgrading one or more services of the environment101, allocating computing resources to the environment101, installing or upgrading a particular operating system of the environment101, and setting one or more permissions on the environment101. At320, whether the configuration was successful is determined. The determination may be made by the deployment engine130. If the configuration was successful then the method may continue at330. Else the method may continue at325. 
At325, an error is generated. The error may be generated by the deployment engine130in response to determining that one or more parameters of the environment101were not successfully configured to match the state of the corresponding parameter of the application catalog115. The error may be an electronic message and may indicate which parameters were not successfully configured. At330, the application is deployed to the environment. The application105may be deployed to the environment101by the deployment engine130. FIG.4is an illustration of a method for generating and using an application catalog. The method400may be implemented by the application system110. At405, an indication to create an application is received. The indication may be received by the application system110from a user or administrator. The application105may be a medical device designed to execute in an environment101. The environment101may be a cloud-based computing environment. At410, an application catalog is created. The application catalog115may be created by the catalog engine120in response to the creating of the application105. The application catalog115includes a plurality of parameters that each have an associated state. The states may be desired states for the environment101when the application105is deployed to the environment101. The states may be set by the user or administrator or may be default states that are set automatically when the application catalog115is created. At415, a change to a parameter is received. The change may be received by the catalog engine120from the user or administrator. For example, the user or administrator may determine that a newer version of a particular service should be used by the application105or may specify that more computing resources should be allocated to a virtual machine that executes the application105. At420, the change is logged. The change may be logged in an audit log145by the audit engine140. The audit engine140may log or record all changes made to the application105and application catalog115for purposes of regulatory compliance, for example. FIG.5shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing device environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality. Numerous other general purpose or special purpose computing device environments or configurations may be used. Examples of well-known computing devices, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like. Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. 
In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices. With reference toFIG.5, an exemplary system for implementing aspects described herein includes a computing device, such as computing device500. In its most basic configuration, computing device500typically includes at least one processing unit502and memory504. Depending on the exact configuration and type of computing device, memory504may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated inFIG.5by dashed line506. Computing device500may have additional features/functionality. For example, computing device500may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated inFIG.5by removable storage508and non-removable storage510. Computing device500typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the device500and includes both volatile and non-volatile media, removable and non-removable media. Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory504, removable storage508, and non-removable storage510are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable program read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device500. Any such computer storage media may be part of computing device500. Computing device500may contain communication connection(s)512that allow the device to communicate with other devices. Computing device500may also have input device(s)514such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s)516such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here. It should be understood that the various techniques described herein may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. 
Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include personal computers, network servers, and handheld devices, for example. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
21,393
11861340
DETAILED DESCRIPTION Systems and methods are described for providing a universal return to factory image (RTFI) process in a distributed computing system. In some embodiments, a method performed by one or more processing resources of one or more computer systems comprises implementing, in a storage node, a multi-tiered file system comprising a read-only layer that contains a base configuration for the storage node and a read-write layer that contains modifications to the base configuration; and combining the read-only layer and the read-write layer into an overlay file system to be presented to an operating system. In other embodiments, a system comprises a processing resource and a non-transitory computer-readable medium, coupled to the processing resource, having stored therein instructions that when executed by the processing resource cause the processing resource to implement, in a storage node, a multi-tiered file system comprising a read-only layer that contains a base configuration for the storage node and a read-write layer that contains modifications to the base configuration and combine the read-only layer and the read-write layer into an overlay file system to be presented to an operating system. In other embodiments, a non-transitory computer-readable medium comprises instructions that when executed by the processing resource cause the processing resource to implement, in a storage node, a multi-tiered file system comprising a read-only layer that contains a base configuration for the storage node and a read-write layer that contains modifications to the base configuration; and combine the read-only layer and the read-write layer into an overlay file system to be presented to an operating system. In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form. Terminology Brief definitions of terms used throughout this application are given below. A “computer” or “computer system” may be one or more physical computers, virtual computers, or computing devices. As an example, a computer may be one or more server computers, cloud-based computers, cloud-based cluster of computers, virtual machine instances or virtual machine computing elements such as virtual processors, storage and memory, data centers, storage devices, desktop computers, laptop computers, mobile devices, or any other special-purpose computing devices. Any reference to “a computer” or “a computer system” herein may mean one or more computers, unless expressly stated otherwise. As used herein, “compute load parameters” generally refers to performance, configuration and/or other system data of a processing device. Non-limiting examples of compute load parameters for a distributed computing system include latency, utilization, a number of input output operations per second (IOPS), a slice service (SS) load, Quality of Service (QoS) settings, or any other performance related information. The terms “connected” or “coupled” and related terms are used in an operational sense and are not necessarily limited to a direct connection or coupling. Thus, for example, two devices may be coupled directly, or via one or more intermediary media or devices. 
As another example, devices may be coupled in such a way that information can be passed therebetween, while not sharing any physical connection with one another. Based on the disclosure provided herein, one of ordinary skill in the art will appreciate a variety of ways in which connection or coupling exists in accordance with the aforementioned definition. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic. As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. The phrases “in an embodiment,” “according to one embodiment,” and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one embodiment of the present disclosure, and may be included in more than one embodiment of the present disclosure. Importantly, such phrases do not necessarily refer to the same embodiment. Example Data Processing Environment FIG.1is a block diagram illustrating an environment100in which various embodiments may be implemented. In various embodiments described herein, an administrator (e.g., user112) of a distributed storage system (e.g., cluster135) or a managed service provider responsible for multiple distributed storage systems of the same or multiple customers may monitor various telemetry data of the distributed storage system or multiple distributed storage systems via a browser-based interface presented on computer system110. In one embodiment, the administrator and/or automated means may use various statistics, analytics and/or visual representations of the gathered data as feedback to improve the functioning of the monitored systems by, for example, tuning various configuration parameters of the managed distributed storage systems and/or delivering storage operating system patches, version upgrades, or the like to the managed distributed storage systems. In the context of the present example, the environment100includes a data center130, a cloud120, a computer system110, and a user112. The data center130, the cloud120, and the computer system110are coupled in communication via a network105, which, depending upon the particular implementation, may be a Local Area Network (LAN), a Wide Area Network (WAN), or the Internet. The data center130may represent an enterprise data center (e.g., an on-premises customer data center) that is built, owned, and operated by a company, or the data center130may be managed by a third party (or a managed service provider) on behalf of the company, which may lease the equipment and infrastructure. Alternatively, the data center130may represent a colocation data center in which a company rents space of a facility owned by others and located off the company premises. The data center130is shown including a distributed storage system (e.g., cluster135) and a performance manager138. Those of ordinary skill in the art will appreciate that additional IT infrastructure would typically be part of the data center130; however, discussion of such additional IT infrastructure is unnecessary to the understanding of the various embodiments described herein.
As illustrated in the embodiments shown inFIG.1, the cluster135can include multiple storage nodes136a-nand an Application Programming Interface (API)137. In the context of the present example, the multiple storage nodes136a-nare organized as a cluster and provide a distributed storage architecture to service storage requests issued by one or more clients (not shown) of the cluster. The data served by the storage nodes136a-nmay be distributed across multiple storage units embodied as persistent storage devices, including but not limited to hard disk drives, solid state drives, flash memory systems, or other storage devices. A non-limiting example of a storage node136is described in further detail below with reference toFIG.2. The API137may provide an interface through which the cluster135is configured and/or queried by external actors (e.g., the performance manager138, the computer system110, and a cloud-based, centralized normalizing agent (e.g., normalizing agent230shown inFIG.2). Depending upon the particular implementation, the API137may represent a Representational State Transfer (REST)ful API that uses Hypertext Transfer Protocol (HTTP) methods (e.g., GET, POST, PATCH, DELETE, and OPTIONS) to indicate its actions. Depending upon the particular embodiment, the API137may provide access to various telemetry data (e.g., performance, configuration and other system data) relating to the cluster135or components thereof. In one embodiment, a first API call (e.g., GetNodeStats) may be used to obtain information regarding a custom, proprietary, or standardized measure of the overall load (e.g., SS load) or overall performance (e.g., IOPS) of a particular storage node136or a second API call (e.g., ListNodeStats) may be used to obtain information regarding the overall load or performance of multiple storage nodes136. As those skilled in the art will appreciate various other types of telemetry data may be made available via the API137, including, but not limited to measures of latency, utilization, and/or performance at various levels (e.g., the cluster level, the storage node level, or the storage node component level). In various embodiments, the storage node(s)136a,136b,136nmay comprise or be communicatively coupled to a performance manager138. Performance manager138may be implemented locally within the same data center in which the cluster135resides as illustrated inFIG.1. In other embodiments, performance manager138may be located external to cluster135. Performance manager138can be configured to periodically poll and/or monitor for compute load parameters of the cluster135via the API137. In some examples the polling may be performed on static periodic intervals. In other examples the polling interval may vary based upon one or more parameters (e.g., load, capacity, etc.). Depending upon the particular implementation, the polling may be performed at a predetermined or configurable interval (e.g., X milliseconds or Y seconds). The performance manager138may locally process and/or aggregate the collected compute load parameters (e.g., latency, utilization, IOPS, SS load, Quality of Service (QoS) settings, etc.) over a period of time by data point values and/or by ranges of data point values and provide frequency information regarding the aggregated compute load parameters retrieved from the cluster135to the normalizing agent230. 
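To make the polling flow just described concrete, the following is a minimal sketch of a performance manager periodically requesting per-node telemetry through the cluster API. The method names GetNodeStats and ListNodeStats are taken from the description above; the endpoint URL, the JSON request framing, and the response field names are illustrative assumptions rather than the actual API of any particular storage operating system.

```python
# Minimal sketch of a performance manager polling telemetry via the cluster API.
import json
import time
import urllib.request

API_URL = "https://cluster.example/json-rpc"  # hypothetical endpoint

def call_api(method: str, params: dict) -> dict:
    """Issue one JSON-style API call against the cluster (assumed request shape)."""
    body = json.dumps({"method": method, "params": params, "id": 1}).encode()
    req = urllib.request.Request(API_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def poll_compute_load(node_ids, interval_s: float = 5.0):
    """Yield one batch of per-node compute load parameters per polling interval."""
    while True:
        batch = []
        for node_id in node_ids:
            stats = call_api("GetNodeStats", {"nodeID": node_id})
            batch.append({
                "node": node_id,
                "ss_load": stats.get("ssLoad"),      # assumed field names
                "iops": stats.get("iops"),
                "latency_ms": stats.get("latencyMs"),
            })
        yield batch
        time.sleep(interval_s)
```

A performance manager along these lines could aggregate each yielded batch locally (by data point values or ranges of values) before forwarding frequency information to the normalizing agent, as described above.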
While for sake of brevity, only a single data center and a single cluster are shown in the context of the present example, it is to be appreciated that multiple clusters owned by or leased by the same or different companies may be monitored in accordance with the methodologies described herein and such clusters may reside in multiple data centers of different types (e.g., enterprise data centers, managed services data centers, or colocation data centers). Example Storage Node FIG.2is a block diagram illustrating a storage node200in accordance with an embodiment of the present disclosure. Storage node200represents a non-limiting example of storage nodes136a-n. In the context of the present example, storage node200includes a storage operating system210, one or more slice services220a-n, and one or more block services215a-q. The storage operating system (OS)210may provide access to data stored by the storage node200via various protocols (e.g., small computer system interface (SCSI), Internet small computer system interface (ISCSI), fibre channel (FC), common Internet file system (CIFS), network file system (NFS), hypertext transfer protocol (HTTP), web-based distributed authoring and versioning (WebDAV), or a custom protocol. A non-limiting example of the storage OS210is NetApp Element Software (e.g., the SolidFire Element OS) based on Linux and designed for SSDs and scale-out architecture with the ability to expand up to 100 storage nodes. In some embodiments, the storage node200may comprise one or more centralized normalizing agents (e.g., normalizing agent230). The normalizing agent230may receive (e.g., periodically, continuously, or on a set schedule) monitored information, including raw and/or processed compute load parameters (e.g., data representing aggregated compute load parameters over time) of multiple clusters (e.g., cluster135inFIG.1) from multiple distributed performance managers (e.g., performance manager138inFIG.1) operable within respective data centers (e.g., data center130inFIG.1) of one or more customers of the managed service provider. Depending upon the particular implementation, the monitored information may be pushed from the performance manager138or pulled from the performance manager138in accordance with a monitoring schedule or responsive to an event (e.g., a request issued by user112to the normalizing agent230). In some examples aggregating compute load parameters may be accomplished by combining all the various compute load parameters into a single “load” parameter for use in determining how to throttle various subsystem processes. For example, a scale that measures between 0-100 may be used to represent latency, where 1 ms client latencies equate to a load of 50 on said scale. Such a parameter can then be aggregated with another compute load parameter, cache fullness, that is easily represented on a scale that represents the cache capacity (e.g., a 0-100% fullness scale). Each slice service220may include one or more volumes (e.g., volumes221a-x, volumes221c-y, and volumes221e-z). Client systems (not shown) associated with an enterprise may store data to one or more volumes, retrieve data from one or more volumes, and/or modify data stored on one or more volumes. The slice services220a-nand/or the client system may break data into data blocks. Block services215a-qand slice services220a-nmay maintain mappings between an address of the client system and the eventual physical location of the data block in respective storage media of the storage node200. 
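Picking up the 0-100 normalization example described above, the sketch below shows one way a normalizing agent might fold latency and cache fullness onto a single common load scale. The clamping, the 2 ms full-scale point, and the use of max() to combine parameters are assumptions chosen for illustration, not the claimed aggregation method.

```python
# Minimal sketch of collapsing several compute load parameters onto one 0-100
# "load" scale, following the latency example above (1 ms maps to a load of 50).

def latency_to_load(latency_ms: float, full_scale_ms: float = 2.0) -> float:
    """Map client latency onto 0-100, with 1 ms mapping to 50 (assumed scale)."""
    return max(0.0, min(100.0, 100.0 * latency_ms / full_scale_ms))

def cache_fullness_to_load(used_bytes: int, capacity_bytes: int) -> float:
    """Cache fullness is naturally a 0-100% scale."""
    return 100.0 * used_bytes / capacity_bytes

def aggregate_load(latency_ms: float, used_bytes: int, capacity_bytes: int) -> float:
    """Combine the normalized parameters; here the busiest dimension wins."""
    return max(latency_to_load(latency_ms),
               cache_fullness_to_load(used_bytes, capacity_bytes))

# Example: 1 ms client latency and a half-full cache give an overall load of 50.
assert aggregate_load(1.0, 50, 100) == 50.0
```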
In one embodiment, volumes221a-zinclude unique and uniformly random identifiers to facilitate even distribution of a volume's data throughout a cluster (e.g., cluster135). The slice services220a-nmay store metadata that maps between client systems and block services215. For example, slice services220a-nmay map between the client addressing used by the client systems (e.g., file names, object names, block numbers, etc. such as Logical Block Addresses (LBAs)) and block layer addressing (e.g., block identifiers) used in block services215. Further, block services215a-qmay map between the block layer addressing (e.g., block identifiers) and the physical location of the data block on one or more storage devices. The blocks may be organized within bins maintained by the block services215for storage on physical storage devices (e.g., SSDs). A bin may be derived from the block ID for storage of a corresponding data block by extracting a predefined number of bits from the block identifiers. In some embodiments, the bin may be divided into buckets or “sublists” by extending the predefined number of bits extracted from the block identifier. A bin identifier may be used to identify a bin within the system. The bin identifier may also be used to identify a particular block service215a-qand associated storage device (e.g., SSD). A sublist identifier may identify a sublist with the bin, which may be used to facilitate network transfer (or syncing) of data among block services in the event of a failure or crash of the storage node200. Accordingly, a client can access data using a client address, which is eventually translated into the corresponding unique identifiers that reference the client's data at the storage node200. For each volume221hosted by a slice service220, a list of block identifiers may be stored with one block identifier for each logical block on the volume. Each volume may be replicated between one or more slice services220a-nand/or storage nodes200, and the slice services for each volume may be synchronized between each of the slice services hosting that volume. Accordingly, failover protection may be provided in case a slice service220fails, such that access to each volume may continue during the failure condition. The above structure allows storing of data evenly across the cluster of storage devices (e.g., SSDs), which allows for performance metrics to be used to manage load in the cluster. For example, if the cluster is under a load meeting or exceeding a particular threshold, clients can be throttled or locked out of a volume by, for example, the storage OS210reducing the amount of read or write data that is being processed by the storage node200. As noted above, in some embodiments, a performance manager module (e.g., performance manager138shown inFIG.1) may poll an API (e.g., API137shown inFIG.1) of a distributed storage system (e.g., cluster135shown inFIG.1) of which the storage node200is a part to obtain various telemetry data of the distributed storage system. The telemetry data may represent performance metrics, configuration and other system data associated with various levels or layers of the cluster or the storage node200. For example, metrics may be available for individual or groups of storage nodes (e.g.,136a-n), individual or groups of volumes221, individual or groups of slice services220, and/or individual or groups of block services215. 
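The bin and sublist derivation described earlier in this passage can be illustrated with a short sketch. The SHA-256 content hash, the 8-bit bin width, and the 4 additional sublist bits below are assumptions chosen for readability; the description above only requires that a predefined number of bits be extracted from the block identifier.

```python
# Minimal sketch of deriving a bin (and a finer-grained sublist) from a block
# identifier by extracting a predefined number of leading bits.
import hashlib

BIN_BITS = 8             # 256 bins (assumed width)
SUBLIST_EXTRA_BITS = 4   # each bin split into 16 sublists (assumed)

def block_identifier(data: bytes) -> int:
    """Content-derived block identifier (here a SHA-256 digest as an integer)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def bin_id(block_id: int, id_bits: int = 256) -> int:
    """Extract the leading BIN_BITS bits of the block identifier."""
    return block_id >> (id_bits - BIN_BITS)

def sublist_id(block_id: int, id_bits: int = 256) -> int:
    """Extend the extraction by a few more bits to pick a sublist within the bin."""
    return block_id >> (id_bits - BIN_BITS - SUBLIST_EXTRA_BITS)

blk = block_identifier(b"example data block")
print(f"bin={bin_id(blk)}, sublist={sublist_id(blk)}")
```

The bin identifier can then be mapped to a particular block service and storage device, while the sublist identifier gives a convenient unit for syncing data between block services after a failure, as described above.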
The storage nodes (e.g., storage nodes136a-nand storage node200), the performance manager (e.g., performance manager138), and the monitoring system (e.g., normalizing agent230) described herein, and the processing described below with reference to the flow diagram ofFIG.4may be implemented in the form of executable instructions stored on a machine readable medium and executed by a processing resource (e.g., a microcontroller, a microprocessor, central processing unit core(s), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), and the like) and/or in the form of other types of electronic circuitry. For example, the processing may be performed by one or more virtual or physical computer systems of various forms, such as the computer system described with reference toFIG.5, below. Example Storage System FIG.3Adepicts a simplified system for centralized QoS management in a storage system300in accordance with an illustrative implementation. Storage system300includes a client layer302, a metadata layer304, a block server layer306, and storage316. Before discussing how particular implementations perform centralized QoS management, the structure of a possible system is described. Client layer302includes one or more clients308a-n. Clients308a-ninclude client processes that may exist on one or more physical machines. When the term “client” is used in the disclosure, the action being performed may be performed by a client process. A client process is responsible for storing, retrieving, and deleting data in system300. A client process may address pieces of data depending on the nature of the storage system and the format of the data stored. For example, the client process may reference data using a client address. The client address may take different forms. For example, in a storage system that uses file storage, each of clients308a-nmay reference a particular volume or partition, and a file name. With object storage, the client address may be a unique object name. For block storage, the client address may be a volume or partition, and a block address. Clients308a-ncan communicate with metadata layer304using different protocols, such as small computer system interface (SCSI), Internet small computer system interface (ISCSI), fibre channel (FC), common Internet file system (CIFS), network file system (NFS), hypertext transfer protocol (HTTP), web-based distributed authoring and versioning (WebDAV), or a custom protocol. Metadata layer304includes one or more metadata servers310a-n. Performance managers314a-nmay be located on metadata servers310a-n. Block server layer306includes one or more block servers312a-n. Block servers312a-nare coupled to storage316, which stores volume data for clients308a-n. Each client308a-nmay be associated with a volume. In one implementation, only one client308a-nmay access data in a volume; however, in other implementations, multiple clients308a-nmay access data in a single volume. Storage316can include multiple solid-state drives (SSDs). In one implementation, storage316can be a cluster of individual drives coupled together via a network. When the term “cluster” is used, it will be recognized that cluster may represent a storage system that includes multiple disks that may not be networked together. In one implementation, storage316uses solid state memory to store persistent data. SSDs use microchips that store data in non-volatile memory chips and contain no moving parts.
One consequence of this is that SSDs allow random access to data in different drives in an optimized manner as compared to drives with spinning disks. Read or write requests to non-sequential portions of SSDs can be performed in a comparable amount of time as compared to sequential read or write requests. In contrast, if spinning disks were used, random read/writes would not be efficient since inserting a read/write head at various random locations to read data results in slower data access than if the data is read from sequential locations. Accordingly, using electromechanical disk storage can require that a client's volume of data be concentrated in a small relatively sequential portion of the cluster to avoid slower data access to non-sequential data. Using SSDs removes this limitation. In various implementations, non-sequentially storing data in storage316is based upon breaking data up into one or more storage units, e.g., data blocks. A data block, therefore, is the raw data for a volume and may be the smallest addressable unit of data. The metadata layer304or the client layer302can break data into data blocks. The data blocks can then be stored on multiple block servers312a-n. Data blocks can be of a fixed size, can be initially a fixed size but compressed, or can be of a variable size. Data blocks can also be segmented based on the contextual content of the block. For example, data of a particular type may have a larger data block size compared to other types of data. Maintaining segmentation of the blocks on a write (and corresponding re-assembly on a read) may occur in client layer302and/or metadata layer304. Also, compression may occur in client layer302, metadata layer304, and/or block server layer306. In addition to storing data non-sequentially, data blocks can be stored to achieve substantially even distribution across the storage system. In various examples, even distribution can be based upon a unique block identifier. A block identifier can be an identifier that is determined based on the content of the data block, such as by a hash of the content. The block identifier is unique to that block of data. For example, blocks with the same content have the same block identifier, but blocks with different content have different block identifiers. To achieve even distribution, the values of possible unique identifiers can have a uniform distribution. Accordingly, storing data blocks based upon the unique identifier, or a portion of the unique identifier, results in the data being stored substantially evenly across drives in the cluster. Because client data, e.g., a volume associated with the client, is spread evenly across all of the drives in the cluster, every drive in the cluster is involved in the read and write paths of each volume. This configuration balances the data and load across all of the drives. This arrangement also removes hot spots within the cluster, which can occur when a client's data is stored sequentially on any volume. In addition, having data spread evenly across drives in the cluster allows a consistent total aggregate performance of a cluster to be defined and achieved. This aggregation can be achieved since data for each client is spread evenly through the drives. Accordingly, a client's I/O will involve all the drives in the cluster.
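As a quick illustration of the even-distribution property just described, the sketch below hashes block content into identifiers and places each block on a drive with a simple modulo rule. The drive count and the modulo placement rule are illustrative assumptions; the point is only that a content hash yields approximately uniformly distributed identifiers.

```python
# Minimal sketch of why content-derived block identifiers spread data evenly.
import hashlib
from collections import Counter

NUM_DRIVES = 10  # assumed cluster size

def drive_for_block(data: bytes) -> int:
    block_id = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return block_id % NUM_DRIVES

counts = Counter(drive_for_block(f"block-{i}".encode()) for i in range(100_000))
print(sorted(counts.items()))   # each drive receives roughly 10,000 blocks
```

Because the hash output is effectively uniform, each of the ten assumed drives ends up holding close to one tenth of the blocks, which is the behavior the preceding paragraph relies on.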
Since all clients have their data spread substantially evenly through all the drives in the storage system, a performance of the system can be described in aggregate as a single number, e.g., the sum of performance of all the drives in the storage system. Block servers312a-nand slice servers324(FIG.3B) maintain a mapping between a block identifier and the location of the data block in a storage medium of block server312. A volume includes these unique and uniformly random identifiers, and so a volume's data is also evenly distributed throughout the cluster. Metadata layer304stores metadata that maps between client layer302and block server layer306. For example, metadata servers310map between the client addressing used by one or more clients308a . . . nn(e.g., file names, object names, block numbers, etc.) and block layer addressing (e.g., block identifiers) used in block server layer306. Clients308a . . . nmay perform access based on client addresses. However, as described above, block servers312store data based upon identifiers and do not store data based on client addresses. Accordingly, a client can access data using a client address which is eventually translated into the corresponding unique identifiers that reference the client's data in storage316. Although the parts of system300are shown as being logically separate, entities may be combined in different fashions. For example, the functions of any of the layers may be combined into a single process or single machine (e.g., a computing device) and multiple functions or all functions may exist on one machine or across multiple machines. Also, when operating across multiple machines, the machines may communicate using a network interface, such as a local area network (LAN) or a wide area network (WAN). In one implementation, one or more metadata servers310may be combined with one or more block servers312in a single machine. Entities in system300may be virtualized entities. For example, multiple virtual block servers312may be included on a machine. Entities may also be included in a cluster, where computing resources of the cluster are virtualized such that the computing resources appear as a single entity. FIG.3Bdepicts a more detailed example of system300according to one implementation. Metadata layer304may include a redirector server320and multiple volume servers322. Each volume server322may be associated with a plurality of slice servers324. In this example, client308awants to connect to a volume (e.g., client address). Client308acommunicates with redirector server320, identifies itself by an initiator name, and also indicates a volume by target name that client308awants to connect to. Different volume servers322may be responsible for different volumes. In this case, redirector server320is used to redirect the client to a specific volume server322. To client308, redirector server320may represent a single point of contact. The first request from client308athen is redirected to a specific volume server322. For example, redirector server320may use a database of volumes to determine which volume server322is a primary volume server for the requested target name. The request from client308ais then directed to the specific volume server322causing client308ato connect directly to the specific volume server322. Communications between client308aand the specific volume server322may then proceed without redirector server320. Volume server322performs functions as described with respect to metadata server310. 
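A minimal sketch of the redirection step described above follows. The in-memory volume database, the method names, and the failover rule are assumptions made for illustration; a real redirector server would answer over the storage protocol rather than as a local function call.

```python
# Hypothetical redirector: maps a target (volume) name to its primary volume
# server, falling back to an alternate if the primary is unavailable.
from dataclasses import dataclass, field

@dataclass
class VolumeRecord:
    primary: str                          # address of the primary volume server
    alternates: list = field(default_factory=list)

class Redirector:
    def __init__(self, volume_db: dict):
        self.volume_db = volume_db        # target name -> VolumeRecord

    def connect(self, initiator_name: str, target_name: str,
                is_up=lambda server: True) -> str:
        """Return the volume server the client should then talk to directly."""
        record = self.volume_db[target_name]
        if is_up(record.primary):
            return record.primary
        for alt in record.alternates:     # failover path
            if is_up(alt):
                return alt
        raise RuntimeError(f"no volume server available for {target_name}")

redirector = Redirector({"vol-7": VolumeRecord("vs-322a", ["vs-322b"])})
print(redirector.connect("iqn.client-308a", "vol-7"))   # -> vs-322a
```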
Additionally, each volume server322includes a performance manager314. For each volume hosted by volume server322, a list of block identifiers is stored with one block identifier for each logical block on the volume. Each volume may be replicated between one or more volume servers322and the metadata for each volume may be synchronized between each of the volume servers322hosting that volume. If a volume server322a . . . nfails, redirector server320may direct a client308a . . . nto an alternate volume server322a . . . n. In one implementation, the metadata being stored on volume server322may be too large for one volume server322. Thus, multiple slice servers324may be associated with each volume server322. The metadata may be divided into slices and a slice of metadata may be stored on each slice server324. When a request for a volume is received at volume server322, volume server322determines which slice server324contains metadata for that volume. Volume server322then routes the request to the appropriate slice server324. Accordingly, slice server324adds an additional layer of abstraction to volume server322. The above structure allows storing of data evenly across the cluster of disks. For example, by storing data based on block identifiers, data can be evenly stored across drives of a cluster. As described above, data evenly stored across the cluster allows for performance metrics to manage load in system300. If the system300is under a load, clients can be throttled or locked out of a volume. When a client is locked out of a volume, metadata server310or volume server322may close the command window or reduce or zero the amount of read or write data that is being processed at a time for a client308a . . . n. The metadata server310or the volume server322a . . . ncan queue access requests for client308a . . . n, such that IO requests from the client308a . . . ncan be processed after the client's access to the volume resumes after the lock out period. In some examples, the storage system300can also include one or more performance managers314a . . . nthat can monitor the use of the storage system's resources by both client processes and background processes. In addition, a performance manager314a . . . ncan facilitate regulating use of the storage system300by both client processes and background processes. The use of the storage system can be adjusted based upon performance metrics, the client's quality of service parameters, and the load of the storage system. Performance metrics are various measurable attributes of the storage system. Universal Return to Factory Image (RTFI) As mentioned above, the phrase return to factory image (RTFI) refers to the installation of operating system software components onto the memory of electronic devices. For example, some storage devices implement an RTFI process to install one or more operating system components onto a blank storage node, or to update or replace one or more operating system components on an existing installed storage node. A traditional RTFI (tRTFI) process is, for example, an installation from bootable media such as an optical disk (ISO), a pre-boot execution environment (PXE), or a thumb drive. In a traditional RTFI, the entire disk is re-partitioned and new filesystems are created for the respective disk partitions. This process destroys most existing data on the disk, though some data can be persisted from a previously installed system.
An “Inplace” RTFI process (iRTFI) is an updated process performed from a running system that tries to minimize install time by using kexec rather than a cold boot where possible. In an iRTFI the root filesystem partition is recreated while other partitions are left intact. Described herein are techniques referred to as a “universal” RTFI process (uRTFI). In some embodiments a uRTFI process utilizes elements of both tRTFI processes and iRTFI processes to provide a simplified method of managing the core platform and configuration changes through the use of an overlay file system. As such, it will not affect the majority of RTFI processing, yet simplifies installation of new core platform images and management of associated backups. Partitions and Filesystems Referring toFIG.4, in some embodiments a filesystem structure400comprises an overlay filesystem440, which in turn comprises a read-only layer410(indicated as the lower layer) with a read-write layer420above it (indicated as the upper layer). In some embodiments, the read-only layer410comprises the platform distribution, while the read-write layer420contains modifications to the read-only layer, for example network configuration changes. Thus, the original distribution remains unmodified because changes to it are isolated to the read-write layer for the purposes of backup. Additionally, the overlay filesystem440makes the two layers transparently appear as a single directory structure to any application using it. In some embodiments the read-only layer410in an overlay filesystem440can reside on media which is read-only media. In other embodiments the read-only layer filesystem410can reside on media which is not, in fact, read-only media, but which is treated by the overlay file system440as read-only media. In some embodiments, one or more filesystems may be implemented by a squashfs, which is a highly compressed, read-only filesystem-in-a-file supported by the operating system kernel directly that is generated by a build process. InFIG.4, the read-only layer410represents the squashfs. The read-write layer420represents a directory on the disk, and is persistent. The overlay layer440(also referred to as the presentation layer) is the layer that the system uses and represents the top of the root filesystem. The build process generates a squashfs image (i.e., a single file) of the operating system. This squashfs image is used as a source archive during an initial installation and during an upgrade and/or downgrade. The file contents are extracted from the squashfs during the RTFI processes. In some embodiments a uRTFI uses the squashfs image as the read-only layer410of an overlay file system, without extracting its contents. This simplifies the components of the RTFI process that are error prone (e.g., backup, imaging, and rollback). In some embodiments, approximately five percent of the boot disk is allocated for the boot partition using an ext2 filesystem type, approximately seventy-five percent of the boot disk is allocated for the root filesystem and twenty percent is allocated for /var/log, both using ext4. Further, in a uRTFI process the partitions and filesystems are created during initial install and are left alone during subsequent upgrades and downgrades. The boot disk need not contain a full Linux root file system. Instead, the root partition on that disk may be used for storage of squashfs images and the space and directories needed by the overlay filesystem.
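To make the layering concrete, here is a minimal sketch of how the overlay ofFIG.4could be assembled on Linux, with the squashfs as the read-only lower layer and a persistent on-disk directory as the read-write upper layer. The lowerdir/upperdir/workdir options are the standard Linux overlayfs mount options and the loop squashfs mount is the standard way to mount a filesystem-in-a-file; the specific paths and the idea of shelling out from Python are assumptions for illustration only.

```python
# Hypothetical helper that mounts a squashfs image as the read-only lower layer
# and overlays a persistent read-write directory on top of it.
import subprocess

def mount_overlay(squashfs_image: str, upper_dir: str, work_dir: str,
                  lower_mnt: str, merged_mnt: str) -> None:
    # 1. Mount the compressed, read-only platform image (lower layer).
    subprocess.run(["mount", "-t", "squashfs", "-o", "loop,ro",
                    squashfs_image, lower_mnt], check=True)
    # 2. Combine lower + upper into the presentation (merged) layer that the
    #    operating system will actually use as the top of its root filesystem.
    options = f"lowerdir={lower_mnt},upperdir={upper_dir},workdir={work_dir}"
    subprocess.run(["mount", "-t", "overlay", "overlay", "-o", options,
                    merged_mnt], check=True)

# Example paths (assumed layout on the root partition):
# mount_overlay("/rtfi/images/element.squashfs",
#               "/rtfi/overlay/upper", "/rtfi/overlay/work",
#               "/rtfi/lower", "/sysroot")
```

Because all runtime modifications land in the upper directory, backing up or rolling back a node reduces to handling that directory and choosing which squashfs image to present as the lower layer.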
Changing versions of the kernel, Ember and/or Element is performed by providing a new squashfs image containing the new root file system. The read-write layer of the overlayfs will contain all runtime data including (but not limited to): Linux configuration files, cluster node state information, support bundles, core files, and crashdumps. In some embodiments the boot partition is the first partition on the disk and contains the kernel, the initramfs, the bootloader, the kernel symbol map, and a kernel microcode patch. These files may also be contained in the /boot directory inside the squashfs. The boot partition may also have a subdirectory for the currently active boot files and symbolic links that make it possible for the bootloader to operate without modification. On some compute nodes, boot also contains several ESX ISO images. These images will live in the currently active directory with symlinks provided as needed for use by the bootloader. During an upgrade, a secondary directory is created and populated with the boot files and symlinks required to boot to a new universal RTFI kernel. A staging operation sets this secondary directory as the new current active image and the old current as the previous image. A soft reboot (i.e., kexec) is performed and the initramfs' init script mounts the new image as the active overlay and then calls the sfrtfi script to complete configuration updates in the new image. If a rollback is required, the previous image is booted as the new current image and the secondary directory is cleaned up. On a successful upgrade the previous image directory (i.e., the old image) is cleaned up during the post-install phase. In some embodiments, the root partition contains the entire runtime and all applications. This is a copy of the squashfs created by the build system extracted to disk. After the transition to a uRTFI the root partition will appear as the presentation layer of the overlayfs, with the squashfs mounted as the lower layer and bind mounts of real partitions in the presentation layer as needed to support iRTFI operations. A control file may be used to manage iRTFI operations and will consist of a list of key/value pairs that control what the initramfs (e.g., via its embedded init script) does. This includes which boot files and squashfs to use and processing required by RTFI operations such as downgrading to a pre-universal RTFI release. In some embodiments the log partition comprises all the logs the system generates. In an iRTFI using pre-universal RTFI images, the log partition contains a compressed backup of the entire root partition and a copy of the bootloader in order to support rollback, if needed. The log partition is a small partition on some nodes and the backups of the root partition can be large. If there isn't enough space to hold all the logs, the backup of the root partition, and the bootloader, the RTFI will fail and will roll back to the old version. The rollback may or may not be noticed by orchestration systems, which will call this an RTFI failure. In some embodiments, a secondary initramfs is appended to the initial initramfs. The secondary initramfs comprises a custom init script and configuration information required to access the local boot drive partitions. The init script is purposely kept minimal, with as much staging work as possible performed in a running system. This prevents long down times and limits points of failure in the init script.
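Since the behavior of the embedded init script is driven by the key/value control file just described, the following sketch shows one plausible way to stage an upgrade by rewriting that file and to read it back at init time. The file name urtfi.cfg is taken from the downgrade discussion later in this description; the individual key names are assumptions, as only the general "list of key/value pairs" format is stated above.

```python
# Hypothetical key/value control file handling for the uRTFI init script.
CONTROL_FILE = "/boot/urtfi.cfg"   # assumed location

def read_control(path: str = CONTROL_FILE) -> dict:
    """Parse key=value lines, ignoring blanks and comments."""
    config = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

def stage_upgrade(new_squashfs: str, new_kernel: str,
                  path: str = CONTROL_FILE) -> None:
    """Point the next boot at a new squashfs/kernel pair (bank switch)."""
    config = read_control(path)
    config.update({"squashfs": new_squashfs, "kernel": new_kernel})
    with open(path, "w") as fh:
        for key, value in config.items():
            fh.write(f"{key}={value}\n")
```

At init time the script would consult read_control() to decide which boot files and squashfs to mount as the overlay, including whether a downgrade to a pre-universal RTFI release was requested.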
The custom init script is needed to support mounting the overlay from the contents of the boot drive. State Transitions FIG.5shows state transitions and component level details for the current (pre-uRTFI) RTFI implementation covering both tRTFI and iRTFI with uRTFI updates. Blocks along the top of the diagram represent RTFI states. Blocks in diagonal hashes are only processed in tRTFI mode. Blocks in cross hashes are only processed in iRTFI mode. All other states are common to both modes. At block510a preparation state is entered. In the start state515, uRTFI will fetch one or more images516including core platform (squashfs) images and RTFI update packages. At block520the drive is unlocked. At block525the Backup state is modified and greatly simplified for iRTFI operations involving transitions from uRTFI to uRTFI releases. Backups for iRTFI operations for uRTFI to pre-uRTFI and pre-uRTFI to uRTFI still occur as they have in the past. This is because pre-uRTFI releases do not have knowledge of the new on-disk layout introduced by uRTFI. At block530firmware is installed, and at block535a hardware check operation is performed. At block540the system event log (SEL) logs are wiped and at block545a hardware test is performed. At operation550the drive is erased. At block555, in the Partition state for uRTFI, the partitions remain the same but are used within an overlay management system556. This allows the core platform to be replaced quickly and simply without having to deal with unpacking of the image. Additionally, the use of an overlay management system556simplifies restoration of run time configurations. Overlay management is set up as part of the staging that occurs in the running uRTFI image. The initramfs boots the configuration defined by that staging process. At block560, the image state holds the most changes for uRTFI. The need to uncompress an image is replaced by RTFI package update561and squashfs and bank management code562. At block560a configure operation is implemented. At block570, in an iRTFI, restoring backups in the PostInstall state will change to support the simplified backup handling571,572introduced with the use of the overlayfs system. At block575a cleanup operation is performed and at block580the process is finished. In some embodiments, upgrading a version (e.g., operating system and firmware versions) in a storage node comprises transferring a new squashfs to the node, configuring the node to use it and performing a soft reboot (e.g., via a kexec call). This enables the ability to stage cluster upgrades without affecting the cluster's operation and allows for the upgrade of each node in turn (i.e., kexec each node and wait for cluster faults to clear). In some embodiments of uRTFI, a rollback may be implemented by resetting symlinks for the boot, log and overlay directories and then booting into the rollback script of the original image. In some embodiments, file/dir names are used to denote current and previous versions. Installation to Blank Node FIG.6is a flow diagram illustrating operations in an install process, according to embodiments. Referring toFIG.6, in some examples an ISO (or similar) image610is booted to Linux with RTFI as the initial process. The install starts at block615, and at block620the RTFI process checks for partitions on the root drive. In some examples the RTFI process finds the root drive at block622, creates one or more partitions at block624, and creates one or more filesystems at block626.
Optionally, the root partition may be encrypted at block628. At block630the filesystems created at block626are mounted and at block632one or more files are copied to the filesystems. At block634one or more overlays are set up, and at block636a file system is mounted in the one or more overlays. At block638the root directory is changed to point to the image640. At block650a cleanup operation is implemented, which may include setting up a bootloader initramfs at block652and appending a custom initialization and config file at block654. In some embodiments, the uRTFI process separates the process of configuration of a running system from the installation or upgrade of that system. Existing configuration operations are specific state changes to an installed system and can therefore be called separate from the installation of the core components. There is no need to repartition the boot drive since its contents are more easily and quickly updated with uRTFI. There is limited need for a backup, as the data to be backed up with uRTFI is now limited only to changed files and not the entire root file system. A uRTFI rollback comprises kexecing to the previous squashfs and the associated overlay. Since this is not destroyed or modified by the upgrade process before the upgrade completes successfully, the rollback is nearly immediate and cleanup is fast. Upgrade/Downgrade uRTFI Version FIG.7is a flow diagram illustrating operations in an install upgrade/downgrade process, according to embodiments. Referring toFIG.7, at block710an iRTFI operation is started to switch from version X to version Y. At block720the Y version squashfs is downloaded to a running system. At block730the uRTFI config file is updated and at block740bank switching is configured to point to the new squashfs and kernel. At block712RTFI then kexecs to the init script at block714in the new image. At block722a setup is performed to identify devices724, modules726and drives728. At block734the runtime is prepared, and at block736one or more overlays are configured and at block738a file system is mounted in the overlay. At block742a switch_root is set up and at block744the switch_root to the overlay's presentation layer is executed. At block750post install handling is run in the new image, and at block752a bootloader for initramfs is set up. Any further custom work is performed at block754. FIG.8is a flow diagram illustrating operations in an install upgrade/downgrade process, according to embodiments. Referring toFIG.8, at block810an iRTFI operation is started. At block812a setup operation is started to generate a timestamp814, a source816, a generation, and to identify one or more parse options820. At block822a platform identify operation is implemented. At block824a second setup operation is started to identify the temporary file storage paradigm (tmpfs)826, a console828, and one or more hooks830. At block832a preparation operation is conducted and at block834options are declared. At block836a reboot is prepared and at operation838command arguments are saved. At block840an image of the filesystem is fetched, at block842the version is set, and at block844the version is checked. If, at block846, the update is a uRTFI, then at block848the generation of the filesystem is updated. At block850the banks are set up and at block852the kexec is set up. At operation854a new kernel and/or initramfs is extracted and at block856a new config file for a uRTFI is set up. At block858the mode is set to uRTFI. At operation860a systemd shutdown is implemented.
If, at block862, the update is a uRTFI, then at operation864a kexec is implemented. uRTFI Node Reset FIG.9is a flow diagram illustrating operations in a uRTFI node reset process, according to embodiments. Referring toFIG.9, at block905a sfnodereset operation is initiated. At block910a sfrtfi_inplace operation is initiated. At block920a new squash file system is downloaded. At block930the uRTFI config file934is set and at block940banks are updated. At block912RTFI then kexecs to the init script at block914in the new image. At block922a setup is performed to identify devices924, modules926and drives928. At block934the runtime is prepared, and at block936one or more overlays are configured and at block938a file system is mounted in the overlay. At block942a switch_root is set up and at block944the switch_root is executed. At block950post install handling is run in the new image, and at block952a bootloader for initramfs is set up. Any further custom work is performed at block954. FIG.10is a flow diagram illustrating operations in a uRTFI node reset process, according to embodiments. Referring toFIG.10, at block1002a sfnodereset operation is started. At block1004a sfrtfi_inplace operation is implemented and at operation1006a sf agent-sfnodereset operation is implemented. At block1012a setup operation is started to generate a timestamp1014, a source1016, a generation1018, and to identify one or more parse options1020. At block1022a platform identify operation is implemented. At block1024a second setup operation is started to identify the temporary file storage paradigm (tmpfs)1024, a console826, and one or more hooks1028. At block1032a preparation operation is conducted and at block1034options are declared. At block1036a reboot is prepared and at operation1038command arguments are saved. At block1040an image of the filesystem is fetched, at block1042the version is set, and at block1044the version is checked. If, at block1046, the update is a uRTFI, then at block1048the generation of the filesystem is updated. At block1050the banks are set up and at block1052the kexec is set up. At operation1054a new kernel and/or initramfs is extracted and at block1056a new config file for a uRTFI is set up. At block1058the mode is set to uRTFI. At operation1060a systemd shutdown is implemented. If, at block1062, the update is a uRTFI, then at operation1064a kexec is implemented. uRTFI Inplace Upgrade: Pre-uRTFI to uRTFI FIG.11is a flow diagram illustrating operations in an upgrade process, according to embodiments. Referring toFIG.11, in some embodiments an sfrtfi_inplace operation is initiated at block1102. At block1104a new squashfs is downloaded. At block1106the process pivots to an sfrtfi and at block1108a kexec is executed. The install starts at block1110. At block1112one or more backups are generated and at block1114one or more keep paths are archived. At block1120the RTFI process checks for partitions on the root drive. In some examples the RTFI process finds the root drive at block1122, mounts one or more filesystems at block1124, and one or more files are copied to the filesystems at block1126. At block1130old directories are cleared and at block1132one or more overlays are set up, and at block1134a file system is mounted in the one or more overlays. At block1136the keep paths archived in block1114are restored and the root directory is changed to point to the image1140.
At block1150a cleanup operation is implemented, which may include setting up a bootloader initramfs at block1152and appending a custom initialization and config file at block1154. uRTFI Inplace Downgrade FIGS.12-13are flow diagrams illustrating operations in a downgrade process, according to embodiments. Referring toFIG.12, at block1210a sfrtfi_inplace operation is initiated. At block1220the Y version squashfs is downloaded to a running system. If, at block1230, this is not a downgrade process then a uRTFI to uRTFI process is initiated as described with reference toFIG.8. By contrast, if at block1230this is a downgrade process then at block1214the system is set to an sfrtfi_inplace oneshot. At block1250the uRTFI config file is updated. At block1212RTFI then kexecs to the init script at block1214in the new image. At block1222a setup is performed to identify devices1224, modules1226and drives1228. At block1234the old directories are prepared, and at block1236one or more overlays are configured and at block1238the uRTFI is unpacked. At block1239the mounts are fixed up. At block1242a switch_root is set up and at block1244the switch_root is executed. On failure, the processing indicated inFIG.13is activated. Referring toFIG.13, the process begins at block1305with a sfrtfi_inplace in the destination image. At block1310a sfrtfi_rollback is in the uRTFI image. If, at block1316, there is not an rtfi-classic in the config file (urtfi.cfg) then at block1320the process reverts to a traditional rollback. By contrast, if at block1316, there is an rtfi-classic in the config file (urtfi.cfg) then at block1330an overlay directory is rebuilt, at block1335the bootloader is set up, at block1340a kexec operation is set up, and at block1350a config file is merged. At block1312RTFI then kexecs to the init script at block1314in the new image. At block1322a setup is performed to identify devices1324, modules1326and drives1328. At block1334a runtime is prepared, and at block1336one or more overlays are configured and at block1238the filesystem is mounted in the overlay. At block1342a switch_root is set up and at block1344the switch_root is executed. On failure, the processing indicated inFIG.13is activated. FIG.14is a flow diagram illustrating operations in a downgrade process from uRTFI to a pre-uRTFI, according to embodiments. Referring toFIG.14, at block1410an irtfi_inplace operation is started. At block1412a setup operation is started to generate a timestamp1414, a source1416, a generation1418, and to identify one or more parse options1420. At block1422a platform identify operation is implemented. At block1424a second setup operation is started to identify the temporary file storage paradigm (tmpfs)1426, a console1428, and one or more hooks1430. At block1432a preparation operation is conducted and at block1434options are declared. At block1436a reboot is prepared and at operation1438command arguments are saved. At block1440an image of the filesystem is fetched, at block1442the version is set, and at block1444the version is checked. If, at block1446, the update is a uRTFI, then at block1448the generation of the filesystem is updated. At block1450the banks are set up and at block1452the kexec is set up. At operation1454a new kernel and/or initramfs is extracted and at block1456a new config file for a uRTFI is set up. At block1458the mode is set to uRTFI. At operation1460a systemd shutdown is implemented.
If, at block1462, the update is a uRTFI, then at operation1464a downgrade is set up and at operation1466a kexec is implemented. Example Computer System Embodiments of the present disclosure include various steps, which have been described above. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a processing resource (e.g., a general-purpose or special-purpose processor) programmed with the instructions to perform the steps. Alternatively, depending upon the particular implementation, various steps may be performed by a combination of hardware, software, firmware and/or by human operators. Embodiments of the present disclosure may be provided as a computer program product, which may include a non-transitory machine-readable storage medium embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). Various methods described herein may be practiced by combining one or more non-transitory machine-readable storage media containing the code according to embodiments of the present disclosure with appropriate special purpose or standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present disclosure may involve one or more computers (e.g., physical and/or virtual servers) (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps associated with embodiments of the present disclosure may be accomplished by modules, routines, subroutines, or subparts of a computer program product. FIG.15is a block diagram that illustrates a computer system1500in which or with which an embodiment of the present disclosure may be implemented. Computer system1500may be representative of all or a portion of the computing resources associated with a storage node (e.g., storage node136), a performance manager (e.g., performance manager138), a monitoring system (e.g., monitoring system230) or an administrative work station (e.g., computer system110). Notably, components of computer system1500described herein are meant only to exemplify various possibilities. In no way should example computer system1500limit the scope of the present disclosure. In the context of the present example, computer system1500includes a bus1502or other communication mechanism for communicating information, and a processing resource (e.g., a hardware processor1504) coupled with bus1502for processing information. Hardware processor1504may be, for example, a general purpose microprocessor. Computer system1500also includes a main memory1506, such as a random access memory (RAM) or other dynamic storage device, coupled to bus1502for storing information and instructions to be executed by processor1504.
Main memory1506also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor1504. Such instructions, when stored in non-transitory storage media accessible to processor1504, render computer system1500into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system1500further includes a read only memory (ROM)1508or other static storage device coupled to bus1502for storing static information and instructions for processor1504. A storage device1510, e.g., a magnetic disk, optical disk or flash disk (made of flash memory chips), is provided and coupled to bus1502for storing information and instructions. Computer system1500may be coupled via bus1502to a display1512, e.g., a cathode ray tube (CRT), Liquid Crystal Display (LCD), Organic Light-Emitting Diode Display (OLED), Digital Light Processing Display (DLP) or the like, for displaying information to a computer user. An input device1514, including alphanumeric and other keys, is coupled to bus1502for communicating information and command selections to processor1504. Another type of user input device is cursor control1516, such as a mouse, a trackball, a trackpad, or cursor direction keys for communicating direction information and command selections to processor1504and for controlling cursor movement on display1512. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Removable storage media1540can be any kind of external storage media, including, but not limited to, hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc-Read Only Memory (CD-ROM), Compact Disc—Re-Writable (CD-RW), Digital Video Disk-Read Only Memory (DVD-ROM), USB flash drives and the like. Computer system1500may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware or program logic which in combination with the computer system causes or programs computer system1500to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system1500in response to processor1504executing one or more sequences of one or more instructions contained in main memory1506. Such instructions may be read into main memory1506from another storage medium, such as storage device1510. Execution of the sequences of instructions contained in main memory1506causes processor1504to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data or instructions that cause a machine to operation in a specific fashion. Such storage media may comprise non-volatile media or volatile media. Non-volatile media includes, for example, optical, magnetic or flash disks, such as storage device1510. Volatile media includes dynamic memory, such as main memory1506. Common forms of storage media include, for example, a flexible disk, a hard disk, a solid state drive, a magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. 
Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus1502. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor1504for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system1500can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus1502. Bus1502carries the data to main memory1506, from which processor1504retrieves and executes the instructions. The instructions received by main memory1506may optionally be stored on storage device1510either before or after execution by processor1504. Computer system1500also includes a communication interface1518coupled to bus1502. Communication interface1518provides a two-way data communication coupling to a network link1520that is connected to a local network1522. For example, communication interface1518may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface1518may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface1518sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link1520typically provides data communication through one or more networks to other data devices. For example, network link1520may provide a connection through local network1522to a host computer1524or to data equipment operated by an Internet Service Provider (ISP)1526. ISP1526in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”1528. Local network1522and Internet1528both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link1520and through communication interface1518, which carry the digital data to and from computer system1500, are example forms of transmission media. Computer system1500can send messages and receive data, including program code, through the network(s), network link1520and communication interface1518. In the Internet example, a server1530might transmit a requested code for an application program through Internet1528, ISP1526, local network1522and communication interface1518. The received code may be executed by processor1504as it is received, or stored in storage device1510, or other non-volatile storage for later execution.
62,526
11861341
DESCRIPTION OF EMBODIMENTS To make a person skilled in the art understand the technical solutions in the embodiments of this application better, and make the objectives, features, and advantages of the embodiments of this application clearer, the following further describes the technical solutions in the embodiments of this application in detail with reference to the accompanying drawings. First, names and concepts in the embodiments of this application are described. (1) Immutable Build Infrastructure (IBI) An immutable build infrastructure IBI may also be referred to as a standard build infrastructure. Specifically, the IBI may be a standard build system. The system is configured to perform an entire software development process, namely, implement a process from development of source code to generation of a binary product installation package. For example, the process includes: environment building, generating an image file in the build environment, deploying a node based on the image file, and receiving and executing a build task by the node. In this process, each phase in the IBI cannot be changed, and can be modified only by application through an external port and therefore cannot be changed in an IBI-based system. This ensures security and stability when the IBI is used for DevOps. (2) Build Infrastructure Code (BIC) BIC is build infrastructure code, and the BIC may include two parts of code: one part is product pipeline code, and the other part is running environment code. The running environment code in the BIC is executed to obtain an image file in the running environment. The image file is used to generate one or more nodes. When the product pipeline code in the BIC is executed, source code corresponding to a product needs to be obtained, so as to obtain a binary package of the product, for example, a binary product installation package or a binary application package. The running environment code provides a related parameter and information for executing the product pipeline code. Further, various software development kit (SDK) libraries and third-party dependencies are described in the product pipeline code. The running environment code describes a programming language and a running environment (runtime), data package management, project management, a compilation tool, and all configuration parameters. (3) Domain-Specific Language (DSL) A DSL is a computer language used to write an application, for example, a C language or a Java language. In the embodiments of this application, the DSL specifically refers to a computer language used to write build infrastructure code BIC. Second, an application scenario of the technical solutions of this application is described. The technical solutions of this application may be applied to an IBI system, use the IBI system to build a plurality of secure, stable, and compliant environments, and can provide build services based on requirements of different products in the environments. This can quickly and effectively find target nodes corresponding to each request message, to improve job dispatching efficiency. The technical solutions may be applied to all environment-as-code and dynamic job dispatching scenarios, such as a development environment, a test environment, and a running environment. In addition, the technical solutions can also be applied to any application scenario requiring dual delivery, for example, a service scenario of an e-commerce application. 
When delivering an application to a customer, the IBI system or a cloud platform system also delivers a DevOps environment on all cloud platforms, to meet market requirements of improving software development efficiency and quickly responding in a unified DevOps environment. FIG.1is a schematic diagram of a structure of an IBI system according to an embodiment of this application. The IBI system includes at least one client, a front-end dashboard1, a reverse proxy server2, a service cluster3, a BIC repository4, a product code repository5, a cache6, and an image repository7. Specifically, the client may be a terminal device, and is a device that provides a service and/or data connectivity for a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, for example, a wireless terminal. Further, the wireless terminal may communicate with one or more nodes over a radio access network (RAN). The wireless terminal may be a mobile terminal, such as a mobile phone (also referred to as a “cellular” phone) and a computer with a mobile terminal, for example, may be a portable, pocket-sized, handheld, computer built-in, or vehicle-mounted mobile apparatus, which exchanges a language and/or data with the radio access network. For example, the wireless terminal may be a device such as a personal communications service (PCS) phone, a cordless telephone set, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, or a personal digital assistant (PDA). The wireless terminal may also be a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, an access point (AP), a remote terminal, an access terminal, a user terminal, a user agent, a user device, or user equipment (UE). A specific technology and a specific device form used by a terminal device are not limited in the embodiments of this application. The front-end dashboard (e.g., front-end server)1or a dashboard for short is configured to receive a configuration parameter entered by at least one client. The configuration parameter is used to configure a build environment required by a user, generate BIC, and start a build job procedure based on the BIC. Further, the configuration parameter includes but is not limited to a node resource, an operating system, programming language runtime, a compiler and configuration, a third-party dependency, and the like. Further, the node resource includes, for example, a quad-core CPU and a 16 GB memory. The operating system includes, for example, Ubuntu (Linux) and an IOS system. The programming language runtime includes, for example, JVM (Java). The third-party dependency may be depending on a component module of a product open source, or the like. The reverse proxy server2is configured to: after various build environments required by the user are configured in the dashboard, and a resource pool is built, when at least one node in different build environments is deployed in the resource pool, receive a request message; search the resource pool for a target node based on the request message; and deliver the request message to the target node, so that the target node can complete a build task of a product installation package based on content of the request message. 
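For illustration only, the following Python sketch shows one possible way to represent the configuration parameter that a user enters through the front-end dashboard 1. The field names and example values are assumptions chosen for readability; the embodiment only enumerates the kinds of information involved (node resource, operating system, programming language runtime, compiler and configuration, and third-party dependencies) and does not prescribe a concrete schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BuildEnvironmentConfig:
    # Illustrative container for the dashboard configuration parameter.
    # All field names are assumptions, not part of the embodiment.
    cpu_cores: int                    # node resource, e.g., a quad-core CPU
    memory_gb: int                    # node resource, e.g., a 16 GB memory
    operating_system: str             # e.g., "Ubuntu"
    language_runtime: str             # e.g., "JVM" for Java
    compiler: str                     # compiler and its configuration
    third_party_deps: List[str] = field(default_factory=list)

# Example entry matching the values mentioned above.
example_config = BuildEnvironmentConfig(
    cpu_cores=4,
    memory_gb=16,
    operating_system="Ubuntu",
    language_runtime="JVM",
    compiler="javac 1.8",
    third_party_deps=["open-source component module"],
)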
The service cluster3may be a resource pool including at least one node, and is configured to: receive the request message from the reverse proxy server, and execute the pre-generated BIC, namely, execute product pipeline code (a part of code in the BIC) in a specific running environment, to obtain the corresponding product installation package, and complete the build task. The node includes but is not limited to a processor, a chip unit, a logic circuit, and the like. In an embodiment, the service cluster3further includes a container orchestrator (or referred to as an infrastructure server), and is configured to set and record a state of each node in the service cluster. The state of each node includes an online state and an offline state. In addition, the front-end dashboard1and the reverse proxy server2may be integrated into one device, for example, a server; or may be separately deployed. This is not specifically limited in this embodiment. The BIC repository4is configured to store the BIC generated by the front-end dashboard1for invoking by the service cluster3. The product code repository5is configured to store product source code. When the selected target node performs a build job, the target node obtains the product source code from the product code repository5, executes the product pipeline code in the predefined BIC, and finally obtains the binary product installation package. The cache6may be configured to store a “node state table”. The node state table includes at least one correspondence. Each correspondence is a relationship between one node and one piece of product information. The product information includes a product name and a version number, for example, a node 1, a product A, and a version number 0.0.1. In addition, each correspondence further includes a state of the node, and the state includes an idle state and a busy state. Further, the idle state may be represented by “1”, and the busy state may be represented by “0”. In addition, each correspondence may further include more information, for example, an IP address of each node, and a port number of each node. For example, a correspondence between a node and product information is: a node 1, a product A-0.1.1-192.168.0.12-2315, 0. It can be interpreted as that an IP address of the node 1 is 192.168.0.12 and a port number of the node 1 is 2315. The node 1 can be used to execute a build task of the product A whose version number is 0.1.1 in a specific environment, so as to obtain an installation package of the product A whose version number is 0.1.1. A current state of the node 1 is idle, and can execute the build task immediately after a request message is received. The image repository7is configured to store an image file of each node, and the image file is used to deploy and generate each node. Further, the image file is generated by the reverse proxy server2when the BIC is executed. In addition, the image file can be copied. Therefore, the image file may be copied to generate or deploy a plurality of nodes. These nodes can be used to build installation packages of same products in a same environment. The following describes methods provided in the embodiments of this application. An embodiment provides a node selection method. The method may be performed by a server or a processor in an IBI system, and the server or the processor has functions of the foregoing dashboard and the reverse proxy server. Further, as shown inFIG.2, the method includes the following operations. 
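For illustration only, the following Python sketch models the node state table held in the cache 6 as a plain dictionary. The cache key layout (product name-version-IP address-port) follows the example given above; the sketch uses "0" for idle and "1" for busy, matching the cache-entry example and Table 1 later in this description, and a real deployment would keep this table in a shared cache rather than in process memory.

IDLE, BUSY = "0", "1"   # convention used in the cache-entry example and Table 1

def make_cache_key(product: str, version: str, ip: str, port: int) -> str:
    # Cache key layout from the example above: product-version-IP-port.
    return f"{product}-{version}-{ip}-{port}"

# A small node state table with one idle and one busy node.
node_state_table = {
    make_cache_key("prodA", "0.1.1", "192.168.0.12", 2315): IDLE,
    make_cache_key("prodA", "0.1.1", "192.168.0.13", 3467): BUSY,
}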
Operation201: Receive a request message, where the request message is used to request to provide an installation package of a product required by a user, and the request message carries product information that uniquely identifies the product required by the user. The product information includes a product name and a version number corresponding to the product name, and may further include a product type and the like. In addition, the request message may be from an external system or a client. Operation202: Search, based on the product information in the request message, a node state table for a target node corresponding to the product information, where the node state table includes at least one correspondence, each correspondence is a relationship between one node and one piece of product information, and a state of each node in the node state table is idle. In an embodiment, operation202includes: The server searches the node state table for product information with the same product name and the same version number, and uses the product information as target product information. Then the server determines the target node based on the target product information and a correspondence of the target product information. The node state table is used to reflect conditions of each node in a current service cluster resource pool, including a node number, a name (or a type) of a product that can be built by each node, a version number, a node IP address, a node state, and the like. For example, a current request message includes a product name such as WeChat and a corresponding version number 10.0.1. If the server selects three nodes that can provide a build service for the request message of WeChat from the node state table, the server sequentially selects a first node as a target node. Operation203: Send the request message to the target node, so that the target node builds the corresponding product installation package for the product required by the user. In an embodiment, after receiving the request message, the target node executes pre-generated BIC code based on the product name and the version number that are carried in the request message, to obtain the corresponding product installation package. The pre-generated BIC code is used to provide a basis for generating a product installation package in a specific environment. In addition, a process of generating BIC code is performed before operation201. The server receives a configuration parameter entered by the user, where the configuration parameter is used to configure at least one build environment required when the user expects to build a product; and generates at least one piece of build environment code BIC based on the configuration parameter, and runs each piece of BIC to obtain an image file corresponding to the piece of BIC. Then the server generates at least one node based on the image file, where one node serves a build environment corresponding to one image file, and generates a unique product name and a unique version number in the build environment; establishes the at least one correspondence based on each node, and a unique product name and a unique version number generated by the node in the build environment; and finally generates the node state table based on the at least one correspondence and the state of each node. 
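For illustration only, the following Python sketch mirrors the lookup in operation 202: it scans a node state table (modeled as a dictionary of cache keys, as in the previous sketch) for the first idle node whose product name and version number match the request message. The function name and the in-memory table are assumptions for readability.

IDLE = "0"   # idle state, as in the cache-entry example

def find_target_node(node_state_table: dict, product: str, version: str):
    # Operation 202: return the cache key of the first idle node whose
    # product name and version number match the request message.
    prefix = f"{product}-{version}-"
    for cache_key, state in node_state_table.items():
        if cache_key.startswith(prefix) and state == IDLE:
            return cache_key   # e.g., "prodA-0.1.1-192.168.0.12-2315"
    return None

# Operation 203 would then send the request message to the node identified by
# the returned key; its IP address and port number are embedded in the key.
table = {"prodA-0.1.1-192.168.0.12-2315": IDLE}
target = find_target_node(table, "prodA", "0.1.1")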
In an embodiment, BIC is generated based on a configuration parameter, the BIC is executed to generate a plurality of image files, and a plurality of nodes are deployed based on the image files, to form a service cluster resource pool for executing a build job to complete product installation package build. Because each node can execute a build job for a build environment, conditions of all nodes are integrated into a node state table of the resource pool to prepare for quick target node searching subsequently. In an embodiment, the target node corresponding to the product information can be quickly found based on the product information in the request message and the correspondence included in the node state table, and the target node is idle. In this way, after the request message is sent to the target node, the target node can quickly and effectively execute the product build job in the request message, and build the product installation package. Therefore, the method improves efficiency of searching for the target node in the resource pool. In addition, the state of each node is dynamically updated in the node state table. Data on a type of nodes is increased or decreased to ensure that a specific quantity of nodes are available in the cluster and can provide services for this type of product at any time. In an embodiment, for example, the service cluster has a plurality of first nodes, the first node is configured to build an installation package of a first product in a first environment, and the target node is one of the plurality of the first nodes. After operation203is performed. The server marks a state of the target node as “busy” in the node state table; detects whether a quantity of idle-state first nodes in the node state table is less than a first threshold; and if the quantity of idle-state first nodes in the node state table is less than the first threshold, configures and increases the quantity of first nodes, so that an increased quantity of first nodes is not less than the first threshold. The first threshold may be determined based on a speed at which the server starts and establishes the first node. Because it takes relatively long time to build one node based on an image file, the quantity of first nodes is pre-deployed and controlled, so that when a request message is received again, a target node can be quickly found to provide a service. Likewise, when the target node executes the build task in the request message, the server marks the state of the target node as “idle”, and updates the node state table. If the quantity of idle-state first nodes in the node state table exceeds a second threshold, the server decreases the quantity of first nodes, so that a decreased quantity of first nodes does not exceed the second threshold. In an embodiment of this application,FIG.3is a schematic diagram of a structure of another IBI system architecture. The diagram is extension based on the system inFIG.1. In an embodiment, the system includes a server302, a BIC repository303, a pipeline304, an image repository305, a service cluster (node307/308/309), a container orchestrator310, a cache311, a product code repository312, and the like. In addition, at least one client301and an external system306are further included outside the system. The server302includes a dashboard and a reverse proxy server. The system is configured to execute a method procedure shown inFIG.4, to quickly and efficiently search for a node in a specific environment. 
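For illustration only, the following Python sketch captures the threshold logic described above for one node type: if the number of idle first nodes drops below the first threshold, more nodes are added; if it exceeds the second threshold, surplus idle nodes are removed. The add_node and remove_node callables are assumed helpers standing in for deploying a node from its image file and for taking an idle node out of service.

IDLE = "0"

def rebalance_idle_nodes(node_states: dict, first_threshold: int,
                         second_threshold: int, add_node, remove_node) -> None:
    # node_states maps node identifiers of a single type to "0" (idle) or "1" (busy).
    idle_nodes = [node for node, state in node_states.items() if state == IDLE]
    if len(idle_nodes) < first_threshold:
        # Too few idle nodes: deploy more so the count is not less than the first threshold.
        for _ in range(first_threshold - len(idle_nodes)):
            add_node()
    elif len(idle_nodes) > second_threshold:
        # Too many idle nodes: release the surplus so the count does not exceed the second threshold.
        for node in idle_nodes[: len(idle_nodes) - second_threshold]:
            remove_node(node)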
In an embodiment, as shown inFIG.4, the method includes two parts of procedures. A first part of the procedure is from operation401to operation405and is mainly that the server performs functions of environment configuration, BIC generation, and node deployment. A second part of the procedure is from operation406to operation412and is mainly that the server performs a function of determining a target node, and dispatching a request message to the target node, and the target node completes a build task based on the request message. In an embodiment, the first part of the procedure includes the following operations. Operation401: The server302receives a configuration parameter entered by a user, and configures, based on the configuration parameter, a build environment required by the user. Further, the configuration parameter includes but is not limited to a node resource, an operating system, programming language runtime, a compiler and configuration, a third-party dependency, and the like. In an embodiment, software development, test, and operation and maintenance personnel enter the configuration parameter by using the dashboard in the server. Operation402and operation403: The server302modifies existing BIC based on the configuration parameter and a template to generate build environment code BIC. If a plurality of pieces of BIC are generated, each piece of BIC includes “product pipeline code” and “running environment code”. In an embodiment, the server302stores one or more pieces of BIC in the BIC repository303. For example, in this embodiment, if the plurality of pieces of BIC are generated, a relationship between these pieces of BIC may be as follows. A product A (prodA) generates BIC 1 in a build environment 1; the product A generates BIC 2 in a build environment 2; and the product A generates BIC 3 in a build environment 3, where the BIC 1, the BIC 2, and the BIC 3 are stored in a same BIC repository, but are stored in three branches of the same BIC repository. Operation404: In the process of generating the BIC in operation403, when “modifying the existing BIC”, the method further includes: The server302triggers a BIC pipeline (for example, obtaining a basic image, and installing various compilation tools and dependencies) to generate one or more image files. In an embodiment, the server302executes product pipeline code (“pipeline” for short) in the BIC 1 to generate an image file 1 corresponding to the BIC 1. Likewise, the server302executes a “pipeline” in the BIC 2 to generate an image file 2 corresponding to the BIC 2. The server302executes a “pipeline” in the BIC 3 to generate an image file 3 corresponding to the BIC 3. In an embodiment, the image file 1, the image file 2, and the image file 3 are stored in the image repository305. Operation405: The server302generates at least one node based on the image file (1/2/3) stored in the image repository, for example, generates a node 1, a node 2, and a node 3, to form a resource pool in a service cluster. For example, the node 1 is configured to provide a service for the product A (prodA) in the build environment 1. The node 2 is configured to provide a service for the product A (prodA) in the build environment 2. The node 3 is configured to provide a service for the product A (prodA) in the build environment 3. In addition, different types of products may be included. 
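For illustration only, the following Python sketch summarizes the first part of the procedure (operations 402 to 405): one piece of BIC is generated per configured build environment, each piece of BIC is executed to produce an image file, and nodes are deployed from those image files to form the resource pool. The generate_bic, build_image, and deploy_node callables are assumed helpers that stand in for modifying the existing BIC from a template, triggering the BIC pipeline, and creating a node from an image file.

def provision_resource_pool(product: str, environments: list,
                            generate_bic, build_image, deploy_node) -> list:
    # Operations 402-405: BIC -> image file -> node, once per build environment.
    pool = []
    for env in environments:
        bic = generate_bic(product, env)    # e.g., BIC 1 for build environment 1
        image = build_image(bic)            # e.g., image file 1, stored in the image repository
        node = deploy_node(image)           # e.g., node 1 serving prodA in environment 1
        pool.append(node)
    return pool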
The container orchestrator 310 generates a “node state table” based on a state (including three states: busy, idle, and online) of each node, and stores the “node state table” in the cache 311. In an embodiment, a process of building the node state table includes building at least one correspondence. Each correspondence is a correspondence between a cache key and a cache value. Further, the cache key includes a product name or a product type and a version number, an IP address of a node, a port number of the node, and the like. The cache value is used to indicate a node state. The node state includes “idle” and “busy”. Further, “0” indicates idle, and “1” indicates busy. Further, a process of determining the cache value includes the following. As shown in FIG. 5, each node 307 (or 308, 309) in the service cluster includes three modules: a hypertext transfer protocol (HTTP) module 3011, a monitoring module 3012, and a product pipeline module 3013. The HTTP module 3011 is configured to receive a request message sent by the server 302, and after receiving the request message, execute, by using the product pipeline module 3013, a build job corresponding to the request message, to generate an installation package of a corresponding product. In addition, the HTTP module 3011 is further configured to convert the request message into an HTTP format. The monitoring module 3012 is configured to: monitor whether the build job is completed; and if the build job is completed, configure a state of the node as “idle”, or if the build job is not complete or is being executed, configure a state of the node as “busy”. In an embodiment, the HTTP module 3011 is further configured to register the node 307 (or 308, 309). For example, after the node starts the build job, the HTTP module 3011 is used to register related information of the node with the cache 311. The related information of the node includes an IP address of the node, a port number of the node, an environment for executing the build job, a product type, a version number, and the like. For example, a registered cache entry includes a cache key and a cache value. The cache key is product type-version number-IP address-port number. The cache value is 0 (default). Generally, when there is no request message allocated to the node, the HTTP module 3011 sets the cache value of the node to “0”. When receiving a request message, the HTTP module 3011 sets the cache value of the node to “1”. The HTTP module 3011 does not receive any new request message in the busy state until the node completes the current build task and sets the state of the node back to “0”. Table 1 shows a node state table. The node state table includes a correspondence between a node number of at least one build node, a cache key, and a cache value.

TABLE 1
Node number | Cache key | Cache value
1 | prodA-0.1.1-192.168.0.12-2315 | 0
2 | prodA-0.1.1-192.168.0.13-3467 | 1
3 | prodB-1.1.2-192.168.0.14-23215 | 1
4 | prodB-1.1.2-192.168.0.15-22347 | 0
5 | prodC-2.1-192.168.0.16-3659 | 0
6 | prodC-2.1-192.168.0.17-2459 | 1

In addition, it should be noted that all nodes listed in Table 1 are in an online state, namely, they can provide services for a request message and build corresponding product installation packages. In an embodiment, whether each build node is in the online state may be set by the container orchestrator 310, and is reflected in the node state table. The server 302 maintains the online state of each node in the node state table.
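For illustration only, the following Python sketch shows the node-side behavior just described: the node registers a cache entry with a default value of "0" on startup, the HTTP-facing handler marks the entry "1" while the product pipeline runs a build job, and the monitoring step resets it to "0" when the job completes. The cache is modeled as a plain dictionary; the class and method names are assumptions, and a real node would write to the shared cache 311.

IDLE, BUSY = "0", "1"

class BuildNode:
    def __init__(self, cache: dict, product: str, version: str, ip: str, port: int):
        self.cache = cache
        self.key = f"{product}-{version}-{ip}-{port}"
        self.cache[self.key] = IDLE          # registration: cache value defaults to "0"

    def handle_request(self, run_build_job) -> None:
        if self.cache[self.key] == BUSY:
            raise RuntimeError("node is busy; new request messages are not accepted")
        self.cache[self.key] = BUSY          # mark busy when the request message arrives
        try:
            run_build_job()                  # product pipeline module executes the build job
        finally:
            self.cache[self.key] = IDLE      # monitoring step: job finished, back to idle

cache = {}
node = BuildNode(cache, "prodA", "0.1.1", "192.168.0.12", 2315)
node.handle_request(lambda: None)            # placeholder build job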
If the server 302 detects that a node is in an offline state, the server 302 deletes the node from the node state table, to prevent the node from receiving and executing a build task again. In an embodiment, the node state table is stored in the cache 311. The following describes structures and functions of the “product pipeline code” and the “running environment code” of the BIC generated in operation 403. The product pipeline code includes three execution phases: installation, build, and deployment. Product source code is obtained in the installation phase. In an embodiment, the product source code may be obtained from the product code repository 312. Test scripts are executed in the build phase. Image files are generated and a plurality of nodes are deployed in the deployment phase to form a service cluster. When the three execution phases of the product pipeline code are executed, parameters need to be entered, or related information needs to be provided. The related information may be provided by the running environment code, namely, the product pipeline code is executed under the parameter conditions defined by the running environment code, to obtain a node that serves a specific environment. For example, the configuration parameters entered externally include: language_runtime (programming language runtime), sdk_version (SDK and version), repository (product source code repository), and the like. To facilitate identification of a configuration parameter, a configuration parameter that needs to be entered may be marked with the symbol “$”. For example, $language_runtime indicates a variable parameter of the programming language runtime. An embodiment provides an example of running environment code. The example describes the running environment (parameters) required for executing the “product pipeline code”:

language: $language_runtime (parameter of the programming language runtime, such as a Java virtual machine)
sdk: $sdk_version (SDK and version, for example, JDK 1.7)
stages:
  setup (installation step)
  build (build step)
  deploy (deployment step)
setup:
  git pull $repository (script in the installation step, which pulls source code from a git repository)
  paths: <path of source code> (path for storing source code)
build:
  script: <run build script in $repository> (execute product code in a code repository to obtain a build script)
  artifacts:
    paths: <path of targets> (path for storing the build result)
deploy:
  script: <path of script to deploy build artifacts> (script used to deploy the build result)
env:
  language_runtime: {{"LANGUAGE_RUNTIME"} parameter}
  sdk_version: {{"SDK_VERSION"} parameter}
  repository: {{"SOURCE_CODE_REPOSITORY"} parameter}

Two types of service objects are defined in the running environment code: builders and provisioners. The builder part includes the type of product for executing the build job, a node access permission (such as an access key access_key and a secret key secret_key), a resource region (for example, a node where the server is deployed is in East China or North China), an image file name (image_name), a basic image type and address (base_image), an access username (ssh_username), and an instance type (instance_type). For example, an instance type 4U8G indicates a node with a quad-core CPU and an 8-GB memory. In addition, a plurality of builders can be defined.
Different builders can build different types of running environments, so that different types of nodes can be integrated into a service cluster in a plurality of phases. For example, two builders can define two types of running environments and deploy a build node 1 and a build node 2 accordingly to execute build jobs in the two running environments. The provisioner part of the running environment code defines one or more units. These units can be configured to install or configure tool software that a running environment depends on. In addition, a plurality of provisioners can be defined, such as ansible-local and shell. The variables part is used to enter external parameters, such as an access key, a secret key, or an image address. In addition, some data is required in the product pipeline code, including code and build scripts of the provisioners, such as playbook_file, role_path, and script_path. Various configuration management tool components and related extensions and dependencies are stored in the product code repository 312. The following shows a piece of product pipeline code.

{
  "variables": { (variable definitions, which can be passed in from outside)
    "access_key": "{{env 'ACCESS_KEY'}}", (access key)
    "secret_key": "{{env 'SECRET_KEY'}}", (secret key)
    "base_image1": "{{env 'BASE_IMG1'}}", (basic image file address 1)
    "instance_type1": "{{env 'INST_TYPE1'}}", (image file instance type 1)
    "base_image2": "{{env 'BASE_IMG2'}}", (basic image file address 2)
    "instance_type2": "{{env 'INST_TYPE2'}}", (image file instance type 2)
    "playbook_file": "{{env 'ANSIBLE_PLAYBOOK'}}", (file address of the ansible playbook)
    "role_path": "{{env 'ANSIBLE_ROLE'}}", (address of the ansible role)
    "script_path": "{{env 'SHELL_SCRIPT'}}" (script address of the shell script)
  },
  "builders": [{
    "type": "cloud-vm", (basic image file type; cloud-vm indicates a cloud virtual machine)
    "access_key": "{{user 'access_key'}}",
    "secret_key": "{{user 'secret_key'}}",
    "region": "cn", (region; cn indicates that the cloud virtual machine is in China)
    "image_name": "build-node1", (node name)
    "base_image": "{{user 'base_image1'}}",
    "ssh_username": "root", (ssh access user name)
    "instance_type": "{{user 'instance_type1'}}"
    . . .
  }],
  "provisioners": [{
    "type": "ansible-local", (ansible-local is used as a node configuration mode)
    "playbook_file": "ansible/{{user 'playbook_file'}}",
    "role_paths": ["{{user 'role_path'}}"]
  }, {
    "type": "shell", (a shell script is used as a node configuration mode)
    "script_path": "{{user 'script_path'}}"
  }]
}

In an embodiment, the server executes the “running environment code” to form an image file, and generates and deploys at least one node, to form the resource pool in the service cluster. Each node in the resource pool can execute a job task in the running environment corresponding to the node. After the nodes are established and deployed, when one or more nodes receive a request message or service request, the “product pipeline code” is run to generate a binary package of the corresponding product from the source code in the product code repository for subsequent installation use. It should be noted that this embodiment merely lists the “running environment code” and the “product pipeline code” of one piece of BIC, which may further include more or less other content. This is not limited in this embodiment. In addition, once the BIC code is generated, the BIC code cannot be modified, namely, a user or a developer is not allowed to directly modify the running environment on the node.
If the running environment code of a product needs to be modified, a client needs to submit a BIC modification request, and the server reviews the modification request. If the modification request is approved, the modification is allowed. The modification triggers the pipeline to execute the BIC to generate the image file. According to the method disclosed in this embodiment, the server presents different environments as code by using the configuration parameter, designs and customizes an efficient extender and interpreter based on an existing DSL language, and presents the build environment (the “running environment code” and the “product pipeline code”) as code in the form of the BIC. This implements the beneficial effects of a replicable target build environment, a repeatable build process, and checkable and inspectable build deployment; and resolves black-box, manual, and non-repeatable issues in building the environment. As shown in FIG. 4, the method further includes the second part of the procedure. Specifically, the following operations are included. Operation 406: The user sends a request message to the server through the client 301. Alternatively, operation 407: The external system 306 initiates a request message to the server. The request message is used to request to provide an installation package of a product required by the user. Further, the request message includes a product type/product name and a version number. The product type and the version number, or the product name and the version number, are used to uniquely determine the product required by the user, for example, WeChat 10.0.1. In addition, the build environment required by the product name and the version number that are carried in the request message is one of the pre-configured environments, namely, a build environment used to generate an installation package of the product name “WeChat” and the version number “10.0.1” has been prepared in advance in the first part of the procedure. In an embodiment, the external system includes project management, an integrated development environment (IDE), code hosting, test, release, deployment, and the like that can be seamlessly integrated into a build request service, to implement one-stop, full-technology-stack software research and development services covering the full lifecycle. Operation 408: The server 302 receives the request message from the client 301 or the external system 306, and searches the “node state table” for the corresponding target node based on the information carried in the request message. Operation 409: If the target node exists, send the request message or a build job to the target node, where the build job is generated based on the request message. If there is no target node, go back to operation 405 to perform an operation of building an image file to generate a node. Operation 410: After receiving the request message or the build job sent by the server, the target node executes the product pipeline code in the corresponding BIC to obtain the binary installation package. For example, if the target node is the node 1, the node 1 executes the product pipeline code of “the product A in the build environment 1” after receiving the request message or the build job, to generate an installation package of the product A. In addition, the method further includes: The node 1 stores the installation package of the product A in a product repository. In addition, the quantity of idle nodes in the service cluster is dynamically adjusted. Details are as follows.
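For illustration only, the following Python sketch combines operations 408 and 409: the server looks up an idle target node for the requested product and version, and if none exists it falls back to building an image file and deploying a new node (operation 405) before dispatching the request. The send_to_node and provision_node callables are assumed helpers for delivering the request message and for the image-build and node-deployment step.

IDLE = "0"

def dispatch_build_request(node_state_table: dict, product: str, version: str,
                           send_to_node, provision_node):
    # Operation 408: search the node state table for an idle matching node.
    prefix = f"{product}-{version}-"
    for cache_key, state in node_state_table.items():
        if cache_key.startswith(prefix) and state == IDLE:
            return send_to_node(cache_key)      # operation 409: dispatch to the target node
    # No target node: go back to operation 405 and deploy a new node first.
    new_key = provision_node(product, version)
    node_state_table[new_key] = IDLE
    return send_to_node(new_key)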
Operation 411: The server 302 determines whether the quantity of current idle nodes is within a preset range. For example, when the node 1 executes the build job and changes to the busy state, the server 302 detects whether the quantity of currently idle nodes 1 is within the preset range. Operation 412: If the quantity of the current idle nodes is not within the preset range, increase or decrease the quantity of the nodes of this type, so that the quantity of nodes after the increasing or decreasing is within the preset range. For example, when the quantity of idle nodes used to build WeChat 10.0.1 is less than a preset minimum value, a new idle node is added, so that the quantity of idle nodes after the adding reaches the minimum value in the preset range. Likewise, when the node 1 completes the build job and changes to the idle state, the quantity of the idle nodes 1 may exceed the maximum value in the preset range. Therefore, the quantity of the nodes 1 is decreased appropriately, so that the quantity of the nodes 1 is controlled within the preset range. Likewise, the server 302 controls the quantity of nodes of each type within a range, for example, a preset range [a1, b1] for the quantity of first-type nodes (nodes 307) that build the product A, a preset range [a2, b2] for the quantity of second-type nodes (nodes 308) that build the product B, and a preset range [a3, b3] for the quantity of third-type nodes (nodes 309) that build the product C, so that the quantity of nodes of each type in the resource pool can be flexibly adjusted. This fully utilizes the resource pool and improves product build efficiency. In addition, to improve the efficiency of sending request messages and avoid request message loss, this embodiment further discloses a dynamic build job dispatching method. The method may be applied to a case in which massive request messages for building product installation packages need to be sent. A feature of a cloud service is coping with an unpredictable request volume. When massive build request messages are sent to a server within a short period of time, the massive build request messages can be dispatched in batches to improve build job dispatching efficiency and reduce request message loss. In an embodiment, when the quantity of request messages received by the server 302 is N, where N is the quantity of request messages obtained within a preset time, N≥2, and N is a positive integer, the node state table is searched for a target node corresponding to the product information carried in each request message, to obtain N target nodes. Then, the N request messages are sent to the N corresponding target nodes. The N request messages obtained within the preset time may be stored in a dispatching queue by using a cache technology (for example, a Redis cache). For example, the server creates a first in first out (FIFO) queue in a memory. After finding the corresponding target node based on each request message in the queue, the server sends all request messages in the queue to the corresponding target nodes in the service cluster, and clears the queue, so as to continue to obtain request messages in a next preset time period, search for a target node corresponding to each request, and finally dispatch all these request messages to the corresponding target nodes. Further, the server dispatches all the request messages in the queue within the preset time by using a configuration file. For example, a round robin load balancing algorithm is set in the configuration file.
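For illustration only, the following Python sketch models the batched dispatching described above with an in-memory FIFO queue instead of a Redis-backed one: request messages collected during the preset time window are drained in arrival order, a target node is found for each, and the queue is left empty for the next window. The find_target_node and send_to_node callables are assumed helpers; round robin load balancing across matching nodes is left to the configuration file mentioned above.

from collections import deque

def dispatch_batch(request_queue: deque, find_target_node, send_to_node) -> None:
    # Drain the FIFO queue built up during one preset time window.
    while request_queue:
        product, version = request_queue.popleft()      # first in, first out
        target = find_target_node(product, version)
        if target is not None:
            send_to_node(target, (product, version))

# Example: N = 2 request messages gathered within one preset time window.
pending = deque([("prodA", "0.1.1"), ("prodB", "1.1.2")])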
In an embodiment, it is set that a job dispatching operation is started when a quantity of obtained request messages reaches a quantity limit of messages that can be accommodated by the queue, or the request messages in the queue are dispatched according to a principle of receiving all the request messages in a preset time period. In an embodiment, when services need to be provided for the massive build request messages instantaneously, the server further sets storage space for storing all external request messages, and then establishes a queue in these external request messages and performs job dispatching. In addition, the server configures the round robin algorithm that is updated in real time, this can effectively improve job dispatching efficiency, and can be applied to a multi-type node-based request load balancing scenario. The client configures and generates the BIC, stores the BIC, and reviews, modifies, and executes the BIC to generate the image file of the build environment, to present build environments of different products as code. In addition, a solution is provided to uniformly build a source code input end of the IBI system and an output end of the product installation package. This method makes it easy to deploy a secure, consistent, and compliant build environment and significantly improves a build capability. In addition, it solves problems of high complexity, deep build levels, error-prone delivery, and a low build resource reuse rate and low build resource utilization of complex products in a large-scale build process. In specific application, the method provided in this embodiment may be applied to an intelligent application verification platform inFIG.6. In an embodiment, the platform includes: a user500, an application code repository501, an intelligent code analysis apparatus502, an IBI system503(as shown inFIG.3), a deployment service504, a real device open platform505, and an application market506. Further, the user500obtains application code (a product or source code) of a product from the application code repository501, and sends the application code to the intelligent code analysis apparatus502in the intelligent application verification DevOps platform. The intelligent code analysis apparatus502analyzes security, compatibility, stability, and the like of the application code. The IBI system503provides a secure, consistent, and compliant build environment based on a type of the product corresponding to the application code, generates a product installation package or an application package (APK), and finally sends the product installation package or the application package to the real device open platform505for real device testing and verification. After the verification is passed, the product installation package or the application package is released in the application market506. This technical solution works with the intelligent application verification platform (such as Android Green Alliance or a HiAi platform) to provide a unified DevOps environment for application building, testing, and verification. It aims to optimize a current test and verification process, perform intelligent analysis and detection from application source code based on an intelligent code service, build and generate the product installation package or the application program package by using a standard build platform, and then deploy the product installation package or the application program package to the real device open platform of a terminal for automatic real device testing. 
If the intelligent analysis, build, and real device test and verification are all passed, the package is released to the application market (or released by a user). This method increases application code-level quality assurance, ensures a basic quality standard of the real device verification process, and improves test efficiency and test platform productivity. In addition, many mobile game and application development companies do not want to open product code to the public, even in a closed continuous delivery environment. However, the companies hope that the same intelligent application verification platform (as shown inFIG.6) can be used to manage a development procedure. This ensures that a finally released game or product installation package can successfully pass verification. The technical solution in this embodiment can facilitate delivery and deployment of an entire set of the intelligent application verification DevOps platform to a user data center (the user pays for use), ensure security, compatibility, and stability of the code corresponding to the user in a development process, and meet a requirement of the user. The following describes apparatus embodiments corresponding to the foregoing method embodiments. FIG.7is a schematic diagram of a structure of a communications apparatus according to an embodiment of this application. The apparatus may be the server in the foregoing method embodiments, or may be a network device, or a communications device, or may be a component located in a network device, for example, a chip. Further, the apparatus may implement all functions of the server in the foregoing embodiments. Further, as shown inFIG.7, the apparatus may include a receiving unit701, a processing unit702, and a sending unit703. In addition, the apparatus may further include a storage unit or another unit or module. The receiving unit701is configured to receive a request message. The request message is used to request to provide an installation package of a product required by a user, and the request message carries product information that uniquely identifies the product required by the user. The processing unit702is configured to search, based on the product information in the request message, a node state table for a target node corresponding to the product information. The node state table includes at least one correspondence. Each correspondence is a relationship between one node and one piece of product information. A state of each node in the node state table is idle. The sending unit703is configured to send the request message to the target node, so that the target node builds the corresponding product installation package for the product required by the user. In an embodiment, the product information includes a product name and a version number corresponding to the product name. The processing unit702is configured to: search the node state table for product information with the same product name and the same version number, and use the product information as target product information; and determine the target node based on the target product information and a correspondence of the target product information. In an embodiment, the receiving unit701is further configured to: before receiving the request message, receive a configuration parameter entered by the user. The configuration parameter is used to configure at least one build environment required when the user expects to build a product. 
The processing unit702is further configured to: generate at least one piece of build environment code based on the configuration parameter, and run each piece of build environment code to obtain an image file corresponding to the piece of build environment code; generate at least one node based on the image file, where one node serves a build environment corresponding to one image file, and generate a unique product name and a unique version number in the build environment; establish the at least one correspondence based on each node, and a unique product name and a unique version number generated by the node in the build environment; and generate the node state table based on the at least one correspondence and the state of each node. In an embodiment, a first node is configured to build an installation package of a first product in a first environment, and the target node is one of first nodes. The processing unit702is further configured to: after the sending unit sends the request message to the target node, mark a state of the target node as “busy” in the node state table; detect whether a quantity of idle-state first nodes in the node state table is less than a first threshold; and if the quantity of idle-state first nodes in the node state table is less than the first threshold, configure and increase the quantity of first nodes, so that an increased quantity of first nodes is not less than the first threshold. In an embodiment, the processing unit702is further configured to: when the target node completes a build task for the request message, mark the state of the target node as “idle”, and update the node state table; and if the quantity of idle-state first nodes in the node state table exceeds a second threshold, decrease the quantity of first nodes, so that a decreased quantity of first nodes does not exceed the second threshold. In addition, the processing unit702is further configured to delete a node in an offline state from the node state table. In an embodiment, when the apparatus is used as a container editor, the processing unit702is further configured to detect a state of each node in a resource pool, where the state includes an online state and an offline state; and mark the online state or the offline state for each node. In an embodiment, when the apparatus is configured to generate BIC, the receiving unit701is configured to obtain an external environment parameter. The external environment parameter includes: a programming language type and version, a software development kit SDK and version, and a product source code repository. The processing unit702is configured to generate the BIC based on the external environment parameter, and store the BIC in a BIC code repository. In addition, the BIC includes product pipeline code and running environment code. Further, the processing unit702is further configured to execute the running environment code in the BIC to obtain an image file in the running environment. The image file is used to generate one or more nodes. When executing the product pipeline code in the BIC, the processing unit702needs to first obtain source code corresponding to a product, and generate a binary package of the product after executing the product pipeline code based on the source code corresponding to the product. The binary package may include a binary product installation package or a binary application package. FIG.8is a schematic diagram of a structure of a communications device according to an embodiment of this application. 
The communications device may be the server or the communications apparatus in the foregoing embodiments, or may be a component (for example, a chip) that can be used for the server or the communications apparatus. The communications device may implement functions or operations of the server in the foregoing embodiments. As shown inFIG.8, the communications device may include a transceiver801and a processor802, and may further include a memory803. The memory803may be configured to store code or data. The transceiver801may include components such as a receiver, a transmitter, and an antenna. The communications device may further include more or fewer components, or combine some components, or have different component arrangements. This is not limited in this application. The processor802is a control center of the communications device, and is connected to each part of the entire communications device through various interfaces and lines. The processor802runs or executes a software program or a module stored in the memory803, and invokes data stored in the memory803, to perform various functions of the communications device or process data. The processor802may include an integrated circuit (integrated circuit, IC), for example, may include a single encapsulated IC, or may include a plurality of connected encapsulated ICs that have same or different functions. For example, the processor802may include only a central processing unit (central processing unit, CPU), or may be a combination of a GPU, a digital signal processor (digital signal processor, DSP), and a control chip (for example, a baseband chip) in a transceiver module. In various implementations of this application, the CPU may be a single computing core, or may include a plurality of computing cores. In an embodiment, the processor802includes a processing chip. The processing chip may include one or more random access storage units, and the storage unit may be configured to store instructions or computer programs. The transceiver801is configured to establish a communications channel, so that the communications device is connected to a communications network through the communications channel, to implement communication transmission between the communications device and another device. The transceiver801may be a module that completes receiving and sending functions. For example, the transceiver801may include communications modules such as a wireless local area network (WLAN) module, a Bluetooth module, and a baseband module, and a radio frequency (RF) circuit corresponding to the communications device. The transceiver801is configured to perform communication in a wireless local area network, Bluetooth communication, infrared communication, and/or communication in a cellular communications system, for example, wideband code division multiple access (WCDMA) and/or high speed downlink packet access (HSDPA). The transceiver801is configured to control communication between components in the communications device, and may support direct memory access (direct memory access). In different implementations of this application, transceiver modules in the transceiver801are usually presented in a form of an integrated circuit chip, and may be selectively combined, without requiring that all the transceiver modules and corresponding antenna groups are included. 
For example, the transceiver801may include only a baseband chip, a radio frequency chip, and a corresponding antenna, to provide a communication function in a cellular communications system. The communications apparatus may be connected to a cellular network (or the internet through communication connection, for example, wireless local area network access or WCDMA access, that is established by the transceiver. The memory803may include a volatile memory, for example, a random access memory (RAM), or may include a non-volatile memory, for example, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD). Alternatively, the memory803may include a combination of the foregoing types of memories. The memory may store programs, code, or data, and the processor802in the communications device may implement a function of the communications apparatus by executing the programs or the code. In this embodiment of this application, the processor802and the transceiver801may be separately or coupled to implement all or some of the operations of the node determining method and the node state configuration in the foregoing method embodiments. For example, when the communications device is used as the server in the foregoing embodiment, the transceiver801may receive a request message. The request message is used to request to provide an installation package of a product required by a user, and the request message carries product information that uniquely identifies the product required by the user. The processor802searches, based on the product information in the request message, a node state table for a target node corresponding to the product information. The node state table includes at least one correspondence. Each correspondence is a relationship between one node and one piece of product information. A state of each node in the node state table is idle. Finally, the transceiver801sends the request message to the target node, so that the target node builds the corresponding product installation package for the product required by the user. Further, functions to be implemented by the receiving unit701and the sending unit703inFIG.7may be implemented by the transceiver801in the communications device, or may be implemented by the transceiver801controlled by the processor802, and a function to be implemented by the processing unit702may be implemented by the processor802. In addition, this application further provides a computer storage medium. The computer storage medium may store programs. When the programs are executed, some or all of the operations of the embodiments of the message sending method and the message receiving method provided in this application may be included. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions, for example, switching instructions. When the computer programs are loaded and executed on a computer, all or some of the procedures or functions are generated according to the embodiments of this application. The computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses. 
The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one network node, computer, server, or data center to another website, computer, or server in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium, for example, a floppy disk, a hard disk, or a magnetic tape, an optical medium (for example, a DVD), or a semiconductor medium, for example, a solid-state drive SSD. In the specification, claims, and accompanying drawings of this application, the terms “first”, “second”, and the like are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. It should be understood that the data termed in such a way are interchangeable in appropriate circumstances, so that the embodiments described herein can be implemented in other orders than the order illustrated or described herein. Moreover, the terms “include”, “contain” and any other variants mean to cover the non-exclusive inclusion, for example, a process, method, system, product, or device that includes a list of operations or units is not necessarily limited to those operations or units, but may include other operations or units not expressly listed or inherent to such a process, method, product, or device. A person skilled in the art may clearly understand that, the technologies in the embodiments of this application may be implemented by software in addition to a necessary general hardware platform. Based on such an understanding, the technical solutions of the embodiments of this application essentially or the part contributing to the conventional technology may be implemented in a form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disc and the like, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device and the like) to perform the methods described in the embodiments or some parts of the embodiments of the present application. For same or similar parts in the embodiments in this specification, refer to each other. Especially, a network device/node or an apparatus device is basically similar to a method embodiment, and therefore is described briefly. For related parts, refer to the descriptions of the method embodiments. The foregoing descriptions are implementations of this application, but are not intended to limit the protection scope of this application.
58,398
11861342
Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION FIGS.1-4are diagrams showing an example of a system100for facilitating deployment of computing environments in cloud computing systems. The system100includes a computer system110of a software provider, a cloud computing platform120, and a computing device130of an administrator102from a customer of the software provider. The computer system110, the cloud computing platform120, and the computing device130all communicate over a communication network140, such as the Internet. In the example, the software provider, the operator of the cloud computing platform120, and the customer (e.g., Company A) are all third parties with respect to each other.FIGS.1-4show a series of steps or stages labelled (A) to (M), which illustrate various operations and the flow of data in the system100. The example discussed inFIGS.1-4uses examples that make use of various tools and frameworks such as Docker, Kubernetes, Helm, and others. These are simply examples of some of the many container formats and container management tools that can be used. For example, instead of using Docker containers, other types or formats of containers can be created, e.g., for LXC, Windows Containers, rkt, runC, and others. Rather than using Kubernetes, other container orchestration tools and container-as-a-service (CaaS) providers, can be used, such as Amazon Web Services (AWS) Fargate, Microsoft Azure Container Instances, Google Cloud Run, Amazon Elastic Kubernetes Service (EKS), Openshift Container Platform, Rancher, and so on. Similarly, instead of using Helm, other package mangers can be used, including Rancher, Ansible, Spring Cloud, Terraform, Kustomize, and others. Each ofFIGS.1-4shows a different stage or phase in an example process of deploying computing environments into an account of the cloud computing platform120. As an overview,FIG.1shows an initial stage in which the software provider makes software and deployment code available in a repository111. The customer retrieves, and stores in the cloud computing account150, a deployment package112that can be invoked to initiate the deployment process.FIG.2shows operations performed when the initial code or script in the deployment package112is invoked. For example, invoking the deployment package112can trigger the retrieval of various elements from the repository111into the cloud account150, as well as creating and running software modules to manage and create server environments.FIG.3shows additional deployment operations that can be performed, triggered by the deployment package112and/or responsive to instructions of the administrator102for the cloud account150. These operations can create a cluster160of processing nodes in the account150, as well as create one or more environments170a-170brunning in the cluster160. Using the deployment tools and API already established in the account150, as well as the container images and configuration data retrieved earlier, the administrator102can create, run, and manage server environments, with desired combinations of containers, in the account150without any communication with the software provider's system110.FIG.4shows the deployed environments170a-170bin use, providing service to various client devices190a-190cover the network140. 
In further detail, referring toFIG.1, stage (A) shows the software provider's computer system110hosting a repository111of objects that can be used to create deployment infrastructure as well as deploy server environments. The repository111can be publicly accessible so that customers can access contents over the network140, but the repository can also be access-controlled to limit access to authorized parties or accounts and to limit which contents are available to different parties. As an initial set-up step, the software provider builds, releases, tests, and deploys container images113and related configuration data114into a registry, before finally making the items available in the repository. When a software developer completes code and merges it into the code repository, the computer system110can trigger an automated job (e.g., including a “docker build” command) to build a container image113based on an underlying Docker file. The computer system110can run various tests on the generated container images113as well. Unit tests and other automated tests can be triggered to ensure that each container image113can be started and run without errors, and that the desired functionality (e.g., services, APIs, etc.) of the container is provided with expected performance characteristics. Once the tests validate that the container operates properly, it is entered into an internal (e.g., private) container registry. Along with the container images113, the computer system110can generate configuration data114. The configuration data114can include package information for installing and configuring one or more container images113, such as information in a Helm chart. The packaging information can include a collection of files that can be arranged in a folder, directory tree, or other form of archive. The configuration data can include configuration data in YAML or JSON files, lists of software and data dependencies, default configuration values, templates for generating manifest files, version control information (e.g., specifying which versions of software or containers are supported, or which versions of supporting software are required), scope and namespace information, and any other information needed for installing or configuring an instance of a container or of a collection of containers. The package information provides the metadata and settings to ensure compatibility and proper configuration, so that Helm or another similar package manager can automatically install and run containers from the container images113. The configuration data114, like the container images114, are run through various automated tests to ensure that they function properly and can install the corresponding containers and run without errors. The configuration data114is entered into an internal (e.g., private) registry. The computer system110can also run a test deployment of the container images113using the configuration data114. For example, the system110can deploy the container images113as containers in cluster of processing nodes (e.g., a Kubernetes cluster) in a target cloud computing platform. The testing can be performed for multiple different cloud computing platforms, to validate the function of the container images113and configuration data114for multiple different cloud computing platforms (e.g., Amazon Web Services (AWS), Microsoft Azure, Google Cloud, etc.). 
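As a hedged illustration of the automated build-and-test job described above, the following sketch drives the Docker CLI to build an image, run a smoke test, and push the validated image to an internal registry. The registry address and the test command are placeholders, not details from the described system.

```python
import subprocess

def build_and_validate(image_tag, dockerfile_dir):
    """Sketch of an automated build-and-test job, assuming the Docker CLI
    is installed; the smoke-test script and registry URL are placeholders."""
    # Build the container image from the underlying Dockerfile.
    subprocess.run(["docker", "build", "-t", image_tag, dockerfile_dir], check=True)

    # Start the container and run a basic smoke test to confirm it starts
    # and provides its services without errors.
    subprocess.run(["docker", "run", "--rm", image_tag, "/app/run_smoke_tests.sh"], check=True)

    # Only after validation is the image tagged and pushed to the internal registry.
    registry_tag = f"registry.internal.example/{image_tag}"
    subprocess.run(["docker", "tag", image_tag, registry_tag], check=True)
    subprocess.run(["docker", "push", registry_tag], check=True)

if __name__ == "__main__":
    build_and_validate("web-server:1.0.0", "./containers/web-server")
```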
Once installed and run, various automated tests certify proper function of the software, including security scanning to test for security vulnerabilities in the containers. Once the release is certified, the certified versions of the container images113and configuration data114are added to the repository111, which is accessible over the network140by customers. The repository111also stores other types of data used in the deployment process. This includes deployment packages112, which are configured to be invoked from within a customer's cloud computing account150to start the automated process of building the deployment architecture in the account150. Different deployment packages112can be provided for different cloud computing providers, to account for differences in APIs, command syntax, communication protocols, programming tools (e.g., compilers, etc.), and other features. Each deployment package112represents the initial set of code, scripts, or other content that can be invoked from within a cloud computing account150to start building the deployment architecture. As a result, the deployment package112can reference container images113for the deployment management modules, corresponding configuration data114, as well as other data that specifies the sequences of operations needed. In addition to the deployment packages112, the repository111also stores other automation data115, which can include scripts for various tasks used in the process of establishing the deployment management functionality in the account150and/or for deploying server environments. Different sets of automation data115can be provided for each of various different cloud computing platforms. The deployment packages112and automation data115can also be tested before being made available for use by customers. The deployment package112can specify, or can link to or otherwise reference, the sets of different files (e.g., container images113, configuration data114files, automation data115files, etc.) that are needed, both for enabling the deployment infrastructure as well as the server environments for serving client devices. As will be discussed with respect toFIG.2, invoking the deployment package112can cause the cloud computing account150to automatically retrieve the container images113, configuration data114, and automation data115over the network140and store them in the cloud computing account150. In stage (B), the computer system110grants permission for the customer (e.g., Company A in the example) to access the deployment information in the repository111. For example, after the customer agrees to a service agreement, terms of use, or other agreements, the computer system110can grant authorization for an account of the customer to be able to browse and download appropriate items needed to access the deployment tools and server environment information for appropriate versions of the software provided by the provider. The computer system110may provide access in various other ways. For example, the computer system110may generate and provide a universal resource identifier (URI), universal resource locator (URL), or other reference to the deployment package112, in addition to updating permissions so a request for the package112and other data will be granted by the system110. 
In some implementations, the software provider has provided a user interface103in a client-side application that can run on the computing device130, or in a web page or web application that runs in a browser, which also can be updated to show or grant permission to access the contents of the repository111. In stage (C), the administrator102obtains a deployment package112from the repository111that is appropriate for the cloud computing platform120. The administrator saves the deployment package112into the cloud computing account150. In some implementations, the deployment package112is downloaded directly from the repository into the cloud computing account150. In other implementations, the deployment package112can be provided through one or more intermediary devices, such as saved to the client computing device130and then uploaded into the cloud computing account150. In some implementations, a user interface103of an application or web page provides interactive controls to select, download, and store the deployment package112into the cloud computing account150. For example, the software provider system110can provide a link or landing page created for the customer, which can include a unique URL for the customer to download the deployment package112. The deployment package112can be one that is generated for, or selected from among many options, that is applicable for the customer and the cloud computing account150. For example, multiple different deployment packages112can be stored, each for different software products, for different versions of a software product (e.g., different build versions of the software), for different combinations of features, and so on. As a result, the different deployment packages112may cause different sets of containers, or different versions of the containers, to be downloaded and used. Similarly, different deployment packages112can be stored and configured for different cloud computing platforms, to maximize compatibility and efficiency in running with the different APIs and infrastructure of each cloud computing platform. The computer system110can store information about each customer, indicating the software product and version that the customer has requested or paid for, as well as the target cloud computing platform (e.g., Amazon AWS, Microsoft Azure, Google Cloud, etc.) the customer intends to deploy to. With this information, the computer system110can provide a deployment package112that is appropriate for or is customized for the particular product, product version, and target cloud computing platform. In some implementations, the computer system110can provide user interface data with links or options for multiple different deployment scenarios (e.g., different combinations of product, product version, and target platform) so that the administrator102can select the option that best fits the current situation. In some implementations, the computer system110hosts the repository111as a publicly accessible registry, although access control may still be applied to various folders or files. In some implementations, the repository111is stored in cloud computing platform, such as a file system or data storage service provided by the cloud computing platform120. 
In this situation, to provide the deployment package112to the customer, the computer system110can generate or select the deployment package112for the customer, modify the condition of the cloud-computing storage to enable the customer to access the deployment package112, and then generate and provide a URL to the customer (e.g., through e-mail, a web page, a web application interface, an interface of a native application on the device130, etc.) for the deployment package112as stored in the cloud-computing storage. As a result, when the administrator uses the URL to retrieve the deployment package112, it can be done simply as a transfer, within the cloud computing platform120, from the software provider's account or data storage into the account150of the customer. Referring toFIG.2, stages (D) through (H) show steps to enable various deployment tools and deployment data into the cloud computing account150. In stage (D), the administrator102invokes the deployment package112that is stored within the cloud computing account150. The deployment package112can includes code that can be executed or interpreted to start various processes that create deployment tools in the account150. For example, the deployment package can include a script that can be run to execute various tasks. This can include compiling code, building software objects, and installing generated applications or modules. For example, for the AWS platform, a script can be generated using the AWS CodeBuild tool to compile and build code to integrate deployment tools into the account150. One of the actions triggered by invocation of the deployment package112is to communicate with the computing system110over the network140to retrieve the additional software and data needed to create deployment tools and deploy server environments. Because the account150is used to pull in the needed objects, the process fulfills security policies that may prevent granting permissions to the account or accepting transfers initiated by third parties. In this case, the deployment package112requests the container images113, configuration data114, and automation data115needed for deployment in the cloud computing platform120. In stage (E), the cloud computing account150receives the container images113, which can be for both deployment management tools and for server environments to be deployed. In stage (F), the cloud computing account150receives the configuration data114(e.g., Helm charts or other packaging data). In stage (G), the cloud computing account150receives the automation data115, including scripts for creating various aspects of the deployment tools. The downloaded data can be stored in a local repository151within or associated with the cloud computing account150. In some implementations, a repository151can be maintained by the customer to service multiple cloud computing accounts. At this point in the process, the cloud computing account150contains all of the software and configuration data needed to create the environment deployment infrastructure (e.g., tools for managing and deploying environments) as well as the software to be run in the server environments being deployed. No further communication with the computing system110is needed for setup and deployment, although upgraded versions of the containers and the deployment infrastructure can be made available and downloaded from time to time to update those in the account150. 
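The following is a minimal sketch of the pull-based retrieval step described above, in which the account fetches the container images, configuration data, and automation data into a local repository. The repository URL, file names, and directory layout are invented for illustration.

```python
import pathlib
import urllib.request

PROVIDER_REPO = "https://repo.provider.example"   # assumed URL, for illustration only
LOCAL_REPO = pathlib.Path("/mnt/account-repo")

MANIFEST = {
    "container_images": ["web-server-1.0.0.tar", "app-server-1.0.0.tar"],
    "configuration_data": ["environment-chart.tgz"],
    "automation_data": ["create_cluster.sh", "install_controller.sh"],
}

def pull_artifacts():
    """Pull each referenced artifact into a repository local to the account.

    Because the account pulls the artifacts itself, no inbound permissions
    have to be granted to the software provider.
    """
    for category, files in MANIFEST.items():
        target_dir = LOCAL_REPO / category
        target_dir.mkdir(parents=True, exist_ok=True)
        for name in files:
            url = f"{PROVIDER_REPO}/{category}/{name}"
            urllib.request.urlretrieve(url, str(target_dir / name))

if __name__ == "__main__":
    pull_artifacts()
```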
In step (H), the deployment package112can initiate a sequence of operations to create and run various software modules, such as a deployment orchestrator152, a deployment controller153, and a load balancer154. These modules can be instantiated from container images113and their associated configuration data114(e.g., package information). The modules to create and the operations to perform can be based on tasks specified in the deployment package112or through scripts in the automation data115that are invoked by the deployment package112. For example, executing a script in the deployment package112may (i) download the automation data115for the cloud computing platform120, and (ii) execute additional scripts in the automation data115to install, configure, and run containers and other software to provide deployment tools. As illustrated in further steps below, providing the deployment orchestrator152can be much more advantageous than simply providing an instance of an environment for the customer to use. For example, the customer's cloud computing account150does not merely gain a single instance of an environment, but obtains the deployment infrastructure to create and manage clusters, along with management elements within each cluster to be able to create and manage various environments. The deployment orchestrator152, receiving commands and instructions via the API provided by the deployment controller153, can create multiple clusters, create multiple environments within each cluster, as well as instantiate the functionality for environment monitoring and reporting within each cluster. In the example, the deployment orchestrator152is a module that manages clusters of processing nodes, such as a Kubernetes cluster on which server environments can run. For example, the deployment orchestrator152can enable functions such as cluster creation and management, cluster upgrade, configuration and setup for the deployment tools (e.g., for the deployment orchestrator152and the deployment controller153), and so on. The deployment controller153can provide an API, such as a representational state transfer (REST) API, for deploying and managing environments within a cluster, and for managing the cluster. The deployment controller153can act as an API gateway, and can include a software stack to communicate with the deployment orchestrator152and/or a cluster of processing nodes to support various API commands. For example, the deployment controller153supports a variety of environment management functions, including creating an environment, managing or configuring an environment, deploying or making an environment accessible to clients, starting and stopping an environment, scaling an environment up or down (e.g., adding or removing allocations of computing resources such as CPUs, memory, storage, etc.), scaling an environment out and in (e.g., increasing or decreasing replica count), upgrading the software of an environment, initiating backup or restore of environment data, and deleting an environment. The API can also support actions to manage a cluster in cooperation with the deployment orchestrator, such as commands to create a cluster (e.g., with a specified region, instance type, cluster size, and other parameters), or to modify a cluster (e.g., to add or remove processing nodes, to change the instance type, to change auto-scaling settings for adjusting allocation of computing resources, etc.).
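To make the controller's role concrete, the following hedged sketch shows how administrator tooling might call a REST API of the kind described above to create a cluster, create an environment with selected services, and scale the environment out. The base URL, endpoint paths, and payload fields are assumptions; the description does not define a concrete wire format.

```python
import json
import urllib.request

API_BASE = "https://deploy-controller.account.example/api/v1"  # assumed endpoint

def post(path, payload):
    """Send a JSON POST request to the assumed deployment controller API."""
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Create a cluster with region, instance type, and size parameters.
    cluster = post("/clusters", {
        "region": "us-east-1",
        "instance_type": "m5.large",
        "node_count": 3,
    })

    # Create an environment in that cluster with a chosen set of services.
    environment = post(f"/clusters/{cluster['id']}/environments", {
        "services": ["document-library", "web-server", "application-server"],
    })

    # Scale the environment out by increasing the replica count.
    post(f"/environments/{environment['id']}/scale", {"replicas": 4})
```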
The load balancer154can route and manage requests through the API, providing a module that insulates the deployment controller153from direct outside requests and also helping to balance load when multiple deployment controller153instances or multiple clusters are used. At the end of the operations shown inFIG.2, the cloud computing account150is running the deployment infrastructure needed to create and deploy server environments, and also stores, in the repository151, all of the software and configuration data for those server environments. In some implementations, the deployment controller153can act as an API server to serve API requests specific to clusters and environments within these clusters. The deployment controller153can provide an entry point for instructions and requests to create and manage environments. The implementation of the deployment controller153can be cloud-agnostic or cloud-specific. In either scenario, the deployment controller153can perform functions such as serving API endpoints, triggering cluster or infrastructure creation, providing wait logic for cluster or infrastructure creation, storing cluster or infrastructure information, storing environment information within a cluster or infrastructure, supporting operations (e.g., create, read, update, delete) on cluster or infrastructure, providing authentication and authorization functionality, and supporting future updates or changes to the deployment infrastructure itself (e.g., to the deployment orchestrator152and deployment controller153itself). A cloud-agnostic approach (e.g., an application that can be a deployment controller153for any of multiple cloud computing platforms) can involve creating an application to serve as the deployment controller153. The application can be hosted on an instance. Since there can be many requests coming to the application, and because allowing direct access to the instance could be a security risk, the architecture can place a load balancer in front of the application. This way, the application for the deployment controller153will be able to serve API endpoints and trigger cluster creation. Cluster creation is a cloud-specific process, in the sense that the operations and settings vary from one cloud computing platform to another. As a result, the application can include packages, libraries, scripts, or modules for different cloud computing platforms. Since cluster creation takes time, the application can include multiple threads and/or functionality to perform asynchronous operations, e.g., wait logic for cluster and infrastructure creation. In case the application crashes and needs to start again from its previous state, the application can be configured to store and retrieve information from a database. In general, cluster-specific and environment-specific information also needs to be stored, and can be stored in the same database. The application can be configured to perform operations on the cluster and/or specific environment (e.g., create, read, update, delete, etc.). The application also can be protected against unauthenticated and unauthorized access. The application can be maintained as future updates and enhancements are deployed, so the application can include an update or upgrade mechanism. Even with an cloud-agnostic approach, with the same application used for multiple different cloud computing platforms, some operations will be specific to the cloud computing platform used. 
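The following sketch illustrates, under stated assumptions, the cloud-agnostic pattern described above: detect the hosting platform, dispatch to a platform-specific cluster-creation routine, and poll with wait logic while persisting state so the controller can resume after a crash. All function names and return values are illustrative.

```python
import time

def detect_platform():
    # A real implementation would query the platform's metadata endpoints;
    # this placeholder simply returns a fixed value for the demo.
    return "aws"

def create_cluster(platform, spec):
    # Cluster creation is cloud-specific, so dispatch per platform.
    creators = {
        "aws":   lambda s: {"id": "eks-cluster-1", "status": "CREATING"},
        "azure": lambda s: {"id": "aks-cluster-1", "status": "CREATING"},
        "gcp":   lambda s: {"id": "gke-cluster-1", "status": "CREATING"},
    }
    return creators[platform](spec)

def wait_for_cluster(get_status, cluster_id, poll_seconds=30, save_state=None):
    """Wait logic: poll asynchronously created infrastructure until ready."""
    while True:
        status = get_status(cluster_id)
        if save_state:
            save_state(cluster_id, status)   # e.g., persist to the controller database
        if status == "ACTIVE":
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    platform = detect_platform()
    cluster = create_cluster(platform, {"node_count": 3})
    # Fake status source for the demo: reports ACTIVE immediately.
    wait_for_cluster(lambda _id: "ACTIVE", cluster["id"], poll_seconds=0)
    print(f"{cluster['id']} ready on {platform}")
```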
The application would be hosted on a virtual machine, container, computing instance, or other computing elements of the desired platform, and a load balancer and database cluster for management data would be created. The application will be able to identify which cloud computing platform it is hosted on through appropriate interaction with APIs of the cloud computing platform. The application also includes logic to execute the cluster management process specific to the cluster characteristics of the cloud computing provider identified. This approach gives a unified application that can be used on any cloud service but still manages the specific details required for hosting and managing clusters and environments on each specific cloud computing platform. As another example, a cloud-specific approach can customize or target the deployment controller153for a specific cloud computing platform. For example, the individual pieces of the deployment controller153can be generated or tailored for a specific target cloud computing platform. For example, for Amazon AWS, the deployment controller153can be implemented using Amazon API Gateway APIs to serve API endpoints and trigger cluster creation. Various functionality of the deployment controller153can also be created using CloudFormation templates. The deployment controller153can handle cluster creation with workflows and procedures (e.g., potentially defined using AWS Step Functions), which can be directly called from an ApiGateway API using the integration request. As another example, to provide wait logic for cluster or infrastructure creation, the application can use a workflow to issue an event or trigger a function (e.g., optionally using AWS Lambda platform) as needed. In case the application crashes and needs to start again from the earlier state, and also to store some cluster or environment-specific information an Amazon DynamoDB can be used. The database can be created using a CloudFormation template. The operations on the cluster and/or environments can be performed using a Lambda function which can access the DynamoDB for read operations and can call specific clusters for other operations if required. In order to protect against unauthenticated and unauthorized access, new API keys can be created, and the API Gateway can be configured to allow access via those specific API keys. For future updates and enhancements to be deployed, the system can provide an updated CloudFormation template to the customer and they can update the Stack as they see fit. This example shows how some of the Amazon AWS tools can be integrated into the function of the deployment controller153. Leveraging the tools natively provided by a specific cloud computing platform avoids the need for cloud-specific code and provides the deployment controller153as a lean application (e.g., limiting the resource usage, dependencies, and other overhead). The system can be customized with cloud-platform-specific elements for each of various other cloud computing platforms. For example, the deployment controller153—as well as other components such as the deployment package112, the deployment orchestrator152, automation data115, configuration data114, etc.—can be customized to use the APIs, tools, protocols, and other features that a specific cloud computing platform provides. FIG.3shows actions in the system100to create a cluster160of processing nodes and to deploy server environments170a-170bin the cluster160. 
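As a small, hedged example of leaning on platform-native services, the following sketch shows an AWS Lambda handler that serves a read operation by looking up a cluster record in DynamoDB, as could sit behind an API Gateway endpoint. The table name, key schema, and event shape are assumptions for illustration.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("deployment-clusters")   # assumed table name

def handler(event, context):
    """Read one cluster record; intended to run behind API Gateway."""
    cluster_id = event["pathParameters"]["clusterId"]
    result = table.get_item(Key={"cluster_id": cluster_id})
    item = result.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```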
In stage (I), the administrator102uses the computing device130to interact with the deployment tools in the cloud computing account150. The administrator102sends instructions131that leverage the API provided by the deployment controller153. These instructions131can be sent using direct commands, through scripts or a command-line interface, or can be sent as a result of interaction with a graphical user interface. For example, the software provider can provide a native application, web application, or web page that includes functionality to generate and issue commands for the API, to create and manage clusters as well as server environments. In the illustrated example, the instructions131include API commands to create a new cluster of processing nodes and to create two environments in the cluster. In stage (J), instructions131to create a new cluster are received, interpreted, and communicated to the deployment orchestrator152, which creates a new cluster160of processing nodes, e.g., a Kubernetes cluster. The instructions131can include, as API call payload data, various settings or parameter values such as geographical region settings, an instance type, cluster size, and so on, and the deployment orchestrator152creates the cluster160according to the specified parameters. In some implementations, the instructions131trigger a cluster deployment workflow that includes several steps, including (1) deploying a cluster160of processing nodes, including allocating appropriate computing resources to the cluster160and providing status updates for the cluster160, (2) creating a management namespace161and deploying an environment configuration module162in the cluster160, along with an environment monitor164and reporting and alerting functionality166, (3) provisioning a file system168that can optionally act as a shared volume for multiple environments, and (4) provisioning relational database services169, e.g., a database engine configured to process structured query language (SQL) statements and connect to existing databases. The cluster creation workflow can also involve deploying the cluster160in a private subnet, along with worker nodes and resource auto-scaling groups. Once the cluster160is created, the customer can trigger the creation of environments. In stage (K), after the cluster160is running and available, additional instructions131to create server environments in the cluster160are received. The instructions131can use API calls for the deployment controller API to deploy a particular server environment in a specific cluster160and for specific geographic region settings. The API call can specify which services are desired for the new environment. These services can correspond to different containers from different container images113, for example, a document library, web server, application server, telemetry module, platform analytics, and so on. The deployment controller153provides the instructions and related parameter values to the environment configuration module162, which acts as an environment orchestrator to create a namespace for each new environment, and to install the appropriate packages and containers needed. For example, after creating an environment namespace for a new environment170a, the environment configuration module162can access the configuration data114for the environment configuration selected (e.g., the combination of containers needed to provide the services specified through the deployment controller API).
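For illustration, the cluster deployment workflow enumerated above can be pictured as an ordered sequence of steps; the stub functions below only print what each step would do, and their names are not taken from the described implementation.

```python
def deploy_processing_nodes(spec):
    print(f"allocating {spec['node_count']} nodes in {spec['region']}")

def create_management_namespace_and_tools():
    print("creating management namespace; deploying environment configuration "
          "module, environment monitor, and reporting/alerting")

def provision_shared_file_system():
    print("provisioning file system usable as a shared volume across environments")

def provision_relational_database():
    print("provisioning relational database service for SQL workloads")

def cluster_deployment_workflow(spec):
    # Steps (1) through (4) from the description, run in order.
    deploy_processing_nodes(spec)
    create_management_namespace_and_tools()
    provision_shared_file_system()
    provision_relational_database()

cluster_deployment_workflow({"region": "us-east-1", "node_count": 3})
```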
The environment configuration module162identifies the container images113needed to provide the containers of the environment170a, and the configuration data114can include package information as discussed above (e.g., indicating dependencies, default settings, and so on). In some implementations, the dependencies and parameters are specified in a Helm chart or manifest file that specifies the elements and actions needed to deploy the respective containers needed for the environment170a. In the example, the environment170ais deployed with a set of containers171arun from corresponding container images113. The environment170bis deployed with a set of containers171b. The environment deployment process can include deploying an environment pod, which is a single instance of a running environment. The environment pod can include one or more containers. In some implementations, when multiple containers are included the containers are managed as a single entity and they share the resources of the pod. In some implementations, one or more environments170a,170bmay be generated based on configuration data from another server environment. For example, settings may be replicated or derived from an archive of environment data for another cloud computing environment or from an on-premises environment. In this process, the environment configuration module162may derive configuration settings from the source environment or an archive (e.g., backup data for an environment), and then configure the new environment with those configuration settings. In this process, the deployment of the new environment does not provide or disclose the environment data or settings to any external party. If the source environment data is already in the account150, then no transfer of the environment data is required either. During environment deployment and afterward, the environment monitor164checks the state of each environment170a-170band can provide this information to other deployment elements in the account150and/or to the computing device130. The reporting and alerting module166enables the administrator102to specify various conditions or triggers for generating log data for logs167or providing alerts, which can be provided through any of various channels, e.g., e-mails, mobile device text messages, online messaging platforms, messages through an management user interface103, and so on. FIG.4shows users accessing the deployed environments170a,170bover the network140using various client devices190a-190c. In stage (L), the client devices190a-190csend requests that are routed to and handled by the environments170a,170b. For example, the interactions may be to serve web pages, provide application content, serve documents, generate reports and visualizations, generate information for dashboards and other interfaces, run queries, and so on. Traffic from the client devices190a-190ccan be run through a load balancer192that is used to direct requests to the appropriate environments170a,170bcorresponding to the requests, and to manage traffic flow to manage load to appropriate levels. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. For example, various forms of the flows shown above may be used, with steps re-ordered, added, or removed. 
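The following minimal sketch, under assumed catalog contents, shows how an environment configuration module might resolve requested services into container images and merged settings before installing them into the environment's namespace.

```python
# Illustrative service catalog; image names and defaults are invented.
SERVICE_CATALOG = {
    "document-library":   {"image": "registry.local/doc-library:1.0.0",
                           "defaults": {"replicas": 1, "storage": "20Gi"}},
    "web-server":         {"image": "registry.local/web-server:1.0.0",
                           "defaults": {"replicas": 2}},
    "application-server": {"image": "registry.local/app-server:1.0.0",
                           "defaults": {"replicas": 2}},
}

def resolve_environment(requested_services, overrides=None):
    """Build a deployable specification for one environment namespace."""
    overrides = overrides or {}
    spec = {}
    for service in requested_services:
        entry = SERVICE_CATALOG[service]
        settings = dict(entry["defaults"])
        settings.update(overrides.get(service, {}))   # caller-supplied values win
        spec[service] = {"image": entry["image"], "settings": settings}
    return spec

print(resolve_environment(["web-server", "document-library"],
                          overrides={"web-server": {"replicas": 4}}))
```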
Embodiments of the invention and all of the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the invention can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. 
Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a tablet computer, a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the invention can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. Embodiments of the invention can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the invention, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. In each instance where an HTML file is mentioned, other file types or formats may be substituted. For instance, an HTML file may be replaced by an XML, JSON, plain text, or other types of files. Moreover, where a table or hash table is mentioned, other data structures (such as spreadsheets, relational databases, or structured files) may be used. Particular embodiments of the invention have been described. Other embodiments are within the scope of the following claims. For example, the steps recited in the claims can be performed in a different order and still achieve desirable results.
42,453
11861343
DETAILED DESCRIPTION Examples described herein relate to updating computer program(s) installed on one or more programmable devices of a computer network using a distributed ledger that is available to multiple devices of the computer network. As such, one or more of the examples described herein provides an alternative to the central communication model of updating computer programs (i.e., the client/server model). Consequently, at least one of the examples described herein is directed to improving computer functionality. In particular, at least one of the examples described herein can assist with one or more of the following: (i) minimizing or eliminating faulty updates to devices of a computer network that have the potential to disable one or more devices of the computer network; (ii) minimizing or eliminating risks to the operational integrity of a computer network caused by faulty updates; (iii) minimizing or eliminating the use of servers as the only update entities because such servers are potential bottlenecks and failure points that can disrupt the functioning of an entire computer network; (iv) minimizing or eliminating vulnerabilities caused by security compromises (e.g., man-in-the-middle attacks, etc.) because the data associated with the multiple devices of a computer network does not have to be communicated using a centralized communication model; (v) improving interoperability across a highly heterogeneous group of devices that are serviced by many vendors or 3rdparty updating services by minimizing or eliminating defects (e.g., bugs, etc.) in a data modeling system (DMS) or abstracted data associated with the DMS; and (vi) enabling 3rdparty updating services to deploy and install updates on one or more orphaned devices in a computer network, which can assist with minimizing failures in the computer network. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the examples described herein. It will be apparent, however, to one skilled in the art that the examples described herein may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the examples described herein. References to numbers without subscripts or suffixes are understood to reference all instances of subscripts and suffixes corresponding to the referenced number. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter in the examples described herein. As such, resort to the claims is necessary to determine the inventive subject matter in the examples described herein. Reference in the specification to “one example,” “an example,” “another example,” or their variations means that a particular feature, structure, or characteristic described in connection with the examples is included in at least one of the examples described herein, and multiple references to “one example,” “an example,” “another example,” or their variations should not be understood as necessarily all referring to the same example. As used herein, the term “programmable device” and its variations refer to a physical object that includes electronic components configured to receive, transmit, and/or process data information.
For one example, one or more of the electronic components may be embedded within the physical object, such as in wearable devices and mobile devices (e.g., self-driving vehicles). For one example, the device may also include actuators, motors, control functions, sensors, and/or other components to perform one or more tasks without human intervention, such as drones, self-driving vehicles, and/or automated transporters. The programmable device can refer to a computing device, such as (but not limited to) a mobile computing device, a laptop computer, a wearable computing device, a network device, an internet of things (IoT) device, a cloud computing device, a vehicle, a smart lock, etc. As used herein, the terms “program,” “computer program,” and their variations refer to one or more computer instructions that are executed by a programmable device to perform a task. Examples include, but are not limited to, software and firmware. As used herein, the terms “software update,” “update,” and their variations refer to modification and/or deletion of one or more computer programs installed on a programmable device. An update includes, but is not limited to, a major version upgrade, a minor version upgrade, a patch, a hotfix, a maintenance release, and a service pack. As such, an update includes moving from a version of a computer program to another version, as well as moving from one state of a version of a computer program to another state of the same version of the computer program. Updates can be used for fixing security vulnerabilities and other bugs, improving the device's functionality by adding new features, improving power consumption and performance, etc. Updates may be viewed as important features in the lifecycles of programmable devices. As used herein, the term “computer network” and its variations refer to a collection of interconnected programmable devices that can exchange data with each other. One example of a computer network is a peer-to-peer network. In a computer network, interconnected programmable devices exchange data with each other using a communication mechanism. The connections between interconnected programmable devices are established using either wired or wireless communication mechanisms. Examples of communication mechanisms include, but are not limited to, any type of data network such as a local area network (LAN), a wide area network (WAN) such as the Internet, a fiber network, a storage network, or a combination thereof, wired or wireless. The communication mechanisms also include networking hardware (e.g., switches, gateways, routers, network bridges, modems, wireless access points, networking cables, line drivers, hubs, repeaters, etc.). As used herein, the term “distributed ledger” and its variations refer to a database that is available to multiple devices of a computer network. One key feature of a distributed ledger is that there is no central data store where a master copy of the distributed ledger is maintained. Instead, the distributed ledger is stored in many different data stores, and a consensus protocol ensures that each copy of the ledger is identical to every other copy of the distributed ledger. A distributed ledger can, for example, be based on blockchain technology, which is known in the art of cryptography and cryptocurrencies (e.g., Bitcoin, Ethereum, etc.).
The distributed ledger may provide a publically and/or non-publically verifiable ledger used for updating software in one or more programmable devices of a computer network. Changes in the distributed ledger (e.g., software updates, etc.) represent updates to one or more computer programs installed on one or more programmable devices of a computer network. These changes may be added to and/or recorded in the distributed ledger. For one example, multiple programmable devices of a computer network are required to validate updates, add them to their copy of the distributed ledger, and broadcast their updated distributed ledger to the entire computer network. Each of the programmable devices having the distributed ledger may validate updates according to a validation protocol. For one example, the validation protocol defines a process by which devices of the computer network agree on changes and/or additions to the distributed ledger. For one example, the validation protocol may include the proof-of-work protocol implemented by Bitcoin or a public consensus protocol. For another example, the validation protocol may include a private and/or custom validation protocol. The distributed ledger enables devices in a computer network to agree via the verification protocol on one or more changes and/or additions to the distributed ledger (e.g., to include updates, to delete updates, to reject updates, etc.). FIG.1is a block diagram illustrating a computer network100comprised of interconnected programmable devices102A-N according to one example. As shown, the computer network100includes multiple devices102A-N, multiple update entities104A-N, and one or more communication mechanisms105. Each of these elements of the computer network100is described in further detail below. Each of the devices102A-N can be an internet of things (IoT) device, a mobile computing device, or a cloud computing device. Also, each of the devices102A-N can include electronic components130A-N. Examples of the components130A-N include: processing unit(s) (such as microprocessors, co-processors, other types of integrated circuits (ICs), etc.); corresponding memory; and/or other related circuitry. For one example, each of the devices102A-N includes a corresponding one of the distributed ledger logic/modules101A-N, which implements a distributed ledger103. The ledger103is used for updating one or more computer programs installed on one or more of the devices102A-N. For one example, the distributed ledger103, which is distributed across at least two of the devices102A-N, is used to avoid one or more shortcomings of a central communication technique used for updating computer programs (i.e., the server/client model). Although not shown inFIG.1, for one example, the distributed ledger103is replicated on and available to the devices102A-N and the update entities104A-N. Thus, for this example, each of the update entities104A-N includes a corresponding distributed ledger logic/module that is similar to the distributed ledger logic/modules101A-N described in connection withFIGS.1-6throughout this document. Each of the distributed ledger logic/modules101A-N can be implemented as at least one of hardware (e.g., electronic circuitry of the processing unit(s), dedicated logic, etc.), software (e.g., one or more instructions associated with a computer program executed by the processing unit(s), software run on a general-purpose computer system or a dedicated machine, etc.), or a combination thereof. 
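As a hedged illustration of the validate, append, and broadcast behavior described above, the sketch below models each device's copy of the ledger as a simple list and uses a placeholder validation rule; the concrete validation protocol (proof-of-work, a public consensus protocol, or a private/custom protocol) is left open by the description.

```python
class PeerDevice:
    def __init__(self, name):
        self.name = name
        self.ledger = []

    def validate(self, update_record):
        # Placeholder rule: the record must name its update entity and target.
        return bool(update_record.get("update_entity")) and bool(update_record.get("target_device"))

    def receive(self, update_record):
        # Each device re-validates before adding the record to its own copy.
        if self.validate(update_record):
            self.ledger.append(update_record)

def apply_and_record(update_record, local_device, peers):
    """Validate an update, record it locally, and broadcast it to the network."""
    if not local_device.validate(update_record):
        return False
    local_device.ledger.append(update_record)
    for peer in peers:
        peer.receive(update_record)
    return True

devices = [PeerDevice(f"device-{i}") for i in range(3)]
apply_and_record({"update_entity": "entity-104A", "target_device": "device-0",
                  "version": "2.1.0"}, devices[0], devices[1:])
print([len(d.ledger) for d in devices])   # every copy now records the update
```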
For one example, each of the distributed ledger logic/modules101A-N performs one or more examples of techniques for updating a computer program installed on one or more interconnected programmable devices102A-N, as described herein. For some examples, each of the distributed ledger logic/modules101A-N is implemented as one or more special-purpose processors with tamper resistance features. Examples of such special-purpose processors include a trusted platform module (TPM) cryptoprocessor, an application specific integrated circuit (ASIC), an application-specific instruction set processor (ASIP), a field programmable gate array (FPGA), a digital signal processor (DSP), any type of cryptographic processor, an embedded processor, a co-processor, or any other type of logic with tamper resistance features that is capable of processing instructions. In this way, the ledger103can be implemented and maintained in a secure manner that assists with minimizing or preventing security vulnerabilities. For a further example, the distributed ledger logic/modules101A-N may be maintained separately from the components130A-N. For example, the distributed ledger logic/modules101A may be implemented as one or more special-purpose processors that is separate from the components130A-N. In the computer network100, each of the programmable devices102A-N includes one or more computer programs (e.g., software, firmware, etc.) for performing its operations and functionalities. Furthermore, each of device102A-N's computer program(s) may be updated as the computer program(s) are changed or modified by developers or third party updating services. These updates are usually in the form of major version updates, minor version updates, patches, hotfixes, maintenance releases, service packs, etc. The goal of updating computer program(s) installed on the programmable devices102A-N is to bring such a device up to date or to improve its characteristics. These improvements include, but are not limited to, fixing security vulnerabilities and other bugs, improving the device's functionality by adding new features, or improving power consumption and performance. Such updates, therefore, can be viewed as important features in the lifecycles of IoT devices, mobile computing devices, and cloud computing devices. For a specific example, each of the distributed ledger logic/modules101A-N is implemented in a trusted execution environment (TREE) of one or more processors of the devices102A-N. In this way, the TREE acts as an isolated environment for the ledger103that runs in parallel with the other computer programs (e.g., software, firmware, etc.) installed on the devices102A-N. Each of the update entities104A-N in the computer network100is a computer system that executes various types of processing including delivery of updates. Also, each of the update entities104A-N can include electronic components131A-N. Examples of the components131A-N include: processing unit(s) (such as microprocessors, co-processors, other types of integrated circuits (ICs), etc.); corresponding memory; and/or other related circuitry. As such, each of the update entities104A-N can be any of various types of computers, including general-purpose computers, workstations, personal computers, servers, etc. For one example, the update entities104A-N in the computer network100are associated with an external entity. 
For this example, the update entities104A-N include software update systems of manufacturers of device(s)102A-N and/or software update systems of 3rdparty updating services for the device(s)102A-N. As such, the update entities104A-N can deliver software updates106A-N from multiple update sources owned by different entities to the device(s)102A-N. Examples of software update systems associated with external entities include Internet-based update facilities that facilitate updates for software (e.g., operating systems, etc.) or firmware installed on one or more devices102A-N. The updates106A-N provided by the update entities104A-N can include virus definition updates used by virus scanning programs, drivers to improve functionalities of devices102A-N, updates to one or more applications installed on the devices102A-N, upgrades to major or minor versions of firmware or software installed on one or more of the device(s)102A-N, etc. Each of the updates106A-N can be in the form of a bundle, which is used herein to refer to a directory with a standardized hierarchical structure that holds executable code and the resources used by that code. For example, a bundle can include a major version upgrade, a minor version upgrade, a hotfix, a patch, and all resources required to install the bundle's contents on one or more of the devices102A-N. The devices102A-N and the entities104A-N communicate within the computer network100via one or more communication mechanisms105. These mechanisms105comprise one or more different types of communication networks, such as the Internet, enterprise networks, data centers, fiber networks, storage networks, WANs, and/or LANs. Each of the communication mechanisms105may provide wired and/or wireless connections between the devices102A-N and the entities104A-N that operate in the electrical and/or optical domain, and also employ any number of network communication protocols (e.g., TCP/IP). For example, one or more of the communication mechanisms105within the computer network100may be a wireless fidelity (Wi-Fi®) network, a Bluetooth® network, a Zigbee® network, and/or any other suitable radio based network as would be appreciated by one of ordinary skill in the art upon viewing this disclosure. It is to be appreciated by those having ordinary skill in the art that the communication mechanism(s)105may also include any required networking hardware, such as network nodes that are configured to transport data over computer mechanisms105. Examples of network nodes include, but are not limited to, switches, gateways, routers, network bridges, modems, wireless access points, networking cables, line drivers, switches, hubs, and repeaters. For example, at least one of the devices102A-N and/or at least one of the entities104A-N implements the functionality of a network node. One or more of the communication mechanisms105within the computer network100may be configured to implement computer virtualization, such as virtual private network (VPN) and/or cloud based networking. For one example, at least one of the devices102A-N and/or at least one of the entities104A-N comprises a plurality of virtual machines (VMs), containers, and/or other types of virtualized computing systems for processing computing instructions and transmitting and/or receiving data over communication mechanism105. 
Furthermore, at least one of the devices102A-N and/or at least one of the entities104A-N may be configured to support a multi-tenant architecture, where each tenant may implement its own secure and isolated virtual network environment. Although not illustrated inFIG.1, the computer network100can enable at least one of the devices102A-N and/or at least one of the entities104A-N to connect to a variety of other types of programmable devices, such as VMs, containers, hosts, storage devices, wearable devices, mobile devices, and/or any other device configured to transmit and/or receive data using wired or wireless communication mechanisms105. For some examples, the communication mechanism(s)105comprise a cellular network for use with at least one of the devices102A-N and/or at least one of the entities104A-N. For this example, the cellular network may be capable of supporting of a variety of devices102A-N and/or the entities104A-N that include, but are not limited to computers, laptops, and/or a variety of mobile devices (e.g., mobile phones, self-driving vehicles, ships, and drones). The cellular network can be used in lieu of or together with at least one of the other communication mechanisms105described above. Cellular networks are known so they are not described in detail in this document. In some situations, updates106A-N for the computer program(s) installed on the devices102A-N are meant to fix problems. However, these updates106A-N can sometimes introduce new problems (e.g., a software regression, etc.). In some scenarios, an update to a single one of the devices102A-N (e.g., device102A, etc.) can disable one or more devices102A-N (e.g., one or more devices102B-N, etc.), which can in turn cause risks to the operational integrity of the computer network100. If an update (e.g., a hotfix, a patch, etc.) to a computer program that is installed on one or more of the devices102A-N includes new functionality for addressing security vulnerabilities of the computer program, this new functionality may have a more negative effect on the availability of the devices102A-N than previous versions of the installed computer program thought to present the security vulnerabilities. The distributed ledger103, as implemented by the distributed ledger logic/modules101A-N, can assist with minimizing or eliminating at least one of the problems described in the immediately preceding paragraph. This is because the distributed ledger103operates based on the concept of decentralized consensus, as opposed to the currently utilized concept of centralized consensus. Centralized consensus is the basis of the client/server model and it requires one central database or server for deciding which updates are provided to the device(s)102A-N, and as a result, this can create a single point of failure that is susceptible to security vulnerabilities. In contrast, the distributed ledger103operates based on a decentralized scheme that does not require a central database for deciding which updates are provided to one or more of the devices102A-N. For one example, the computer network100enables its nodes (e.g., the devices102A-N) to continuously and sequentially record the application of the updates106A-B to the devices102A-N in a unique chain—that is, in the distributed ledger103. For one example, the distributed ledger103is an append-only record of the updates106A-B applied to the devices102A-N that is based on a combination of cryptography and blockchain technology. 
For this example, each successive block of the distributed ledger103comprises a unique fingerprint of the previously applied update. This unique fingerprint can include at least one of: (i) a hash as is known in the art of cryptography (e.g., SHA, RIPEMD, Whirlpool, Scrypt, HAS-160, etc.); or (ii) a digital signature generated with a public key, a private key, or the hash as is known in the art of generating digital signatures. Examples of digital signature algorithms include secure asymmetric key digital signing algorithms. One advantage of the distributed ledger103is that it can assist with securing the authentication of the update source (e.g., the update entities104A-N), which in turn removes the need for the central database or server that is required in the client/server model. Consequently, the distributed ledger103can assist with ensuring that there is never a duplicate one of the updates106A-B being applied more than once to the devices102A-N. For example, when the device102A receives an update106A from the entity104A and an update106B from the entity104B, the distributed ledger logic/module101A records the sequence of applying updates106A-B to the computer program(s) installed on the device102A. For this example, the records created by the distributed ledger logic/module101A in the ledger103are communicated via the communication mechanism(s)105to every other copy of the ledger103that is stored on or available to the other distributed ledger logic/modules101B-N. In this way, and for this example, the distributed ledger103enables all of the devices102A-N to maintain a record of when and where the updates106A-B were applied, which can assist with determining points of failure and minimizing security vulnerabilities. The distributed ledger103, as a blockchain, includes information stored in its header that is accessible to the device(s)102A-N and/or the entities104A-N, which enables the device(s)102A-N and/or the entities104A-N to “view” the sequence of updates106A-N that have been applied to the device(s)102A-N. In this way, the distributed ledger103is a software design approach that binds the devices102A-N and/or the entities104A-N together such that they commonly obey the same consensus process for releasing or recording what information they hold, and where all related interactions are verified by cryptography. The distributed ledger103can be a private blockchain or a public blockchain. Furthermore, the ledger103can be a permissioned blockchain or a permissionless blockchain. One issue associated with distributed ledgers that are based on blockchain technology is that they are resource-intensive. That is, they require a large amount of processing power, storage capacity, and computational resources that grow as the ledger is replicated on more and more devices. This issue is based, at least in part, on the requirement that every node or device that includes a ledger must process every transaction in order to ensure security, which can become computationally expensive. As such, each device that includes the ledger may need access to a sizable amount of computational resources. On programmable devices with fixed or limited computational resources (e.g., mobile devices, vehicles, smartphones, laptops, tablets, media players, microconsoles, IoT devices, etc.), processing a ledger may prove difficult. At least one example of the distributed ledger103described herein can assist with minimizing the resource-intensive issue described above. 
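A minimal sketch of the block chaining and fingerprinting described above follows; it uses a SHA-256 hash as the fingerprint, and the digital-signature variant would sign the same data instead of, or in addition to, hashing it. The class and field names are illustrative assumptions, not a format defined by this description.

# Sketch of chaining blocks where each new block records a SHA-256 fingerprint
# of the previously applied update and of the prior block's header.
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Block:
    def __init__(self, update_payload: bytes, previous_fingerprint: str):
        self.timestamp = time.time()
        self.previous_fingerprint = previous_fingerprint      # fingerprint of the prior block/update
        self.update_fingerprint = fingerprint(update_payload) # fingerprint of the applied update

    def header(self) -> dict:
        return {
            "timestamp": self.timestamp,
            "previous_fingerprint": self.previous_fingerprint,
            "update_fingerprint": self.update_fingerprint,
        }

# Genesis block for the first configuration (ML.0), then a block for update 106A.
genesis = Block(update_payload=b"ML.0 configuration image", previous_fingerprint="0" * 64)
block_1 = Block(update_payload=b"update 106A bundle",
                previous_fingerprint=fingerprint(json.dumps(genesis.header(), sort_keys=True).encode()))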
For one example, the distributed ledger103is not constructed as a monolithic blockchain with all of its blocks existing on all of the devices102A-N and/or the entities104A-N. Instead, the distributed ledger103is constructed as a light ledger based on, for example, the light client protocol for the ethereum blockchain, the light client protocol for the bitcoin blockchain, etc. In this way, the ledger103may be replicated on the devices102A-N and/or the entities104A-N on an as-needed basis. For one example, any one of the devices102A-N and/or the entities104A-N that is resource-constrained will only store the most recent blocks of the ledger103(as opposed to all of the blocks of the ledger103). For this example, the number of blocks stored by a particular device or entity can be determined dynamically based on its storage and processing capabilities. For example, any one of the devices102A-N and/or the entities104A-N can store (and also process) only the current block and the immediately following block of the ledger103. This ensures that any consensus protocols required to add new blocks to the ledger103can be executed successfully without requiring all the devices102A-N and/or the entities104A-N to store the ledger103as a large monolithic blockchain. For another example, each block of the ledger103may be based on a light client protocol such that the block is broken into two parts: (a) a block header showing metadata about which one of the updates106A-N was committed to the block; and (b) a transaction tree that contains the actual data for the committed one of the updates106A-N in the block. For this example, the block header can include at least one of the following: (i) a hash of the previous block's block header; (ii) a Merkle root of the transaction tree; (iii) a proof of work nonce; (iv) a timestamp associated with the committed updates106A-N in the block; (v) a Merkle root for verifying existence of the committed one of the updates106A-N in the block; or (vi) a Merkle root for verifying whether the committed one of the updates106A-N in the block was applied to a configuration of a computer program installed on one or more of the devices102A-N. For this example, the devices102A-N and/or the entities104A-N having the ledger103can use the block headers to keep track of the entire ledger103, and request a specific block's transaction tree only when processing operations need to be performed on the ledger103(e.g., adding a new block to the ledger103, etc.). For yet another example, the ledger103can be made more resource-efficient by being based on the epoch Slasher technique associated with the light client protocol for the ethereum blockchain. In some instances, a blockchain synchronization algorithm is required to maintain the ledger103across the devices102A-N and/or the entities104A-N. Here, the blockchain synchronization algorithm enables nodes of the system100(e.g., one or more of the devices102A-N and/or the entities104A-N) to perform a process of adding transactions to the ledger103and agreeing on the contents of the ledger103. The blockchain synchronization algorithm allows for one or more of the devices102A-N and/or the entities104A-N to use the ledger103, as a blockchain, to distinguish legitimate transactions (i.e., software updates) from attempts by an attacker (e.g., man-in-the-middle attacks, etc.) to compromise the ledger103so that it includes false or faulty information. 
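The light-client block split listed above (a compact header that resource-constrained devices keep, plus a transaction tree fetched only when needed) can be sketched as follows; the Merkle-root computation is deliberately naive and the field names are illustrative assumptions.

# Sketch of a light-client style block header with the fields enumerated above.
import hashlib
from typing import List

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: List[bytes]) -> bytes:
    """Pairwise-hash leaves until a single root remains (duplicating a lone tail)."""
    level = [sha256(leaf) for leaf in leaves] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def make_block_header(prev_header_hash: bytes, committed_update: bytes,
                      nonce: int, timestamp: float) -> dict:
    return {
        "prev_header_hash": prev_header_hash,          # (i) hash of the previous block header
        "tx_root": merkle_root([committed_update]),    # (ii) Merkle root of the transaction tree
        "nonce": nonce,                                 # (iii) proof-of-work nonce
        "timestamp": timestamp,                         # (iv) when the update was committed
    }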
Executing the blockchain synchronization algorithm is designed to be resource-intensive so that the individual blocks of the ledger103must contain a proof to be considered valid. Examples of proofs include, but are not limited to, a proof of work and a proof of stake. Each block's proof is verified by the devices102A-N and/or the entities104A-N when they receive the block. In this way, the blockchain synchronization algorithm assists with allowing the devices102A-N and/or the entities104A-N to reach a secure, tamper-resistant consensus. For one example, the blockchain synchronization algorithm is embedded in the system100and performed by at least one of the devices102A-N and/or the entities104A-N. For example, one or more of the devices102A-N and/or the entities104A-N may include an FPGA that is dedicated to performing and executing the blockchain synchronization algorithm. For this example, the FPGA generates the proofs for the blocks to be included in the ledger103. Also, and for this example, the blocks are added to the ledger103only through verification and consensus (as described above). The blockchain synchronization algorithm can be performed by: (i) any of the devices102A-N and/or the entities104A-N; or (ii) multiple of the devices102A-N and/or the entities104A-N. For a further example, generating proofs for new blocks is performed in response to automatically determining the complexity of the operation given the availability of resources in the system100. In this way, the resources of system100can be utilized more efficiently. For another example, the blockchain synchronization algorithm is performed outside of the system100by, for example, a synchronization device (not shown). This synchronization device can be paired to one or more of the devices102A-N and/or the entities104A-N having the ledger103. For example, one or more of the devices102A-N may be paired via communication mechanism(s)105to a synchronization device outside the system100. For this example, the synchronization device includes electronic components that are similar to components130A-N (which are described above). Also, and for this example, each transaction (e.g., a software update, a record of a software update, etc.) is communicated to the synchronization device via the communication mechanism(s)105using one or more secure communication techniques. Here, the synchronization device generates the proof required for verification and consensus and communicates it back to the system100. For yet another example, the ledger103may be maintained across the system100without using the blockchain synchronization algorithm. As a first example, the ledger103may be implemented as a distributed database. For a second example, the ledger103may be maintained across the system100as a distributed version control system (DVCS), which is also sometimes known as a distributed revision control system (DVRS). Examples of a DVCS include, but are not limited to, ArX, BitKeeper, Codeville, Dares, DCVS, Fossil, Git, and Veracity. The ledger103can also be made as a combination of the immediately preceding examples. For one example, the ledger103is implemented with the blockchain synchronization algorithm in response to determining that resources of the system100are sufficient for the resource-intensive synchronization process. For this example, the ledger103is implemented without the blockchain synchronization algorithm in response to determining that resources of the system100are not enough for the synchronization process. 
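The proof requirement described above can be illustrated with a toy proof-of-work sketch: the synchronization step searches for a nonce whose hash meets a difficulty target, and peers verify the proof cheaply before accepting the block. The difficulty scheme (leading zero hexadecimal digits) is an illustrative assumption, not the specific proof defined by this description.

# Toy proof-of-work: expensive to generate, cheap for peers to verify.
import hashlib

def generate_proof(block_bytes: bytes, difficulty: int = 4) -> int:
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(prefix):
            return nonce                    # resource-intensive search
        nonce += 1

def verify_proof(block_bytes: bytes, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)   # inexpensive check by every peer

nonce = generate_proof(b"block 251: update 106A applied to ML.0")
assert verify_proof(b"block 251: update 106A applied to ML.0", nonce)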
Enabling the devices102A-N to record the applied ones of updates106A-N to the ledger103and/or enabling the update entities104A-N to commit updates106A-N to the ledger103can be based on the enhanced privacy identification (EPID) protocol, e.g., the zero proof protocol. For an example based on the zero proof protocol, one or more of the devices102A-N (e.g., device102A, etc.) acts as a verifier that determines whether other ones of the devices102A-N (e.g., devices102B-N, etc.) and/or one or more update entities104A-N are members of a group of devices that have been granted the privilege to have their actions processed and added to the blockchain represented as the ledger103. For this example, each of the devices102A-N (e.g., devices102B-N, etc.) and/or the one or more update entities104A-N that has privilege to access the ledger103cryptographically binds its corresponding public-key to the zero-knowledge proof sent to the verifier, resulting in that public-key being recognized as an identity that has obtained permission to perform actions on the blockchain represented as the ledger103. For one example, the device(s)102A-N (e.g., device102A, etc.) acting as the verifier adds the verified public-key to the ledger103. Thus, the ledger103can maintain its own list of devices102A-N and/or entities106A-N that can interact with the ledger103. In this way, the device(s)102A-N (e.g., device102A, etc.) acting as the verifier ensures that any of the devices102A-N and/or entities106A-N that writes to the ledger103is authorized to do so. To assist with security, and for one example, the ledger103can be accessible to the update entities104A-N only via public key cryptography. Here, public keys associated with the ledger103can be disseminated to the entities104A-N, on an as-needed basis, with private keys associated with the ledger103, which would be known only to users of the devices102A-N. In this way, public key cryptography can be used for two functions: (i) using the public key to authenticate that an update originated with one of the entities104A-N that is a holder of the paired private key; or (ii) encrypting an update provided by one of the entities104A-N with the public key to ensure that only users of the devices102A-N, which would be the holders of the paired private key can decrypt the update. For example, and for one example, the entity104A cannot commit the update106A to the ledger103unless the entity104A is granted access to the ledger103via public key cryptography and/or unless the entity104A has been verified via the zero proof protocol described above. While, the public key may be publicly available to the entities104A-N, a private key and/or prior verification via the zero proof protocol will be necessary to commit the updates106A to the ledger103. For this example, the private key can be provided to the entity104A via the communication mechanism(s)105by the logic/module101A in response to input provided to the device102A by a user. Based on a combination of public key cryptography and/or the verification via the zero proof protocol, the entity104A is enabled to commit update106A to the ledger103. As shown by the immediately preceding example, only users of devices102A-N can provide the update entities104A-N with access to the ledger103. This has an advantage of minimizing or eliminating the risk of security vulnerabilities (e.g., man-in-the-middle attacks, eavesdropping, unauthorized data modification, denial-of-service attacks, sniffer attacks, identity spoofing, etc.) 
because the users will always know which ones of the entities104A-N have been granted access to their devices102A-N via the ledger103. For one example, the private key can include information that grants the update entities104A-N access to the ledger103for a limited period of time (e.g., 10 minutes, 1 hour, any other time period, etc.). Thus, security is further bolstered by preventing an entity104A-N from having unfettered access to the devices102A-N and/or the ledger103. To assist with minimizing or eliminating security vulnerabilities, the users of the devices102A-N can revoke the access granted to any suspicious ones of the entities104A-N. For example, the logic/module101A can update the ledger103to reflect suspicious ones of the entities104A-N in response to the logic/module101A receiving input provided to the device102A by a user via the communication mechanism(s)105. In this way, the logic/modules101A-N can reject any requests for access to the ledger103from unauthorized ones of the entities104A-N. For a further example, the device(s)102A-N (e.g., device102A) that act as the verifier can prevent or remove any suspicious devices102A-N (e.g., device102B-N) and/or entities104A-N from the verified group described above. The immediately preceding example can be performed in response to user provided inputs. FIG.2is a sequence diagram illustrating a technique200for updating a computer program installed on one or more interconnected programmable devices according to one example. The technique200can be performed by one or more elements of the network100described above in connection withFIG.1, such as a processor (e.g., a cryptoprocessor, a TPM-compliant processor, etc.) implementing a distributed ledger module/logic (e.g., the distributed ledger logic/module101A described above in connection withFIG.1, etc.). Technique200includes some elements of the network100described above in connection withFIG.1. For the sake of brevity, these elements are not described again. Technique200begins at operation201, where a distributed ledger logic/module performing technique200commits a first configuration of a computer program (ML.0) installed on the device102A to a distributed ledger103. For one example, the first configuration of a computer program (ML.0) is committed and recorded to a block250of the ledger103. For one example, a corresponding hash and/or digital signature is generated for the block250, which is provided to copies of the ledger103residing on other devices (e.g., devices102B-N described above in connection withFIG.1). Next, technique200proceeds to operations202and203. Here, an update entity104A transmits a request to the device102A regarding a first update (B1) to be applied to the first configuration of a computer program (ML.0). Also, an update entity104B transmits a request to the device102A regarding a second update (B2) to be applied to the first configuration of a computer program (ML.0). In some scenarios, the multiple update entities104A-B may cause the device102A to become improperly updated. This can occur, for example, when the update entity104A prepares the first update (B1) based on an understanding that the first configuration of the computer program (ML.0) is the current configuration of the computer program installed on the device102A, while the update entity104B also prepares the second update (B2) based on an understanding that the first configuration of the computer program (ML.0) is the current configuration of the computer program installed on the device102A. 
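Before turning to the race scenario ofFIG.2, the authorization flow described above can be sketched. A full EPID or zero-knowledge membership proof is beyond a short example, so this stand-in sketch uses an ordinary Ed25519 signature (via the third-party cryptography package, an assumption of this sketch) to check that an update came from an entity whose public key the verifier device has recorded.

# Stand-in sketch: accept a commit only if it verifies under an authorized key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Update entity 104A generates a key pair; the verifier device records the public key.
entity_key = Ed25519PrivateKey.generate()
authorized_public_keys = [entity_key.public_key()]   # list maintained via the ledger

update_payload = b"update 106A bundle"
signature = entity_key.sign(update_payload)

def may_commit(payload: bytes, sig: bytes) -> bool:
    """Accept the commit only if some authorized key verifies the signature."""
    for pub in authorized_public_keys:
        try:
            pub.verify(sig, payload)
            return True
        except InvalidSignature:
            continue
    return False

assert may_commit(update_payload, signature)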
In such a situation, the update entities104A-B may race each other to apply their respective updates. Furthermore, even if one of the entities104A-B successfully applies its updates, the other one entity's update may fail or cause the device102A to be improperly updated. For one example, the device102A resolves this issue by posting the updates of the update entities104A-B to the distributed ledger103. In this way, the ledger103can ensure a winner, and assist with minimizing a failure of device102A caused by the improper update process described above. At operation204, the update entity104A successfully applies the first update (B1) to the first configuration of the computer program (ML.0) to create a first updated computer program (B1′=B1+ML.0). Furthermore, at operation205, the update entity104B attempts to apply the second update (B2) to the first configuration of the computer program (ML.0) to create a non-existent first updated computer program (B2′=B2+ML.0). Given that operation204was successful, the device102A receives the first updated computer program (B1′=B1+ML.0) from the update entity104A, as shown in operation206. For a further example, operation206includes the update entity104A committing the first updated computer program (B1′=B1+ML.0) to the device102A. Furthermore, the logic/module101A directs the device102A to commit the first updated computer program (B1′=B1+ML.0) to the ledger103, as shown in operation207. As shown inFIG.2, the first updated computer program (B1′=B1+ML.0) is recorded in block251of the ledger103by the logic/module101A as a second configuration of the computer program (ML.1). At operation208, the device102A transmits an acknowledgement that is received by the update entity104A to indicate that the first updated computer program (B1′=B1+ML.0) was successfully received by the device102A and/or successfully committed to the ledger103as the second configuration of the computer program (ML.1). With regard to the unsuccessful operation205, the update entity104B attempts to commit or transmit the non-existent first updated computer program (B2′=B2+ML.0) to the device102A, as shown in operation210. At operation211, the device102A transmits an acknowledgement that is received by the update entity104B to indicate that the non-existent first updated computer program (B2′=B2+ML.0) was not successfully received by and/or installed on the device102A. Next, in response to operation209, the device102A determines that the first configuration of the computer program (ML.0) no longer exists because the ledger103includes the second configuration of the computer program (ML.1). This determination can include the distributed ledger logic/module101A examining the most recent block of the ledger103to determine the latest configuration of the computer program. Based on this determination, and at operation212, the update entity104B successfully applies the second update (B2) to the second configuration of the computer program (ML.1) to create a second updated computer program (B2″=B2+ML.1). Next, and as shown in operation213, the device102A receives the second updated computer program (B2″=B2+ML.1) from the update entity104B. For a further example, operation213includes the update entity104B committing the second updated computer program (B2″=B2+ML.1) to the device102A. Furthermore, the logic/module101A directs the device102A to commit the second updated computer program (B2″=B2+ML.1) to the ledger103, as shown in operation214. 
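The sequence of operations just described can be summarized with a small sketch: an update is accepted only if it was prepared against the configuration recorded in the most recent block, so the stale attempt is rejected and must be reapplied against the current configuration, mirroring ML.0, ML.1, and ML.2. The data structures below are illustrative assumptions.

# Sketch of resolving the update race by consulting the ledger's latest block.
ledger_blocks = ["ML.0"]                      # block 250 records the first configuration

def current_configuration() -> str:
    return ledger_blocks[-1]

def try_commit(update_name: str, built_against: str) -> bool:
    if built_against != current_configuration():
        return False                          # stale base: entity must rebase and retry
    ledger_blocks.append(f"{current_configuration()}+{update_name}")
    return True

assert try_commit("B1", built_against="ML.0")                     # entity 104A wins (block 251)
assert not try_commit("B2", built_against="ML.0")                 # entity 104B's first attempt fails
assert try_commit("B2", built_against=current_configuration())    # retried against ML.1 (block 253)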
The second updated computer program (B2″=B2+ML.1) is recorded in block253of the ledger103by the logic/module101A as a third configuration of the computer program (ML.2). At operation215, the device102A transmits an acknowledgement that is received by the update entity104B to indicate that the second updated computer program (B2″=B2+ML.1) was successfully installed on the device102A and/or successfully committed to the ledger103as the third configuration of the computer program (ML.2). For any additional requests to update the computer program installed on the device102A, operation216can be performed before application of such updates to ensure that the updates are applied to the proper configuration of the computer program. Specifically, the device102A performs operation216to determine that the first configuration of the computer program (ML.0) and the second configuration of the computer program (ML.1) no longer exists because the ledger103includes the third configuration of the computer program (ML.2). Technique200can assist with allowing updates that include multiple components, firmware, system software, application binaries, application interpreters and/or data model scripts to be updated successfully in situations where these updates are dynamically received from multiple sources. Referring now toFIG.3, which is a sequence diagram illustrating a technique300for updating a computer program using a distributed ledger103in accord with one example. The technique300can be performed by one or more elements of the network100described above in connectionFIG.1. For example, a processor (e.g., a cryptoprocessor, a TPM-compliant processor, etc.) implementing a distributed ledger module/logic (e.g., the distributed ledger logic/module101A described above in connection withFIG.1, etc.). Technique300includes some elements of the technique200described above in connection withFIG.2and some elements of the network100described above in connection withFIG.1. For the sake of brevity, these elements are not described again. One feature of the distributed ledger103, which is based on blockchain technology, is the ability to resolve forks attributable to the devices102A-N and/or the entities104A-N that have access to the ledger103attempting to add blocks to the end of the chain by finding a nonce that produces a valid hash for a given block of data. When two blocks are found that both claim to reference the same previous block, a fork in the chain is created. Some of the devices102A-N and/or the entities104A-N in the network100will attempt to find the next block on one end of the fork while other ones of the devices102A-N and/or the entities104A-N in the network100will work from the other end of the fork. Eventually one of the forks will surpass the other in length, and the longest chain is accepted by consensus as the valid chain. This is usually achieved using a consensus algorithm or protocol. Therefore, intruders attempting to change a block must not only re-find a valid hash for each subsequent block, but must do it faster than everyone else working on the currently accepted chain. Thus, after a certain number of blocks have been chained onto a particular block, it becomes a resource-intensive task to falsify contents of a block, which assists with minimizing or eliminating security vulnerabilities. 
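The fork-resolution rule described above can be reduced to a simple sketch: when two chains reference the same ancestor, peers keep extending their preferred fork, and the longer chain is eventually accepted as valid. A real protocol would also verify each block's proof; the chain names below are illustrative.

# Simplified longest-chain selection between two competing forks.
from typing import List

def choose_valid_chain(fork_a: List[str], fork_b: List[str]) -> List[str]:
    """Consensus by length; block proofs are assumed to have been verified already."""
    return fork_a if len(fork_a) >= len(fork_b) else fork_b

chain_331 = ["ML.0", "ML.1", "ML.2"]                 # fork containing the flawed update
chain_333 = ["ML.0", "ML.1'", "ML.2'", "ML.3'"]      # corrected fork that keeps growing
assert choose_valid_chain(chain_331, chain_333) == chain_333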
For one example, this ability to resolve forks can be used to ensure that the updates106A-N are properly applied, especially in situations where rollback operations and roll-forward operations are necessary to deal with problematic updates. Referring again toFIG.3, which illustrates two chains331and333. Each of the chains331and333represents a fork that is created due to a rollback operation301, correction operation303, and/or a roll-forward operation305. Technique300can begin when at least one of the logic/modules101A-N detects or determines that a second configuration of the computer program (ML.1), which is recorded in block251of the ledger103includes a flaw that affects an operation of the programmable device102A. Such a flaw can, for example, be the result of applying a faulty update to the first configuration of the computer program (ML.0), as described above. The result of such an application can create a defective second configuration of the computer program (ML.1) that causes one or more of the devices102A-N of the network100to crash or malfunction. The one or more logic/modules101A-N can detect a flaw in a configuration of a computer program installed on the devices102A using one or more software configuration management (SCM) techniques. One example of an SCM technique includes analyzing one or more checksums of software images associated with updating configurations of a computer program installed on the device102A (e.g., any of updates106A-N applied to the device102A, etc.). For this example, the one or more logic/modules101A-N detect the flaw in the second configuration of the computer program (ML.1) installed on or received by the device102by comparing: (i) a first checksum associated with the second configuration (ML.1) received by the device102A from the update entity104A; and (ii) a second checksum associated with the second configuration (ML.1) that was transmitted by the update entity104A to the device102A. For another example of an SCM technique, a watchdog timing technique and/or a heartbeat timing technique can be used to detect a flaw that results from applying an update to the first configuration of the computer program (ML.0). A watchdog timing technique includes the device102A periodically resetting a timer before the timer expires to indicate that there are no errors in the operation of the device102A. When the device102A does not reset its timer, it is assumed that the operation of device102A is flawed. Thus, the one or more logic/modules101A-N can detect the flaw in the second configuration of the computer program (ML.1) installed on device102A when the one or more logic/modules101A-N determine that the device102A failed to reset its timer after application of an update to the first configuration of the computer program (ML.0). A heartbeat timing technique generally includes the device102A transmitting a heartbeat signal with a payload to another device (e.g., any of devices102B-N, etc.) in the network (e.g., network100, etc.) to indicate that the device102A is operating properly. Thus, one or more logic/modules101A-N can detect the flaw in the second configuration of the computer program (ML.1) installed on device102A when the one or more logic/modules101A-N determine that the device102A failed to transmit its heartbeat signal on time after application of an update to the first configuration of the computer program (ML.0). 
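Two of the flaw-detection checks described above can be sketched briefly: comparing the checksum of the configuration the entity transmitted with the checksum of what the device received, and a watchdog that flags a device that stops resetting its timer. Both are minimal illustrations, not the specific SCM implementation.

# Checksum comparison and a simple watchdog timer used to detect a flawed update.
import hashlib
import time

def checksums_match(transmitted_image: bytes, received_image: bytes) -> bool:
    return hashlib.sha256(transmitted_image).hexdigest() == \
           hashlib.sha256(received_image).hexdigest()

class Watchdog:
    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_reset = time.time()

    def reset(self) -> None:
        self.last_reset = time.time()        # device "kicks" the watchdog while healthy

    def flawed(self) -> bool:
        return time.time() - self.last_reset > self.timeout

assert not checksums_match(b"ML.1 as transmitted", b"ML.1 as received (corrupted)")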
The watchdog timing technique and/or the heartbeat timing technique can be implemented in a processor (e.g., fault-tolerant microprocessor, etc.) of the device102A. For yet another example of an SCM technique, exception handling techniques (e.g., language level features, checking of error codes, etc.) can be used by the logic/module101A to determine that the second configuration of the computer program (ML.1) installed on device102A is flawed. For a specific example of an exception handling technique that applies when the device102A includes or executes a script, the one or more logic/modules101A-N can determine that the second configuration of the computer program (ML.1) installed on device102A is flawed when the one or more logic/modules101A-N determine that the device102A failed to output or return a result message (e.g., an exit status message, a result value, etc.) to indicate that the script was successfully run or executed after an update was applied to the first configuration of the computer program (ML.0). The one or more logic/modules101A-N can request the result message from the processor(s) of the device102A running or executing the script. The description in the immediately preceding paragraph assumes that the device102A was operating properly when the first configuration of the computer program (ML.0) was installed on the device102A and began malfunctioning when the second configuration of the computer program (ML.1) was installed on the device102A. In response to detecting the flaw, at least one of the logic/modules101A-N can perform a rollback operation301to return the computer program to a previous state—that is, to return the computer program from the defective second configuration (ML.1) recorded in block251of the ledger103to the properly functioning first configuration (ML.0) recorded in block250of the ledger103. The rollback operation301can assist with restoring a computer program installed on the device102A to a previously known configuration that was functioning properly. This is important in situations where the actual effect of an update may be unknown or speculative, which could result in a configuration of the computer program that is in an inconsistent state. Next, the technique300proceeds to a correction operation303. Here, at least one of the logic/modules101A-N performs the correction operation303to correct the detected flaw, such that updating the first configuration of the computer program (ML.0) to a later configuration will not create unwanted effects. Specifically, the flaw is corrected such that the first configuration of the computer program (ML.0) recorded in block250of the ledger103is updated to a second corrected configuration of the computer program (ML.1′) recorded in block355of the ledger103. This creates a fork in the ledger103that results in the chain333. For an example, updates continue to be applied to the blocks251and355of the ledger103. In this way, each of the chains331and333continues to grow. For one example, the same update is applied to the configurations of the computer program recorded in blocks251and355to create the updated configurations of the computer program recorded in blocks253and357, respectively. Also, at least one of the logic/modules101A-N can determine that the configuration of the computer program recorded in block251is flawed. 
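The rollback-and-correction flow just described can be sketched as follows: the flawed branch is kept, the device reverts to the last known-good configuration, and the corrected configuration is recorded on a new branch, so the original chain331and the corrected chain333grow side by side. The names mirror the figure; the data structures are illustrative assumptions.

# Sketch of reverting to ML.0 (block 250) and starting a corrected branch (block 355).
chain_331 = ["ML.0", "ML.1"]        # blocks 250 and 251 (ML.1 later found flawed)

def rollback(chain):
    """Return the last known-good configuration (here, the previous block)."""
    return chain[-2]

def fork_with_correction(chain, corrected_configuration):
    """Start a new branch from the known-good block and record the correction."""
    good = rollback(chain)
    return [good, corrected_configuration]     # e.g. blocks 250 and 355

chain_333 = fork_with_correction(chain_331, "ML.1'")
assert chain_333 == ["ML.0", "ML.1'"]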
Next, at least one of the logic/modules101A-N can perform a roll-forward operation305to replace the flawed configuration recorded in block251with the properly functioning configuration of the computer program recorded in block357of chain333. Consequently, the chain331becomes shorter than the chain333. Thus, a plurality of the logic/modules101A-N would agree, based on a consensus algorithm or protocol, that the chain333is the valid chain. In this way, issues associated with faulty updates can be rectified using the distributed ledger103. An advantage of this example is that faulty updates can be removed from the network100based on the blocks of the distributed ledger103, such that all of the devices102A-N can eventually eliminate any faulty configurations of computer programs as they reach a consensus of which configurations are good ones. Detecting flaws in the configurations of the computer program may occur as a result of audits, forensics or other investigation of configurations installed on the devices102A-N or found on the ledger103. For one example, the one or more logic/modules101A-N used to perform technique300can reside in any one of devices102A-N in the network100or in a TREE of a processor of one or more of the devices102A-N that may execute independently of the device(s)102A-N. For one example, the configurations of the computer program recorded in blocks250,251,253,355,357, and359of the ledger103are maintained in bundles in order to keep versioning simple. That is, each bundle representing one of the configurations of the computer program recorded in blocks250,251,253,355,357, and359of the ledger103contains all the packages, files and other dependencies that may be implicated when performing an update. For one example, a bundle version string is used by the one or more logic/modules101A-N to track the first configuration of the computer program and any subsequent updates applied to the first configuration that result in one or more later configurations of the computer program. One benefit of bundles is that each file in a bundle has an integrity hash that can be compared with the installed file hash, which can assist with improving update speed. With regard now toFIG.4, which is a sequence diagram illustrating a technique400for updating a computer program installed on programmable device102A using a distributed ledger103according to another example. The technique400can be performed by one or more elements of the network100described above in connectionFIG.1. For example, a processor (e.g., a cryptoprocessor, a TPM-compliant processor, etc.) implementing a distributed ledger module/logic (e.g., the distributed ledger logic/module101A described above in connection withFIG.1, etc.). Technique400includes some elements of the techniques200and300described above in connection withFIGS.2and3, as well as, some elements of the network100described above in connection withFIG.1. For the sake of brevity, these elements are not described again. A distributed ledger module/logic (e.g., one or more of the logic/modules101A-N) may perform the technique400when the update entities104A-B and the device102A have a contract to manage updates. For one example, each contract can be a smart contract—that is, a state stored in the blockchain represented as the distributed ledger103that facilitates, authenticates, and/or enforces performance of a contract between the update entities104A-B and the device102A. 
Consequently, a smart contract is one feature of the ledger103, as a blockchain, that can assist the one or more distributed ledger modules/logic101A-N with keeping track of a current configuration of a computer program installed on the device102A. This is beneficial because a smart contract can enable the ledger103to remain stable, even as account servicing roles are transferred or passed between the update entities104A-B. Technique400, as described below and in connection withFIG.4, presents an example smart contract that delineates the order of transmitting or applying updates by the update entities104A-B to computer programs installed on the device102A. Technique400begins at operation401, where a distributed ledger module/logic registers the device102A with the ledger103. This can be performed by creating a genesis block (when the ledger103lacks any blocks) or appending a block to an already existing ledger103. For one example, a distributed ledger module/logic registers the device102A with the ledger103by committing the current configuration of the computer program installed on the device102A to the ledger103. At operation403, a distributed ledger module/logic performing the technique400transmits the current configuration of the computer program installed on the device102A to an update entity104A. Operation403can, for one example, be performed in response to the update entity104A requesting application of an update to the current configuration of the computer program. At operations405and407, the update entity104A can identify the proper update and apply the identified update to the current configuration of the computer program received from the device102A to generate a first updated configuration of the computer program. Operation407can, for one example, include the distributed ledger module/logic performing the technique400causing the device102A to receive the first updated configuration from the update entity104A. For a further example, operation407can include the distributed ledger module/logic performing the technique400causing the device102A to commit the first updated configuration to the device102A. A distributed ledger module/logic performing the technique400can, at operation409, register the device102A with its first updated configuration of the computer program. Similar to operation401, registering the device102A with its first updated configuration of the computer program includes creating a block to record the first updated configuration of the computer program and committing the first updated configuration to the ledger103. Furthermore, the distributed ledger module/logic performing the technique400can, at operation411, inform the update entity104B that the update entity104A applied updates to the current configuration of the computer program installed on the device102A. After operation411, the distributed ledger module/logic performing the technique400transmits its first updated configuration of the computer program based on the most current block of the ledger103in operation413. At operations415and417, the update entity104B can identify the proper update and apply the identified update to the first updated configuration of the computer program received from the device102A to generate a second updated configuration of the computer program. Operation417can, for one example, include the distributed ledger module/logic performing the technique400causing the device102A to receive the second updated configuration from the update entity104B. 
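The ordering that the example smart contract enforces across these operations can be sketched as a simple state machine kept on the ledger: it records the current registered configuration and only registers an updated configuration that was built against it, so each update entity takes its turn against the latest block. This is an illustrative sketch, not the contract format of the ledger103.

# Sketch of a contract-like state that enforces update ordering between entities.
class UpdateContract:
    def __init__(self, initial_configuration: str):
        self.history = [initial_configuration]      # one entry per registered configuration

    @property
    def current(self) -> str:
        return self.history[-1]

    def register(self, new_configuration: str, built_against: str) -> bool:
        """Register an updated configuration only if it extends the current one."""
        if built_against != self.current:
            return False
        self.history.append(new_configuration)
        return True

contract = UpdateContract("ML.0")                          # operation 401: register device 102A
assert contract.register("ML.1", built_against="ML.0")     # entity 104A's update registered
assert contract.register("ML.2", built_against="ML.1")     # entity 104B's update registered next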
For a further example, operation417can include the distributed ledger module/logic performing the technique400causing the device102A to commit the second updated configuration to the device102A. Next, at operation419, a distributed ledger module/logic performing the technique400can register the device102A with its second updated configuration of the computer program. Similar to operations401and409, registering the device102A with its second updated configuration includes creating a block to record the second updated configuration of the computer program and committing the second updated configuration to the ledger103. Furthermore, the distributed ledger module/logic performing the technique400can, at operation421, inform the update entity104A that the update entity104B applied updates to the first updated configuration of the computer program installed on the device102A. In this way, other updates can be applied by the entity104A (if desired). FIG.5is a block diagram that illustrates a programmable device500, which may be used to implement the techniques described herein in accordance with one or more examples (e.g., network100and techniques200,300, and400). The programmable device500illustrated inFIG.5is a multiprocessor programmable device that includes a first processing element570and a second processing element580. While two processing elements570and580are shown, an example of programmable device500may also include only one such processing element. Programmable device500is illustrated as a point-to-point interconnect system, in which the first processing element570and second processing element580are coupled via a point-to-point interconnect550. Any or all of the interconnects illustrated inFIG.5may be implemented as a multi-drop bus rather than point-to-point interconnects. As illustrated inFIG.5, each of processing elements570and580may be multicore processors, including first and second processor cores (i.e., processor cores574A and574B and processor cores584A and584B). Such cores574A,574B,584A,584B may be configured to execute computing instruction code. However, other examples may use processing elements that are single core processors as desired. In examples with multiple processing elements570,580, each processing element may be implemented with different numbers of cores as desired. Each processing element570,580may include at least one shared cache546. The shared cache546A,546B may store data (e.g., computing instructions) that are utilized by one or more components of the processing element, such as the cores574A,574B and584A,584B, respectively. For example, the shared cache may locally cache data stored in a memory532,534for faster access by components of the processing elements570,580. For one or more examples, the shared cache546A,546B may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), or combinations thereof. The memory532,534may include software instructions representing distributed ledger logic/modules101A-N, which includes a distributed ledger103that is accessible by each of the processing elements570and580. Each of the logic/modules101A-N and the distributed ledger103is described above in connection with at leastFIG.1,2,3, or4. WhileFIG.5illustrates a programmable device with two processing elements570,580for clarity of the drawing, the scope of the present disclosure is not so limited and any number of processing elements may be present. 
Alternatively, one or more of processing elements570,580may be an element other than a processor, such as a graphics processing unit (GPU), a digital signal processing (DSP) unit, a field programmable gate array, or any other programmable processing element. Processing element580may be heterogeneous or asymmetric to processing element570. There may be a variety of differences between processing elements570,580in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst processing elements570,580. In some examples, the various processing elements570,580may reside in the same die package. First processing element570may further include memory controller logic (MC)572and point-to-point (P-P) interconnects576and578. Similarly, second processing element580may include a MC582and P-P interconnects586and588. As illustrated inFIG.5, MCs572and582couple processing elements570,580to respective memories, namely a memory532and a memory534, which may be portions of main memory locally attached to the respective processors. While MC logic572and582is illustrated as integrated into processing elements570,580, in some examples the memory controller logic may be discrete logic outside processing elements570,580rather than integrated therein. Processing element570and processing element580may be coupled to an I/O subsystem590via respective P-P interconnects576and586through links552and554. As illustrated inFIG.5, I/O subsystem590includes P-P interconnects594and598. Furthermore, I/O subsystem590includes an interface592to couple I/O subsystem590with a high performance graphics engine538. In one example, a bus (not shown) may be used to couple graphics engine538to I/O subsystem590. Alternatively, a point-to-point interconnect539may couple these components. In turn, I/O subsystem590may be coupled to a first link516via an interface596. In one example, first link516may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another I/O interconnect bus, although the scope of the present disclosure is not so limited. As illustrated inFIG.5, various I/O devices514,524may be coupled to first link516, along with a bridge518that may couple first link516to a second link510. In one example, second link510may be a low pin count (LPC) bus. Various devices may be coupled to second link510including, for example, a keyboard/mouse512, communication device(s)526(which may in turn be in communication with the computer network505), and a data storage unit528such as a disk drive or other mass storage device which may include code530, for one example. The code530may include instructions for performing examples of one or more of the techniques described above. Further, an audio I/O524may be coupled to second link510. Note that other examples are contemplated. For example, instead of the point-to-point architecture ofFIG.5, a system may implement a multi-drop bus or another such communication topology. Although links516and510are illustrated as busses inFIG.5, any desired type of link may be used. In addition, the elements ofFIG.5may alternatively be partitioned using more or fewer integrated chips than illustrated inFIG.5. FIG.6is a block diagram illustrating a programmable device600for use with techniques described herein according to another example. 
Certain aspects ofFIG.5have been omitted fromFIG.6in order to avoid obscuring other aspects ofFIG.6. FIG.6illustrates that processing elements670,680may include integrated memory and I/O control logic (“CL”)672and682, respectively. In some examples, the CL672,682may include memory controller logic (MC) such as that described above in connection withFIG.5. In addition, CL672,682may also include I/O control logic.FIG.6illustrates that not only may the memories632,634be coupled to the CL672,682, but also that I/O devices644may also be coupled to the control logic672,682. Legacy I/O devices615may be coupled to the I/O subsystem690by interface696. Each processing element670,680may include multiple processor cores, illustrated inFIG.6as processor cores674A,674B,684A and684B. As illustrated inFIG.6, I/O subsystem690includes point-to-point (P-P) interconnects694and698that connect to P-P interconnects676and686of the processing elements670and680with links652and654. Processing elements670and680may also be interconnected by link650and interconnects678and688, respectively. The memory632,634may include software instructions representing distributed ledger logic/modules101A-N, which includes a distributed ledger103that is accessible and/or executable by each of the processing elements670and680. Each of the logic/modules101A-N and the distributed ledger103is described above in connection with at leastFIG.1,2,3, or4. The programmable devices depicted inFIGS.5and6are schematic illustrations of examples of programmable devices that may be utilized to implement various examples discussed herein. Various components of the programmable devices depicted inFIGS.5and6may be combined in a system-on-a-chip (SoC) architecture. Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include a machine readable medium having stored thereon instructions that may be used to program a processing system or other device to perform the methods. The term “machine readable medium” used herein shall include any medium that is capable of storing or encoding a sequence of instructions for execution by the machine and that cause the machine to perform any one of the methods described herein. The term “machine readable medium” shall accordingly include, but not be limited to, tangible, non-transitory memories such as solid-state memories, optical and magnetic disks. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, and so on) as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action or produce a result. At least one example is disclosed and variations, combinations, and/or modifications of the example(s) and/or features of the example(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative examples that result from combining, integrating, and/or omitting features of the example(s) are also within the scope of the disclosure. 
Where numerical ranges or limitations are expressly stated, such express ranges or limitations may be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes, 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). The use of the term “about” means ±10% of the subsequent number, unless otherwise stated. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having may be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are example(s) of the present disclosure. The following examples pertain to further examples. Example 1 is a non-transitory machine readable medium storing instructions for managing updating of a programmable device that is part of a computer network comprised of multiple interconnected programmable devices, the instructions when executed by a machine cause the machine to: commit, to a distributed ledger, a first configuration of a computer program installed on a programmable device, wherein the programmable device is part of a computer network comprised of multiple interconnected programmable devices, and wherein the distributed ledger exists on at least two of the multiple interconnected programmable devices; receive a first request to apply a first update to the first configuration of the computer program and a second request to apply a second update to the first configuration of the computer program; receive a second configuration of the computer program, the second configuration being generated based on the first update and the first configuration of the computer program; commit the second configuration of the computer program to the distributed ledger; determine, based on the distributed ledger, that the second update cannot be applied to the first configuration of the computer program; receive a third configuration of the computer program in response to determining that the second update cannot be applied, the third configuration being generated based on the second update and the second configuration of the computer program; and commit the third configuration of the computer program to the distributed ledger. In Example 2, the subject matter of example 1 can optionally include that at least one of the first or second updates is received from a third party updating service. In Example 3, the subject matter of examples 1 or 2 can optionally include that the second request is received before receiving the second configuration of the computer program. In Example 4, the subject matter of examples 1, 2, or 3 can optionally include that the distributed ledger stores each configuration of the computer program in a separate block. In Example 5, the subject matter of examples 1, 2, 3, or 4 can optionally include that each of the first and second updates is a software bundle. 
In Example 6, the subject matter of examples 1, 2, 3, 4, or 5 can optionally include that each of the first and second updates is identified using a bundle version string. In Example 7, the subject matter of examples 1, 2, 3, 4, 5, or 6 can further comprise instructions that when executed by a machine cause the machine to: detect that the second configuration of the computer program should be rolled back; roll back the second configuration of the computer program to the first configuration of the computer program in response to detecting that the second configuration of the computer program should be rolled back; modify the first update in response to rolling back the second configuration of the computer program; receive a modified second configuration of the computer program that is based on the modified first update and the first configuration of the computer program; commit the modified second configuration of the computer program to the distributed ledger; receive a modified third configuration of the computer program that is based on the second update and the modified second configuration of the computer program; and commit the modified third configuration of the computer program to the distributed ledger. In Example 8, the subject matter of examples 1, 2, 3, 4, 5, 6, or 7 can optionally include that the instructions for causing the machine to modify the first update comprise instructions for causing the machine to: detect a flaw in the first update; and correct the flaw to generate the modified first update. Example 9 is a method of managing updating of a programmable device that is part of a computer network comprised of multiple interconnected programmable devices, the method comprising: committing, to a distributed ledger implemented by one or more processors of a programmable device, a first configuration of a computer program installed on a programmable device, wherein the programmable device is part of a computer network comprised of multiple interconnected programmable devices, and wherein the distributed ledger exists on at least two of the multiple interconnected programmable devices; receiving, by the one or more processors of the programmable device, a first request to apply a first update to the first configuration of the computer program and a second request to apply a second update to the first configuration of the computer program; receiving a second configuration of the computer program that is generated based on the first update and the first configuration of the computer program; committing, by the one or more processors of the programmable device, the second configuration of the computer program to the distributed ledger; determining, based on the distributed ledger, that the second update cannot be applied to the first configuration of the computer program; receiving a third configuration of the computer program in response to determining that the second update cannot be applied, the third configuration being generated based on the second update and the second configuration of the computer program; and committing, by the one or more processors of the programmable device, the third configuration of the computer program to the distributed ledger. In Example 10, the subject matter of example 9 can optionally include that at least one of the first or second updates is received from a third party updating service. 
In Example 11, the subject matter of examples 9 or 10 can optionally include that the second request is received before receiving the second configuration of the computer program. In Example 12, the subject matter of examples 9, 10, or 11 can optionally include that the distributed ledger stores each configuration of the computer program in a separate block. In Example 13, the subject matter of examples 9, 10, 11, or 12 can optionally include that each of the first and second updates is a software bundle. In Example 14, the subject matter of examples 9, 10, 11, 12, or 13 can optionally include that each of the first and second updates is identified using a bundle version string. In Example 15, the subject matter of examples 9, 10, 11, 12, 13, or 14 can optionally further comprise: detecting that the second configuration of the computer program should be rolled back; rolling back the second configuration of the computer program to the first configuration of the computer program in response to detecting that the second configuration of the computer program should be rolled back; modifying the first update in response to rolling back the second configuration of the computer program; receiving a modified second configuration of the computer program that is based on the modified first update and the first configuration of the computer program; committing the modified second configuration of the computer program to the distributed ledger; receiving a modified third configuration of the computer program that is based on the second update and the modified second configuration of the computer program; and committing the modified third configuration of the computer program to the distributed ledger. In Example 16, the subject matter of examples 9, 10, 11, 12, 13, 14, or 15 can optionally include that modifying the first update comprises: detecting a flaw in the first update; and correcting the flaw to generate the modified first update. 
Example 17 is a system for managing updating of a programmable device that is part of a computer network comprised of multiple interconnected programmable devices, the system comprising: one or more processors; and a memory coupled to the one or more processors and storing instructions, wherein execution of the instruction by the one or more processors causes the one or more processors to: commit, to a distributed ledger, a first configuration of a computer program installed on a programmable device, wherein the programmable device is part of a computer network comprised of multiple interconnected programmable devices, and wherein the distributed ledger exists on at least two of the multiple interconnected programmable devices; receive a first request to apply a first update to the first configuration of the computer program and a second request to apply a second update to the first configuration of the computer program; receive a second configuration of the computer program, the second configuration being generated based on the first update and the first configuration of the computer program; commit the second configuration of the computer program to the distributed ledger; determine, based on the distributed ledger, that the second update cannot be applied to the first configuration of the computer program; receive a third configuration of the computer program in response to determining that the second update cannot be applied, the third configuration being generated based on the second update and the second configuration of the computer program; and commit the third configuration of the computer program to the distributed ledger. In Example 18, the subject matter of example 17 can optionally include that at least one of the first or second updates is received from a third party updating service. In Example 19, the subject matter of examples 17 or 18 can optionally include that the second request is received before receiving the second configuration of the computer program. In Example 20, the subject matter of examples 17, 18, or 19 can optionally include that the distributed ledger stores each configuration of the computer program in a separate block. In Example 21, the subject matter of examples 17, 18, 19, or 20 can optionally include that each of the first and second updates is a software bundle. In Example 22, the subject matter of examples 17, 18, 19, 20, or 21 can optionally include that each of the first and second updates is identified using a bundle version string. 
In Example 23, the subject matter of examples 17, 18, 19, 20, 21, or 22 can optionally further comprise instructions that when executed by the one or more processors causes the one or more processors to: detect that the second configuration of the computer program should be rolled back; roll back the second configuration of the computer program to the first configuration of the computer program in response to detecting that the second configuration of the computer program should be rolled back; modify the first update in response to rolling back the second configuration of the computer program; receive a modified second configuration of the computer program that is based on the modified first update and the first configuration of the computer program; commit the modified second configuration of the computer program to the distributed ledger; receive a modified third configuration of the computer program that is based on the second update and the modified second configuration of the computer program; and commit the modified third configuration of the computer program to the distributed ledger. In Example 24, the subject matter of examples 17, 18, 19, 20, 21, 22, or 23 can optionally include that the instructions for causing the one or more processors to modify the first update comprise instructions for causing the one or more processors to: detect a flaw in the first update; and correct the flaw to generate the modified first update. In Example 25, the subject matter of examples 17, 18, 19, 20, 21, 22, 23, or 24 can optionally include that at least one of the one or more processors is a cryptoprocessor. In Example 26, the subject matter of examples 1, 2, 3, 4, 5, 6, 7, or 8 can optionally include that at least one of the one or more processors of the programmable device is a cryptoprocessor. In Example 27, the subject matter of examples 9, 10, 11, 12, 13, 14, 15, or 16 can optionally include that at least one of the one or more processors of the programmable device is a cryptoprocessor. It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described examples may be used in combination with each other. Many other examples will be apparent to those of skill in the art upon reviewing the above description. The scope of the disclosure therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In this document, reference has been made to blockchain technologies, such as ethereum and bitcoin. ETHEREUM may be a trademark of the Ethereum Foundation (Stiftung Ethereum). BITCOIN may be a trademark of the Bitcoin Foundation. These and any other marks referenced herein may be common law or registered trademarks of third parties affiliated or unaffiliated with the applicant or the assignee. Use of these marks is by way of example and shall not be construed as descriptive or to limit the scope of the examples described herein to material associated only with such marks.
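For illustration only, the update sequencing recited in Example 1 can be sketched in a few lines of Python. The Ledger class, the apply_update helper, and the dictionary-based configurations below are assumptions made for this sketch; they are not part of the claims or of any described implementation.

```python
# Hypothetical sketch of the update sequencing recited in Example 1.
# "Ledger", "apply_update", and the dictionary configurations are invented
# for illustration; they are not the claimed implementation.

class Ledger:
    def __init__(self):
        self.blocks = []                      # each configuration goes in its own block

    def commit(self, configuration):
        self.blocks.append(configuration)     # append-only, like committing a ledger block

    def latest(self):
        return self.blocks[-1]

def apply_update(configuration, update):
    # Derive a new configuration from an existing configuration plus an update bundle.
    return {**configuration, **update,
            "based_on": configuration["version"],
            "version": configuration["version"] + 1}

ledger = Ledger()

first_config = {"version": 1, "package": "base"}
ledger.commit(first_config)                   # commit the first configuration

first_update = {"package": "base+patchA"}     # first requested update
second_update = {"package": "base+patchB"}    # second requested update

second_config = apply_update(first_config, first_update)
ledger.commit(second_config)                  # commit the second configuration

# The ledger now shows that the first configuration is no longer the head, so the
# second update cannot be applied to the first configuration ...
assert ledger.latest() is not first_config

# ... and is instead applied on top of the second configuration.
third_config = apply_update(second_config, second_update)
ledger.commit(third_config)                   # commit the third configuration
```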
83,775
11861344
DETAILED DESCRIPTION To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and thoroughly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some embodiments of the present invention rather than all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effects shall fall within the protection scope of the present invention. FIG.1is a schematic diagram of one application environment of a module upgrade method in a UAV system according to an embodiment of the present invention. The application environment includes: a UAV system100. The UAV system100includes a UAV10and a ground station20. When a module in the UAV system100is upgraded, the UAV10establishes a connection with the ground station20to upgrade the module to be upgraded in the UAV system100. In one embodiment, for the UAV10of the UAV system100, the UAV10includes a camera, a vision module, a holder module, four electric tuner modules, an intelligent battery module, an ultrasonic module, a flight control module, an airplane image transmission module, etc. For the ground station20of the UAV system100, the ground station20includes a remote control single-chip microcomputer, a ground image transmission module, a remote control panel, etc., which may be modules to be upgraded. In one embodiment, the camera module, the airplane image transmission module, the ground image transmission module and the remote control panel are all provided with own storage units, which are used to store upgrade files for upgrading the module to be upgraded in the UAV system100. In the embodiments of the present application, the UAV10may be a fixed-wing UAV, a multi-rotor UAV, etc. Here, the UAV may be referred to as an unmanned aerial vehicle. In some other embodiments, it may also be other aerial vehicles, such as an unmanned spacecraft. The ground station20may be any suitable device having a remote control function, such as a remote control. In order to cause the UAV system100to better meet the requirements of users or improve the stability of the UAV system100, the module to be upgraded in the UAV system100generally needs to be upgraded so as to optimize the functions of the UAV system100and enable users to have better experience. Because the modules in the UAV system are numerous, the current upgrade mode cannot guarantee the success rate of the module to be upgraded. In combination with the application scenario, the embodiments of the present invention mainly aim to provide a module upgrade method in a UAV system, which can improve the success rate of the module to be upgraded in the UAV system100. The embodiments of the present invention are further described below with reference to the accompanying drawings. A module upgrade method in a UAV system provided by the embodiments of the present invention is applied to a module to be upgraded in the UAV system, such as a UAV system as shown inFIG.3. The module to be upgraded in the UAV system may be implemented by combining software with hardware. Here, the module to be upgraded may also be understood as an independent functional system. 
The UAV system includes a UAV and a ground station, and the module to be upgraded may be a module in the UAV or a module in the ground station. According to the technical solution of this embodiment, firstly, an upgrade file of a module to be upgraded is acquired; then, the module to be upgraded is upgraded according to the upgrade file; then, it is judged whether the module to be upgraded is successfully upgraded; and if no, an upgrade file of the module to be upgraded is re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file until finishing upgrading the module to be upgraded. In this way, the upgrade success rate of the module to be upgraded can be improved by multiple upgrades. Meanwhile, in this embodiment, by recording an upgrade result of the module to be upgraded, the upgrade result of the module to be upgraded can be accurately checked. The following describes technical solutions of the present invention in detail with reference to specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. FIG.2is a flowchart of a module upgrade method in a UAV system according to Embodiment 1 of the present invention. As shown inFIG.2, the method in this embodiment may include the following steps. In S101, an upgrade file of a module to be upgraded is acquired. The upgrade file is typically stored in a part of a storage unit of the module to be upgraded. In S102, the module to be upgraded is upgraded according to the upgrade file. The executive body of this embodiment is a software upgrade apparatus with a software upgrade function. The software upgrade apparatus may be a separate device. At this moment, the software upgrade apparatus is communicatively connected to the module to be upgraded. Alternatively, the software upgrade apparatus of this embodiment may be a part of the module to be upgraded, such as a central processing unit (CPU) in the module to be upgraded. FIG.3is an application scenario diagram of a module upgrade method in a UAV system according to Embodiment 1 of the present invention. The UAV system as shown inFIG.3includes a plurality of modules to be upgraded. For example, an airplane side includes a camera, a holder module, a vision module, four electric tuner modules, an intelligent battery module, an ultrasonic module, a flight control module, an airplane end image transmission module, etc. A ground side includes a remote control single-chip microcomputer, a ground image transmission module, a remote control panel, etc. In the UAV system as shown inFIG.3, the modules to be upgraded include two main types: modules to be upgraded with storage units, such as a remote control panel; and modules to be upgraded without storage units, such as a flight control module. An upgrade file of the module to be upgraded with the storage unit is stored in the storage unit. For example, the upgrade file of the remote control panel is stored in the storage unit of the remote control panel. In this way, the remote control panel may directly read the upgrade file of the remote control panel from the own storage unit. An upgrade file of the module to be upgraded without the storage unit may be stored in the module to be upgraded with the storage unit, which is in serial connection with the module to be upgraded. 
For example, as shown inFIG.3, the flight control module is connected to the airplane image transmission module through a serial port2, and the upgrade file of the flight control module may be stored in the storage unit of the airplane image transmission module. In this way, the flight control module may read the own upgrade file from the storage unit of the airplane image transmission module through the serial port2. Alternatively, the upgrade file of this embodiment may be uploaded by a terminal device. For example, as shown inFIG.3, the terminal device is connected to a ground image transmission device in the UAV system. A file of each module to be upgraded is sent to each module to be upgraded through the ground image transmission device and a communication link between various devices in the UAV system. Specifically, for the module to be upgraded with the storage unit, the upgrade file may be directly stored in the storage unit of the module to be upgraded, and for the module to be upgraded without the storage unit, the upgrade file may be stored in other modules to be upgraded with the storage units, which are in serial connection with the module to be upgraded. In this embodiment, the software upgrade processes of modules to be upgraded are consistent, and one module to be upgraded is taken as an example for description in this embodiment, and other modules to be upgraded may be referred to the description. In this step, an upgrade file of the module to be upgraded is firstly acquired, and the module to be upgraded is then upgraded according to the upgrade file. For example, if the upgrade file is a full upgrade file, a previous upgrade file of the module to be upgraded is completely replaced with the upgrade file. If the upgrade file only includes a patch file, the patch file is added to the module to be upgraded for supplementing the previous upgrade file of the module to be upgraded. The software upgrade of a module to be upgraded according to an upgrade file is a common technical means in the art and will not be repeated here. In a possible implementation of this embodiment, the module to be upgraded in this embodiment includes an App area, and upgrading the module to be upgraded in this step may specifically include refreshing the upgrade file to the App area of the module to be upgraded to finish upgrading of the module to be upgraded. In S103, it is judged whether the module to be upgraded is successfully upgraded. According to the foregoing steps, after the module to be upgraded is upgraded, an upgrade result of the module to be upgraded needs to be judged. In one example, judging whether the module to be upgraded is successfully upgraded may be: checking upgrade data in the module to be upgraded, if the checking is successful, determining that the module to be upgraded is successfully upgraded, and if the checking is unsuccessful, determining that the module to be upgraded is unsuccessfully upgraded. In another example, judging whether the module to be upgraded is successfully upgraded may be: re-booting the upgraded module to be upgraded, if the re-boot is successful, determining that the module to be upgraded is successfully upgraded, and if the re-boot is unsuccessful, determining that the module to be upgraded is unsuccessfully upgraded. In S104, if no, an upgrade file of the module to be upgraded is re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file until finishing upgrading the module to be upgraded. 
In this embodiment, according to the foregoing steps, if it is determined that the module to be upgraded is unsuccessfully upgraded, an upgrade file of the module to be upgraded is re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file. Next, S103 is re-executed to judge whether the module to be upgraded is successfully upgraded; if no, an upgrade file is again re-acquired, and the module to be upgraded is re-upgraded according to the re-acquired upgrade file until upgrading of the module to be upgraded is finished. In this way, the upgrade success rate of the module to be upgraded can be improved by the multiple upgrades. In a possible implementation of this embodiment, S104, in which upgrading of the module to be upgraded is finished, may include the following steps. When an upgrade count of the module to be upgraded is smaller than a first preset threshold and the module to be upgraded is successfully upgraded, upgrading is finished. For example, if the first preset threshold is n, and the upgrade count of the module to be upgraded is smaller than n when the module to be upgraded is successfully upgraded, upgrading of the module to be upgraded may be finished. Alternatively, when the upgrade count of the module to be upgraded is greater than or equal to the first preset threshold, upgrading of the module to be upgraded is finished. For example, when the upgrade count of the module to be upgraded is greater than or equal to n, the module to be upgraded may not have been successfully upgraded, but at this moment, in order to prevent the upgrade from repeating indefinitely, upgrading of the module to be upgraded is stopped. According to the module upgrade method in a UAV system provided by the embodiments of the present invention, an upgrade file of the module to be upgraded is acquired; the module to be upgraded is upgraded according to the upgrade file; it is judged whether the module to be upgraded is successfully upgraded; and if no, an upgrade file of the module to be upgraded is re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file until upgrading of the module to be upgraded is finished. That is, in this embodiment, the upgrade success rate of the module to be upgraded can be improved by multiple upgrades, and the upgrade method is simple, convenient, easy to implement, and highly reliable. FIG. 4 is a flowchart of a module upgrade method in a UAV system according to Embodiment 2 of the present invention. On the basis of the foregoing embodiment, as shown in FIG. 4, S103 of judging whether the module to be upgraded is successfully upgraded may include the following steps. In S201, it is judged whether upgrade data in the module to be upgraded is successfully checked. In this embodiment, checking upgrade data in a module to be upgraded may be: judging whether the upgrade data in the module to be upgraded matches the upgrade data in the upgrade file of the module to be upgraded; if yes, determining that the upgrade data in the module to be upgraded is successfully checked, and if no, determining that the upgrade data in the module to be upgraded is unsuccessfully checked. Specifically, upgrade data stored in the upgraded module to be upgraded is first acquired, and this upgrade data is recorded as first upgrade data for convenience of explanation. Meanwhile, upgrade data in the upgrade file of the module to be upgraded is acquired, and this upgrade data is recorded as second upgrade data.
The first upgrade data and the second upgrade data may have the size of corresponding upgrade data or a check code of the upgrade data, etc. Then, it is judged whether the first upgrade data is matched with the second upgrade data, and if yes, it is determined that the upgrade data in the module to be upgraded is successfully checked. If no, it is determined that the upgrade data in the module to be upgraded is unsuccessfully checked, an upgrade file of the module to be upgraded needs to be re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file. By way of example, it is assumed that the first upgrade data and the second upgrade data both have the size of upgrade data, the size of the second upgrade data being b. Therefore, after the module to be upgraded is upgraded for the first time according to the upgrade file, the size of the first upgrade data after the module to be upgraded is acquired to be a. It is judged that the size a of the first upgrade data is not matched with the size b of the second upgrade data, that is, a is not equal to b, and it may be determined that the first upgrading of the module to be upgraded is unsuccessful. Then, an upgrade file is re-acquired, the module to be upgraded is re-upgraded according to the re-acquired upgrade file, it is continuously judged whether the size a of the first upgrade data of the module to be upgraded after the second upgrade is equal to the size b of the second upgrade data, if no, the upgrade file is continuously re-acquired, and the module to be upgraded is upgraded again according to the re-acquired upgrade file until the size of data to be upgraded is equal to the size of the second upgrade data. In S202, if yes, it is judged whether the module to be upgraded is successfully re-booted. In S203, if yes, it is determined that the upgraded module is successfully upgraded. In order to accurately judge whether the module to be upgraded is successfully upgraded, if the upgrade data in the module to be upgraded is successfully checked, it is also necessary to judge whether the module to be upgraded may be re-booted. Specifically, after the module to be upgraded is upgraded, if it is judged that the upgrade data in the module to be upgraded is successfully checked, the module to be upgraded is re-booted. For example, a boot loader of the module to be upgraded is booted, and if the boot loader may be booted, it is determined that the module to be upgraded is successfully upgraded. If the boot loader is not successfully booted, it is determined that the module to be upgraded is unsuccessfully upgraded currently. At this moment, the module to be upgraded needs to be re-upgraded. Specifically, an upgrade file of the module to be upgraded is re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file. The foregoing steps are repeated until the module to be upgraded is successfully re-booted. According to the module upgrade method in a UAV system provided by the embodiments of the present invention, it is judged whether upgrade data in the module to be upgraded is successfully checked, if yes, it is judged whether the module to be upgraded is successfully re-booted, and if yes, it is determined that the module to be upgraded is successfully upgraded. In this way, through double judgment, the accurate judgment of the upgrade success of the module to be upgraded can be improved, and the upgrade reliability of the module to be upgraded can be further improved. 
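The bounded retry loop of S101-S104, combined with the double judgment of S201-S203, might be sketched as follows. This is a minimal illustration only, assuming invented helper callables (read_upgrade_file, refresh_app_area, read_back_app_area, reboot_module) and a size-plus-CRC comparison in place of the unspecified check code; none of these are the actual firmware interfaces of the UAV system.

```python
# Minimal sketch of the retry loop (S101-S104) together with the double
# judgment (S201-S203). The helper callables and the CRC-based check are
# assumptions for illustration, not the actual device operations.

import zlib

def data_check_passed(first_upgrade_data: bytes, second_upgrade_data: bytes) -> bool:
    # S201: the data written to the module (first upgrade data) must match the
    # data in the upgrade file (second upgrade data), e.g. by size and check code.
    return (len(first_upgrade_data) == len(second_upgrade_data)
            and zlib.crc32(first_upgrade_data) == zlib.crc32(second_upgrade_data))

def upgrade_module(read_upgrade_file, refresh_app_area, read_back_app_area,
                   reboot_module, first_preset_threshold: int) -> bool:
    upgrade_count = 0
    while upgrade_count < first_preset_threshold:
        upgrade_file = read_upgrade_file()            # S101 / S104: (re-)acquire the upgrade file
        refresh_app_area(upgrade_file)                # S102: refresh it to the App area
        upgrade_count += 1
        written = read_back_app_area()                # data now stored in the module
        if data_check_passed(written, upgrade_file):  # S201: check the upgrade data
            if reboot_module():                       # S202: try to boot the boot loader
                return True                           # S203: successfully upgraded
    return False                                      # first preset threshold reached; stop retrying
```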
FIG.5is a flowchart of a module upgrade method in a UAV system according to Embodiment 3 of the present invention. On the basis of the foregoing embodiment, as shown inFIG.5, if the module to be upgraded does not include a storage unit, the upgrade process of the module to be upgraded in this embodiment may include the following steps. In S301, an upgrade file of the module to be upgraded, which is sent by the previous module, is acquired through a serial interface. In this embodiment, the previous module refers to a module that includes a storage unit and is directly connected to the module to be upgraded through the serial interface. The module to be upgraded is communicatively connected to a previous module through a serial interface. The module to be upgraded does not include a storage unit. Therefore, the upgrade file of the module to be upgraded may be stored in the storage unit of the previous module that is in serial connection with the module to be upgraded. For example, as shown inFIG.3, the upgrade file of the flight control module may be stored in the storage unit of the airplane image transmission module, and the flight control module communicates with the airplane image transmission module through a serial port2. In the upgrade process of the flight control module, the upgrade file of the flight control module is read from the storage unit of the airplane image transmission module through the serial port2. In S302, the module to be upgraded is upgraded according to the upgrade file. Specifically, as shown in Table 1, a module to be upgraded without a storage unit includes two areas, namely an area where a boot loader is located and an App area, the area where the boot loader is located being marked as a boot loader area. TABLE 1Boot loader areaApp area With reference to the above example, the upgrade file of the flight control module received from the serial port2is refreshed to the App area of the flight control module. The detailed process thereof will be described with reference to the foregoing embodiments, and will not be repeated here. In S303, it is judged whether upgrade data in the module to be upgraded is successfully checked. If yes, S304is executed, and if no, the process returns to S301. With reference to the above example, the module to be upgraded adopts a flight control module. As shown inFIG.3, if upgrade data in the flight control module is not matched with the upgrade data in the upgrade file of the flight control module after the module to be upgraded is upgraded, it is determined that the upgrade data in the module to be upgraded is unsuccessfully checked. At this moment, the module to be upgraded continues to re-read the upgrade file of the flight control module again from the airplane image transmission module through the serial port2, and the App area of the flight control module is re-refreshed by using the re-read upgrade file until the upgrade data in the flight control module is successfully checked. In S304, it is judged whether the module to be upgraded is successfully re-booted. If yes, S305is executed, and if no, S301is executed. In practical application, when the upgrade file is in error, the boot loader of the module to be upgraded cannot be booted normally after the upgrade. 
Therefore, in order to avoid the problem in this embodiment, after the upgrade data in the module to be upgraded is successfully checked, and in order to further improve the accuracy of software upgrade detection, the module to be upgraded is re-booted in this embodiment. Specifically, the boot loader of the module to be upgraded is booted, and if the boot loader may be booted, it is determined that software of the module to be upgraded is successfully upgraded. If the boot loader is not successfully booted, it is determined that the module to be upgraded is unsuccessfully upgraded currently. At this moment, the module to be upgraded needs to be re-upgraded. Specifically, an upgrade file of the module to be upgraded is re-acquired, the re-acquired upgrade file is refreshed to the App area of the module to be upgraded, and then S303is re-executed until the module to be upgraded is successfully booted. In S305, upgrading is finished. According to the module upgrade method in a UAV system provided by the embodiments of the present invention, for a module to be upgraded which does not include a storage unit, an upgrade file of the module to be upgraded, which is sent by a previous module, is acquired through a serial port; it is judged whether the module to be upgraded is successfully upgraded; and if no, an upgrade file of the module to be upgraded is re-acquired, and the module to be upgraded is upgraded according to the re-acquired upgrade file until finishing upgrading the module to be upgraded. Thus, the detection accuracy of the module to be upgraded which does not include the storage unit is improved, thereby further improving the success rate of software upgrade. FIG.6is a flowchart of a module upgrade method in a UAV system according to Embodiment 4 of the present invention. On the basis of the foregoing embodiment, as shown inFIG.6, an upgrade process for a module to be upgraded which does not include a storage unit may further include the following steps. In S401, a serial communication link of a previous module with other modules except the module to be upgraded is closed. In this embodiment, an upgrade file of the module to be upgraded is sent after the previous module closes the serial communication link with other modules except the module to be upgraded. According to this embodiment, if the module to be upgraded acquires the upgrade file of the module to be upgraded from other modules through the serial port, in order to ensure that the serial port can efficiently and accurately send the upgrade file to the module to be upgraded, other communications except the upgrade file in the serial port are closed, and the upgrade success rate of the module to be upgraded is improved. For example, as shown inFIG.3, when the flight control module is upgraded, a serial communication link between the airplane image module and the flight control module is opened through the serial port2, and a serial communication link between the airplane image module and other modules (for example, the holder module and the vision module) is closed. Meanwhile, the module to be upgraded in this embodiment may also perform information transmission with other modules in the upgrade process, so that software to be upgraded in the module to be upgraded may be in a running state, and the software in the running state cannot be completely upgraded. 
At this moment, other communications except a file to be upgraded in the module to be upgraded needs to be closed so as to ensure the normal progress of the upgrade process and improve the success rate of upgrading. In S402, an upgrade file of the module to be upgraded, which is sent by the previous module, is acquired through a serial interface. In S403, the module to be upgraded is upgraded according to the upgrade file. In S404, it is judged whether upgrade data in the module to be upgraded is successfully checked. If yes, S407is executed, and if no, S405is executed. In S405, an upgrade count of the module to be upgraded is acquired. In S406, it is judged whether the upgrade count of the module to be upgraded is greater than or equal to a second preset threshold. If yes, S401is re-executed, and if no, S402is executed. That is, in this embodiment, when the upgrade count of the module to be upgraded is greater than or equal to a second preset threshold, an upgrade file of the module to be upgraded, which is sent after the previous module closes the serial communication link with the other modules, is re-acquired, the second preset threshold being smaller than the first preset threshold. Specifically, the upgrade count acquired by S405are compared with a second preset threshold, and if the upgrade count exceeds the second preset threshold, S401is executed, that is, other communications with modules except the module to be upgraded are closed, and subsequent steps such as S402are executed. If the upgrade count does not exceed the second preset threshold, S402is executed, that is, the module to be upgraded is upgraded according to the upgrade file of the module to be upgraded. The second preset threshold is set according to actual requirements and is smaller than the first preset threshold. In S407, it is judged whether the module to be upgraded is successfully re-booted. If yes, S408is executed, and if no, S405is executed. Specifically, if the upgrade data in the module to be upgraded is successfully checked, the boot loader of the module to be upgraded is booted, and if the boot loader is successfully booted, S408is executed for successful software upgrade. If the boot loader is unsuccessfully booted, S405is executed to acquire the upgrade count of the module to be upgraded. If the upgrade count exceeds the second preset threshold, S401is executed. If the upgrade count does not exceed the second preset threshold, S402is executed, that is, the module to be upgraded is upgraded according to the upgrade file of the module to be upgraded. In S408, upgrading is finished. According to the module upgrade method in a UAV system provided by this embodiment, it is determined, according to an upgrade count of a module to be upgraded, whether a serial communication link between a previous module and other modules is re-closed or the module to be upgraded is upgraded according to an upgrade file of the module to be upgraded, thereby further improving the accuracy and success rate of software upgrade of the module to be upgraded. FIG.7is a flowchart of a module upgrade method in a UAV system according to Embodiment 5 of the present invention. On the basis of the foregoing embodiment, as shown inFIG.7, if the module to be upgraded includes a storage unit, the upgrade process in this embodiment may include the following steps. In S501, an upgrade file of the module to be upgraded is acquired, and the upgrade file is stored into the storage unit of the module to be upgraded. 
In S502, the module to be upgraded is upgraded according to the upgrade file. In S503, it is judged whether the module to be upgraded is successfully upgraded. In S504, if no, an upgrade file of the module to be upgraded is re-acquired from the storage unit. Specifically, as shown in Table 2, a module to be upgraded with a storage unit includes three areas, namely an area where a boot loader is located, an App area and an area where the storage unit is located, the area where the boot loader is located being denoted as a boot loader area, and the storage unit storing an upgrade file. TABLE 2Boot loader areaApp areaStorage unit In an actual upgrade process, an upgrade file of the module to be upgraded is acquired, the upgrade file is stored into a storage unit of the module to be upgraded, the upgrade file of the module to be upgraded is read from the storage unit, and the upgrade file is refreshed to the App area of the module to be upgraded. Then, it is judged whether the module to be upgraded is successfully upgraded. The detailed process thereof will be described with reference to the above description. If it is judged that the module to be upgraded is unsuccessfully upgraded, an upgrade file of the module to be upgraded is re-acquired from the storage unit, and the module to be upgraded is upgraded according to the re-acquired upgrade file. The foregoing steps are repeated until finishing upgrading the module to be upgraded. In this embodiment, the upgrade file is read from the storage unit of the module to be upgraded for upgrading, the upgrade process is simple, and the upgrade speed is high. Alternatively, in this embodiment, the boot loader in the module to be upgraded may be controlled to read an upgrade file from the storage device, the boot loader may be controlled to read the upgrade data of the module to be upgraded from the App area, and an upgrade file of the module to be upgraded may be read from the storage unit to judge whether the two upgrade files are matched. According to the module upgrade method in a UAV system provided by the embodiments of the present invention, for a module to be upgraded which includes a storage unit, an upgrade file in the storage unit is refreshed to an App area of the module to be upgraded, upgrade data of the module to be upgraded is read from the App area, and an upgrade file of the module to be upgraded is read from the storage unit to judge whether the two upgrade files are matched. The entire software upgrade process is simple, the upgrade speed is high, and the module to be upgraded can be quickly upgraded. FIG.8is a flowchart of a module upgrade method in a UAV system according to Embodiment 6 of the present invention. On the basis of the foregoing embodiment, as shown inFIG.8, the method in this embodiment may include the following steps. In S601, upgrade information in an upgrade file of the module to be upgraded is stored, the upgrade information including upgrade version information of the module to be upgraded. Specifically, after the upgrade file of the module to be upgraded is obtained, upgrade information in the upgrade file such as upgrade version information of the upgrade module is stored. In S602, version information of the module to be upgraded is acquired after finishing upgrading the module to be upgraded. In S603, an upgrade state of the module to be upgraded is determined according to the version information and/or the upgrade version information. 
In this embodiment, after the module to be upgraded has been upgraded (possibly multiple times), the upgrade state of the module to be upgraded also needs to be detected. Specifically, according to the foregoing steps, the upgrade version information and the current version information of the module to be upgraded are obtained, and the upgrade state of the module to be upgraded is determined according to the upgrade version information and/or the version information of the module to be upgraded, so that the upgrade state of the module to be upgraded can be accurately determined. The upgrade state of the module to be upgraded may include: non-upgrade, successful upgrade and unsuccessful upgrade. In a possible implementation of this embodiment, S603 may include the following steps. When the version information is not acquired, it is determined that the module to be upgraded is in a non-upgraded state. When the version information is acquired and the version information is the same as the upgrade version information, it is determined that the module to be upgraded is in a successfully upgraded state. When the version information is acquired and the version information is different from the upgrade version information, it is determined that the module to be upgraded is in an unsuccessfully upgraded state. Specifically, the module to be upgraded is booted. If the module to be upgraded is successfully booted, version information of the module to be upgraded is acquired. If the version information of the module to be upgraded is not acquired at this moment, it may be determined that the module to be upgraded is in a non-upgraded state. If the version information of the module to be upgraded can be obtained, the version information of the module to be upgraded is matched against the upgrade version information of the module to be upgraded. If the version information of the module to be upgraded is the same as the upgrade version information, it is determined that the module to be upgraded is in a successfully upgraded state. If the version information of the module to be upgraded is different from the upgrade version information, it is determined that the module to be upgraded is in an unsuccessfully upgraded state. Finally, the upgrade state of the module to be upgraded is stored into the storage unit of the module to be upgraded. For example, the upgrade result is stored into a section of Flash of the UAV system, and this section of Flash cannot be refreshed. Alternatively, during the above storage process, an identifier of the module to be upgraded, a version number of the module to be upgraded, and the upgrade state of the module to be upgraded are also stored. For example, an upgrade state identifier is represented by Un, where Un=0 represents non-upgrade, Un=1 represents successful upgrade, and Un=2 represents unsuccessful upgrade. With continued reference to FIG. 3, according to the foregoing method, the upgrade result of each module to be upgraded in the UAV system may be determined as shown in Table 3:

TABLE 3
Camera | Version number | Upgrade state identifier | . . . | Flight control module | Version number | Upgrade state identifier | . . .
n = 0 | V11 | U1 | . . . | n = 7 | V17 | U2 | . . .

In this embodiment, through the foregoing method, the upgrade state of the module to be upgraded is detected, and the upgrade detection accuracy is further improved. Meanwhile, the upgrade state of the module to be upgraded is summarized and stored, so that the version information and the upgrade result of the module to be upgraded can be conveniently checked.
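A small sketch of the state determination of S601-S603, using the Un identifiers described above, may make the stored record more concrete. The function and field names, and the dictionary record layout, are illustrative assumptions; they are not the actual Flash format of the UAV system.

```python
# Sketch of the state determination of S601-S603 and of one stored upgrade
# record in the style of Table 3. Names and layout are illustrative only.

NON_UPGRADE, SUCCESSFUL_UPGRADE, UNSUCCESSFUL_UPGRADE = 0, 1, 2   # Un = 0 / 1 / 2

def determine_upgrade_state(version_info, upgrade_version_info):
    if version_info is None:                     # version information not acquired
        return NON_UPGRADE
    if version_info == upgrade_version_info:     # matches the version in the upgrade file
        return SUCCESSFUL_UPGRADE
    return UNSUCCESSFUL_UPGRADE

def make_upgrade_record(module_id, version_info, upgrade_version_info):
    # One entry of the summary written to the reserved section of Flash.
    return {"module": module_id,
            "version number": version_info,
            "upgrade state identifier": determine_upgrade_state(version_info,
                                                                upgrade_version_info)}

# e.g. make_upgrade_record("flight control module (n = 7)", "V17", "V17")
# yields an upgrade state identifier of 1 (Un = 1, successful upgrade).
```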
According to the module upgrade method in a UAV system, upgrade information in an upgrade file of the module to be upgraded is stored, and when finishing upgrading the module to be upgraded, version information of the module to be upgraded is acquired; and according to the version information and/or the upgrade version information, the upgrade state of the module to be upgraded is determined, and the software upgrade of each module to be upgraded is further accurately checked. FIG.9is a schematic structure diagram of a module to be upgraded in a UAV system according to Embodiment 1 of the present invention. As shown inFIG.9, a module to be upgraded100in this embodiment may include:an acquisition unit110, configured to acquire an upgrade file of the module to be upgraded;an upgrade unit120, configured to upgrade the module to be upgraded according to the upgrade file; anda judgment unit130, configured to judge whether the module to be upgraded is successfully upgraded. The acquisition unit110is further configured to re-acquire an upgrade file of the module to be upgraded when the judgment unit120judges that the module to be upgraded is unsuccessfully upgraded. The upgrade unit120is further configured to upgrade the module to be upgraded according to the re-acquired upgrade file until finishing upgrading the module to be upgraded. The module to be upgraded in this embodiment of the present invention may be configured to execute the technical solution in the foregoing method embodiment. An implementation principle and a technical effect thereof are similar. Details are not described herein again. In a possible implementation of this embodiment, the judgment unit130is specifically configured to judge whether upgrade data in the module to be upgraded is successfully checked, judge whether the module to be upgraded is successfully re-booted if the upgrade data in the module to be upgraded is successfully checked, and determine that the module to be upgraded is successfully upgraded if the module to be upgraded is successfully re-booted. In another possible implementation of this embodiment, finishing upgrading the module to be upgraded includes:when an upgrade count of the module to be upgraded is smaller than a first preset threshold, successfully upgrading the module to be upgraded, and finishing upgrading; or,when the upgrade count of the module to be upgraded is greater than or equal to the first preset threshold, finishing upgrading the module to be upgraded. In another possible implementation of this embodiment, the module to be upgraded is communicatively connected to a previous module through a serial interface. The acquisition unit110is specifically configured to acquire an upgrade file of the module to be upgraded, which is sent by the previous module through the serial interface. In another possible implementation of this embodiment, the upgrade file of the module to be upgraded is sent after the previous module closes a serial communication link with other modules except the module to be upgraded. In another possible implementation of this embodiment, the acquisition unit110is further specifically configured to re-acquire, when an upgrade count of the module to be upgraded is greater than or equal to a second preset threshold, an upgrade file of the module to be upgraded, which is sent after the previous module closes the serial communication link with the other modules, the second preset threshold being smaller than the first preset threshold. 
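The re-acquisition behavior tied to the second preset threshold (Embodiment 4 and the acquisition unit described above) might be sketched as follows. This is an illustrative sketch only: the callables are placeholders for the serial-port behavior of the previous module rather than a real driver interface, and bounding the loop with the first preset threshold from Embodiment 1 is an assumption made to keep the example self-contained.

```python
# Sketch of the escalation logic of S401-S408 for a module without a storage
# unit. close_other_serial_links() and read_file_over_serial() are placeholder
# callables, not an actual serial driver API.

def upgrade_without_storage(read_file_over_serial, close_other_serial_links,
                            refresh_app_area, upgrade_succeeded,
                            first_preset_threshold: int,
                            second_preset_threshold: int) -> bool:
    assert second_preset_threshold < first_preset_threshold
    upgrade_count = 0
    close_other_serial_links()                  # S401: keep the serial link quiet for the transfer
    while upgrade_count < first_preset_threshold:
        upgrade_file = read_file_over_serial()  # S402: file sent by the previous module
        refresh_app_area(upgrade_file)          # S403: refresh the App area
        upgrade_count += 1
        if upgrade_succeeded():                 # S404 + S407: data check, then re-boot
            return True                         # S408: upgrading is finished
        if upgrade_count >= second_preset_threshold:
            close_other_serial_links()          # S405/S406 -> S401: re-close the other links
    return False                                # give up once the first preset threshold is reached
```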
In another possible implementation of this embodiment, the module to be upgraded includes a storage unit. The acquisition unit110is specifically configured to acquire an upgrade file of the module to be upgraded, and store the upgrade file into the storage unit of the module to be upgraded. In another possible implementation of this embodiment, the acquisition unit110is further specifically configured to re-acquire an upgrade file of the module to be upgraded from the storage unit. FIG.10is a schematic structure diagram of a module to be upgraded in a UAV system according to Embodiment 2 of the present invention. On the basis of the foregoing embodiment, as shown inFIG.10, the module to be upgraded100in this embodiment may further include:a storage unit140, configured to store upgrade information in an upgrade file of the module to be upgraded, the upgrade information including upgrade version information of the module to be upgraded,the acquisition unit110being further configured to acquire version information of the module to be upgraded after finishing upgrading the module to be upgraded; anda determination unit150, configured to determine an upgrade state of the module to be upgraded according to the version information and/or the upgrade version information. In another possible implementation of this embodiment, the determination unit150is specifically configured to: determine, when the version information is not acquired, that the module to be upgraded is in a non-upgraded state; determine, when the version information is acquired and the version information is the same as the upgrade version information, that the module to be upgraded is in a successfully upgraded state; and determine, when the version information is acquired and the version information is different from the upgrade version information, that the module to be upgraded is in an unsuccessfully upgraded state. The module to be upgraded of the embodiments of the present invention may be used to execute the technical solution of the method embodiment shown above, the implementation principle and the technical effect are similar, and detailed description is omitted. FIG.11is schematic structure diagram of a module to be upgraded according to an embodiment of the present invention. As shown inFIG.11, a module to be upgraded30in this embodiment includes:a memory31, configured to store a computer program; anda processor32, configured to execute the computer program to implement the module upgrade method in a UAV system. The implementation principle and the technical effect are similar, and detailed description is omitted. Here, the memory10may be the same as the memory device in the module to be upgraded in the foregoing embodiment, or the memory10may be independent of the foregoing storage device, which will not be limited here. Further, when at least part of the functions of the module upgrade method in a UAV system in the embodiments of the present invention are implemented by software, the embodiments of the present invention also provide a computer storage medium. The computer storage medium is used to store a computer software instruction for upgrading the software. The computer software instruction, when run on a computer, causes the computer to perform various possible module upgrade methods in a UAV system in the foregoing method embodiment. 
The processes or functions described in accordance with the embodiments of the present invention may be generated, in whole or in part, when the computer-executed instruction is loaded and executed on the computer. The computer instruction may be stored in the computer storage medium or transmitted from one computer storage medium to another computer storage medium. The transmission may be wireless (e.g., cellular communication, infrared, short-range wireless, microwave, etc.) transmission to another website site, computer, server or data center. The computer readable storage medium may be any available medium capable of being accessed by a computer or include one or more data storage devices integrated by an available medium, such as a server and a data center. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium, or the like. Finally, it is to be noted that the foregoing embodiments are merely intended for describing the technical solutions of the present invention, but not for limiting the present invention. Although the present invention is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all technical features thereof, without making the essence of the corresponding technical solutions departing from the scope of the technical solutions of the embodiments of the present invention.
43,323
11861345
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed. DETAILED DESCRIPTION In order to facilitate a wide range of options of network configurations, network devices such as switches need to be configured to rapidly deploy updates or changes to network configurations. Such updates or changes include firmware or software updates to an existing network device, or rolling back previous updates or changes. Some examples of network configurations include command line interface (CLI), layer 2 (L2), and layer 3 (L3) configurations. One particular example of a configuration change may include increasing a range of options to suspend network access to certain client devices that are exceeding a threshold amount of data traffic, or increasing security requirements for certain client devices to obtain network access. A grammar file of the network devices may include commands available in a network configuration and thus encapsulate the features and/or capabilities of a network configuration. When a network configuration is updated or changed, the changes are reflected in the grammar file. A server may control operations of the network devices, including receiving, interpreting, and implementing commands. In order to enable full deployment of the firmware and/or software updates, the server may also need to recognize the changes or updates to an existing network device, as reflected in the updated grammar file. Thus, to carry out the configuration changes, the server may need to obtain the updated grammar file. If the server does not have the updated grammar file, the added features of the firmware and/or software updates may not be interpreted by the server and may not be accessed. A current problem is that the process of updating a grammar file at the server is done manually. A human operator manually inputs the updated grammar file at the server. Currently, the process may be tedious and require days. Not only does this requirement of manually updating consume human resources, but it also delays configuration changes to a network. This problem may be especially pressing if a firmware update includes a security update to address a security loophole, and may result in compromising data security at a network. Embodiments described herein address these technical problems by obviating the need to manually update a grammar file at a server, which may include a remote server such as a cloud server. Thus, the server may automatically obtain or generate a grammar file of a network device without requiring human input. The server may obtain or extract grammar of a network device and generate a grammar file, along with secondary or auxiliary grammar files, in conformance with the obtained grammar. In some embodiments, as will be described with respect to the FIGS. below, the server may obtain grammar information from a network device such as a switch. The server may generate a grammar file in conformance with the grammar information along with secondary or auxiliary grammar files which may include a mask file and a patch file. The mask file may include metadata regarding ordering and interpreting contexts of commands to expedite processing. The patch file may include rules to address edge cases, such as, if a particular command is not to appear or to appear differently in a running configuration. The running configuration may include a historical log of all commands requested or executed during a particular session. 
FIG.1Ais an exemplary illustration of a computing system110that generates or updates grammar files of network devices. The description below provides an overview of the process of generating or updating grammar files to network devices. Further details will be provided in subsequentFIGS.1B,2A-2B,3A-3B, and4A-4B. The computing system110may include a server. The server may be remote from a network site, such as a cloud server. The computing system110may include a computing component111, which may be implemented as the aforementioned server, and a database112. The computing component111may receive, from a network device120, information about a grammar used by the network device120. The grammar may be used to validate syntaxes of commands entered at the network device120, and/or suggest additional feasible commands. The network device120may include a storage121that stores parameters, configurations, and/or protocol of the network device120, which may include a grammar of the network device120. The storage121may be either internal to the network device120or external to the network device120, as shown inFIG.1A. The network device120may be connected to client devices such as client devices131,132, and133. Although only three client devices are shown, any suitable number and/or type of client devices may be connected to the network device120. In some embodiments, the computing component111may be notified or informed of a firmware and/or software update at the network device120. The firmware and/or software update at the network device120may result in the grammar of the network device120being updated accordingly. The computing component111may extract or obtain the information required to update or generate a base grammar file, and secondary or auxiliary grammar files from the storage121of the network device120using an API (Application Programming Interface) such as Representational State Transfer (REST). In particular, the computing component111may extract commands and/or protocols supported by the network device120, parameters that may be inputted into the commands or otherwise used in a configuration of the network device120, and particular syntactical rules of the commands and the parameters. For example, the computing component111may extract supported commands or protocols such as NTP (Network Time protocol), VSX (Virtual System Extension), or RADIUS (Remote Authentication Dial In User Service), tunneling protocols that connect separate networks, or foo commands, which define variable parameters or settings such as hostnames and port numbers. The computing component111may update or generate a base grammar file, and secondary or auxiliary grammar files, so that the computing component111may implement configuration or other changes as a result of the firmware and/or software update without requiring a manual update or input. In such a manner, the implementation of changes resulting from firmware and/or software changes at network devices may be greatly expedited and more efficient. In some embodiments, the base grammar file and the secondary or auxiliary grammar files may be generated in JSON (JavaScript Object Notation) format. 
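As a rough illustration of the extraction step, the computing component might pull the grammar information over a REST API and persist it as JSON along the following lines. The endpoint path, the field handling, and the use of the requests library are assumptions made for this sketch and do not reflect the network device's actual REST schema.

```python
# Hypothetical sketch of extracting grammar information from a network device
# over REST and writing it out as JSON. The URL path and payload structure are
# invented placeholders, not the device's real API.

import json
import requests

def fetch_grammar_data(device_address: str, session: requests.Session) -> dict:
    # Assumed endpoint exposing supported commands, parameters, and syntax rules.
    response = session.get(f"https://{device_address}/rest/v1/cli_grammar", timeout=30)
    response.raise_for_status()
    return response.json()

def write_grammar_file(grammar_data: dict, path: str) -> None:
    # Persist the extracted grammar in JSON, the format used for the generated files.
    with open(path, "w") as handle:
        json.dump(grammar_data, handle, indent=2)
```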
As shown inFIG.1B, the computing component111may include one or more hardware processors140and machine-readable storage media114storing a set of machine-readable/machine-executable instructions that, when executed, cause the hardware processor(s)140to generate, calibrate, and/or update one or more grammar files in accordance with grammar used by network devices such as the network device120. The computing component111also includes a database112that may include a repository of existing grammar files mapped to existing network devices and/or a stored log (e.g., running configuration) of commands150inputted at and/or executed by the network devices. The computing component111may first identify or discover a connected network device such as the network device120using a protocol such as LLDP (Link Layer Discovery Protocol). The computing component111may determine whether an existing grammar file is mapped to, or corresponds to, a network device that is newly discovered or one that is updated. If the computing component111determines that no existing grammar file exists in the repository, the computing component111generates the grammar file corresponding to the new or updated network device. In some embodiments, the computing component111may only generate a grammar file if the network device is of a specified type or falls within a specific range of network devices. For example, the computing component111may only generate a grammar file if the network device has a particular operation system (OS). If the network device is not of a specific type or does not fall within a specific range of network devices, the computing component111may not generate a grammar file for that network device and thus only monitor that network device without configuring that network device. The hardware processors140may include a grammar file generating engine141and a logging engine145. The grammar file generating engine141may generate a grammar file based on grammar stored, for example, in the storage121of a network device such as the network device120. The grammar file generating engine141may obtain separate components of the grammar file, which may include a base grammar, a mask, and a patch. The grammar file may be separated into different components or files in order to more effectively delegate processing tasks, which may enable the computing component111, or a subcomponent of the computing component111, to validate one particular aspect of grammar by searching or parsing through a particular file that is dedicated to that particular aspect. The base grammar may be generated by a base grammar generating engine142. The mask may be generated by a mask file generating engine143. The patch may be generated by a patch file generating engine144. As will be shown in the subsequentFIGS.2A-2B,3A-3B, and4A-4B, each of the components of the grammar file generated by the grammar file generating engine141may include a subset of grammar data obtained or extracted from a network device such as the network device120. A subset of data is to be construed as all or a portion of the data. In particular, the base grammar generating engine142, the mask file generating engine143, and the patch file generating engine144may apply modifications to the grammar obtained or extracted from a network device, such as particular formatting modifications that transform the grammar to a JSON (JavaScript Object Notation)-Schema. 
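Pulling the pieces of this passage together, the sketch below approximates the flow: skip a discovered device if a grammar file is already mapped to it or if its type is unsupported, and otherwise split the extracted grammar data into the base, mask, and patch components. The device fields, repository keying, and component names are assumptions, not the actual implementation.

SUPPORTED_OS_PREFIXES = ("ExampleOS",)   # hypothetical filter on device operating systems

def generate_grammar_file(device: dict, repository: dict, extractor) -> dict | None:
    """Return a newly generated grammar file for `device`, or None if none is needed.

    `extractor` is any callable returning raw grammar data for the device (for instance,
    a thin wrapper around the REST helper sketched earlier).
    """
    key = (device["model"], device["firmware_version"])   # assumed repository key
    if key in repository:
        return None                  # an existing grammar file already maps to this device
    if not device["os"].startswith(SUPPORTED_OS_PREFIXES):
        return None                  # monitor only; do not configure this device
    raw = extractor(device)
    grammar_file = {
        "base": raw.get("commands", {}),      # permitted commands, parameters, syntax
        "mask": raw.get("ordering", {}),      # ordering and context metadata
        "patch": raw.get("edge_cases", []),   # edge-case rules for the running configuration
    }
    repository[key] = grammar_file
    return grammar_file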
As an illustrative example, the base grammar generating engine142, the mask file generating engine143, and the patch file generating engine144may include descriptions of the commands, protocols, and/or parameters in the grammar as nonexecutable text. The logging engine145may store a log, such as a running configuration, of commands inputted at and/or executed by network devices such as the network device120. The patch may define how certain commands show up or are hidden in the log, which may be stored in the database112. InFIG.2A, the base grammar generating engine142may obtain grammar data201from the storage121of the network device120via an API such as a REST API202. The grammar data201may be raw data stored in the storage121of the network device120. The grammar data201may include commands and/or protocols supported by the network device120, parameters that may be inputted into the commands and/or protocols, and particular syntactical rules of the commands and/or protocols and the parameters. As illustrative examples, supported commands or protocols may include NTP (Network Time protocol), VSX (Virtual System Extension), or RADIUS (Remote Authentication Dial In User Service), tunneling protocols that connect separate networks, or foo commands, which define variable parameters or settings such as hostnames and port numbers. The base grammar generating engine142may generate a base grammar file211by reformatting and/or applying syntactical changes to transform a subset of the grammar data201into a JSON-schema format. For example, the base grammar generating engine142may include descriptions of the commands, protocols, and/or parameters in the grammar data201as nonexecutable text. The descriptions may appear on a user interface and elucidate functions of the commands, protocols, and/or parameters. The generated base grammar file211may include a list of permitted commands specific to a software and/or firmware version of the network device120. The computing component111may validate syntax of commands entered at the network device120by comparing the entered commands to the permitted commands in the generated base grammar file211. The computing component111may perform syntax highlighting using the generated base grammar file211in order to identify particular syntactic errors or nonconformities. The computing component111may further use the generated base grammar file211to generate a list of keywords, commands, and/or parameters available if the computing component111receives a keyword as an entry. For example, as shown inFIG.2B, if the computing component111receives an entry210of “show ip,” the computing component111may, from the base grammar file211, generate a list of further keywords, commands, and/or parameters that are valid command options to complete the entry210. In the example shown inFIG.2B, a list220of further keywords, commands, and/or parameters may include “Arp,” “Interface,” and “Ssh,” so that complete valid commands may include “show ip Arp,” “show ip Interface,” or “show ip Ssh.” The list220may further include a brief description of what each completed command entry does. InFIG.3A, the mask file generating engine143may obtain grammar data201from the storage121of the network device120via an API such as the REST API202. The mask file generating engine143may generate a mask311from a subset of the grammar data201and/or from a subset of the base grammar file211. In some embodiments, the mask311may include a file. 
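Before turning to the mask in more detail, the completion and validation behaviour of FIG. 2B can be pictured with a toy nested-dictionary stand-in for the base grammar file; the real JSON-schema layout is not reproduced here, so the structure below is illustrative only.

# Toy base grammar: nested keywords, with a short description at each leaf (illustrative only).
BASE_GRAMMAR = {
    "show": {
        "ip": {
            "arp": "Show ARP table entries",
            "interface": "Show IP interface status",
            "ssh": "Show SSH server settings",
        }
    }
}

def complete(entry: str, grammar: dict = BASE_GRAMMAR):
    """Return the valid next keywords (with descriptions) for a partial entry such as 'show ip'."""
    node = grammar
    for token in entry.lower().split():
        if not isinstance(node, dict) or token not in node:
            return {}              # the entry is not a valid prefix of any permitted command
        node = node[token]
    return node if isinstance(node, dict) else {}

def is_valid(entry: str, grammar: dict = BASE_GRAMMAR) -> bool:
    """A complete command is valid when every keyword is permitted and the entry ends at a leaf."""
    node = grammar
    for token in entry.lower().split():
        if not isinstance(node, dict) or token not in node:
            return False
        node = node[token]
    return not isinstance(node, dict)

# complete("show ip")      -> {"arp": ..., "interface": ..., "ssh": ...}
# is_valid("show ip arp")  -> True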
In some embodiments, the mask file generating engine143may parse the grammar data201and/or the base grammar file211simultaneously or in parallel with the generation of the mask311. For example, the mask file generating engine143may identify a relevant portion of the grammar data201and/or the base grammar file211to be used to derive, obtain, or generate the mask311. The mask file generating engine143may continue to parse the grammar data201and/or the base grammar file211while generating the mask311, thus enabling the generation of the mask311without interruption in the parsing of the grammar data201and/or the base grammar file211. Such parallel or simultaneous operations may be infeasible or impossible during a manual update of a grammar file, which would inevitably entail interruptions in parsing or generating operations. In particular, during a manual update to generate a mask, parsing of the grammar data201and/or the base grammar file211would be interrupted. In some embodiments, the mask311may include data or metadata that is not present in the base grammar file211. The mask311may include a condensed version of the base grammar file211and/or metadata regarding ordering and interpreting contexts of commands to expedite processing and ensure proper processing of commands. The metadata may include grammar rules to be used by the computing component111to recognize contexts of commands, entities, or objects. For example, using the mask311, the computing component may recognize an entry containing "1/1/1/" or an equivalent format as being in an interface context. By recognition of the context of commands, the computing component111may sort and/or reorder commands to correct dependencies between commands and/or add missing declarations, while confining available options to be compatible within that context. In order to generate the mask311, the computing component111may extract ordering data or metadata of CLI commands from existing network devices such as existing switches. The ordering data or metadata may include tokens, which identify permitted formats and/or contexts of repeatable commands (e.g., commands in a specific format and/or context that include one or more permitted variable components) that may be sorted based on comparators. For example, a token may define a series of numbers separated by forward slashes, such as "1/1/1" or "1/1/2" as being in an interface context. As another example, a token may identify an entry beginning with "VRF" to belong to a VRF (virtual routing and forwarding) context, in which command options may include "VRF management," "VRF default" or "VRF test." Comparators may include rules by which to sort such commands. For example, a number comparator may sort commands in an ascending order. An alphabetic comparator may sort commands alphabetically. Thus, if a series of command entries received by the computing component111does not conform to a particular order as defined by the ordering data or metadata, the computing component111may re-sort the command entries. In the example shown inFIG.3B, the computing component111may determine contexts of entities in a command script310. In particular, the computing component111may determine that the entities "interface vlan66," "vlan 66," and "router ospf," were referenced in lines2and5of the command script310, without being declared first. 
Here, the identifier “vlan66” has no space between “vlan” and “66” while the object “vlan 66” includes a space between “vlan” and “66” because the computing component111may remove the previously existing space between “vlan” and “66” to create a single identifier “vlan66.” The computing component111may append declarations to the aforementioned entities in lines1-3of an updated command script320to ensure that the commands in the updated command script320will run without exceptions. InFIG.4A, the patch file generating engine144may obtain grammar data201from the storage121of the network device120via an API such as the REST API202. The patch file generating engine144may generate a patch411from a subset of the grammar data201and/or from a subset of the base grammar file211. In some embodiments, the patch411may include a file. In some embodiments, the patch411may be generated simultaneously, or in parallel with, the generation of the mask311. Thus, the simultaneous or parallel generation of the patch411and the mask311transcends the capabilities of a manual update of a grammar file. In some embodiments, the patch file generating engine144may parse the grammar data201and/or the base grammar file211simultaneously or in parallel with the generation of the patch411. For example, the patch file generating engine144may identify a relevant portion of the grammar data201and/or the base grammar file211to be used to derive, obtain, or generate the patch411. The patch file generating engine144may continue to parse the grammar data201and/or the base grammar file211while generating the patch411, thus enabling the generation of the patch411without interruption in the parsing of the grammar data201and/or the base grammar file211. Such parallel or simultaneous operations may be infeasible or impossible during a manual update of a grammar file, which would inevitably entail interruptions in parsing or generating operations. In particular, during a manual update to generate a patch, parsing of the grammar data201and/or the base grammar file211would be interrupted. The patch411may include data that is not present in the base grammar file211or in the mask311. The patch411may include rules to address edge cases, such as, if a particular command is not to appear or to appear differently in a running configuration. The running configuration may include a historical log of all commands requested or executed during a particular session. As an illustrative example, inFIG.4B, a running configuration410may originally display an ip address connected to an interface. However, the patch411may include rules that specify that an ip address is not to be shown in, or is to be redacted from, the running configuration. Thus, using the rules from the patch411, the computing component111may update the running configuration410and generate an updated running configuration420that removes the ip address previously in line2of the running configuration410. Returning back toFIG.1B, the logging engine145may log the commands150into the database112in accordance with the generated or updated grammar files. For example, the grammar files, such as the patch411, may identify or define commands that are permitted at the network devices, contexts and ordering of the aforementioned commands, and/or other specific manners in which the commands should appear or not appear. In a particular implementation, the commands150may include global commands151, interface commands164, and/or sub-interface commands165. 
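The mask and patch behaviours of FIGS. 3A-3B and 4A-4B can be pictured together in the short sketch below: a token (here, a regular expression) assigns a command to a context, a comparator decides the sort order within that context, and a patch rule hides matching lines from the running configuration. The rule formats and patterns are assumptions for illustration.

import re

MASK_TOKENS = [
    {"context": "interface", "pattern": re.compile(r"\b\d+/\d+/\d+\b"), "comparator": "numeric"},
    {"context": "vrf",       "pattern": re.compile(r"\bvrf\s+\S+", re.I), "comparator": "alphabetical"},
]
PATCH_RULES = [
    {"pattern": re.compile(r"^\s*ip address\b"), "action": "hide"},   # e.g. redact IP addresses
]

def context_of(command: str):
    """Return the context and comparator a command belongs to according to the mask tokens, if any."""
    for token in MASK_TOKENS:
        if token["pattern"].search(command):
            return token["context"], token["comparator"]
    return None, None

def re_sort(commands: list) -> list:
    """Re-sort commands so that entries in a recognized context follow the mask's comparator."""
    def key(cmd):
        context, comparator = context_of(cmd)
        if comparator == "numeric":
            return (0, [int(n) for n in re.findall(r"\d+", cmd)])
        if comparator == "alphabetical":
            return (1, cmd.lower())
        return (2, cmd.lower())
    return sorted(commands, key=key)

def apply_patch(running_config: list) -> list:
    """Return the running configuration with lines hidden according to the patch rules."""
    return [line for line in running_config
            if not any(r["action"] == "hide" and r["pattern"].search(line) for r in PATCH_RULES)]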
The global commands151may include commands that affect or pertain to an entire network, which may include multiple network devices. The interface commands164may include commands that are specific to an operation of an interface. The sub-interface commands165may include commands that configure or modify a virtual interface created from an interface, such as a particular application running on a client device. Particular global commands151may include, without limitation, any of a connect command152that opens a terminal connection, a disable command153that turns off privileged commands, a disconnect command154that disconnects an existing network connection, an enable command155that turns on privileged commands which may include operating parameters, testing, and commands such as show, copy, and debug, an exit command156that exits from an execution mode, a logout command157which may be synonymous with the exit command156, a ping command158that sends echo messages to network devices, a resume command159that resumes an active network connection, a show command160that shows running system information, a telnet command161that opens a telnet connection, a terminal command162that sets terminal line parameters, and/or a trace command163that traces a route to a destination. Same or analogous commands may also be implemented as part of the interface commands164and the sub-interface commands165. In some embodiments, the database112may include a volatile or non-volatile storage. Thus, the stored log may be stored in the database112either temporarily or permanently. FIGS.5A-5Billustrate exemplary implementations of using the generated grammar files to change and roll back configurations on a network device, such as the network device120. For example, inFIG.5A, a configuration500is shown, in which the computing component111may accept and validate commands to disable an interface502, which may be connected to the client device133. The computing component111may validate the commands using the grammar files211,311, and/or411. Once the computing component111validates the commands, the computing component may implement the configuration500which disables the interface502. However, such a configuration change is reversible, as defined by the grammar files211,311, and/or411. Thus, inFIG.5B, the computing component111may receive commands to roll back or revert the configuration500to a previous configuration shown as a configuration550, in which the interface502is enabled. Each configuration change may be logged, for example, into the database112. Therefore, the rules in the grammar files211,311, and/or411in the framework of the computing component111enable flexibility in changing and rolling back of different configurations of network devices. FIG.6illustrates a computing component600that includes one or more hardware processors602and machine-readable storage media604storing a set of machine-readable/machine-executable instructions that, when executed, cause the hardware processor(s)602to perform an illustrative method of generating a grammar file corresponding to grammar data of a network device. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. The computing component600may be implemented as the computing component111ofFIGS.1A,1B,2A,3A,4A, and5A-5B. The computing component600may include a server. 
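Returning to the change-and-roll-back behaviour of FIGS. 5A-5B, it can be pictured as a small log of validated configuration changes from which a previous state can be restored; the class below and the interface name in the usage comments are purely illustrative.

class ConfigurationHistory:
    """Illustrative log of validated configuration changes supporting roll-back (cf. FIGS. 5A-5B)."""

    def __init__(self, initial_config: dict):
        self._versions = [dict(initial_config)]

    def apply(self, validated_change: dict) -> dict:
        # `validated_change` stands in for commands already validated against the grammar files.
        new_config = dict(self._versions[-1])
        new_config.update(validated_change)
        self._versions.append(new_config)
        return new_config

    def roll_back(self) -> dict:
        # Revert to the previous configuration, as when re-enabling the interface in FIG. 5B.
        if len(self._versions) > 1:
            self._versions.pop()
        return dict(self._versions[-1])

# history = ConfigurationHistory({"interface 1/1/3": "enabled"})
# history.apply({"interface 1/1/3": "disabled"})   # FIG. 5A: disable the interface
# history.roll_back()                              # FIG. 5B: the interface is enabled again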
The machine-readable storage media604may be implemented as the machine-readable storage media114ofFIG.1B, and may include suitable machine-readable storage media described inFIG.9. At step606, the hardware processor(s)602may execute machine-readable/machine-executable instructions stored in the machine-readable storage media604to extract grammar data from a network device, the grammar data being used to validate syntax of commands provided to the network device. Next, at step608, the hardware processor(s)602may determine that the computing component600lacks an existing grammar file corresponding to the network device, for example, in a database such as the database112of the computing component600. At step610, the hardware processor(s)602may generate a new grammar file including a base grammar file and secondary grammar files based on the extracted grammar data. At step612, the hardware processor(s)602may parse the base grammar file to extract segments of the base grammar file that may be relevant or pertinent to generation of secondary grammar files. At step614, the hardware processor(s)602may generate secondary grammar files based on the extracted segments, wherein the parsing is conducted in parallel with the generation of the secondary grammar files. FIG.7illustrates a computing component700that includes one or more hardware processors702and machine-readable storage media704storing a set of machine-readable/machine-executable instructions that, when executed, cause the hardware processor(s)702to perform an illustrative method of generating a grammar file corresponding to grammar data of a network device. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. The computing component700may be implemented as the computing component111ofFIGS.1A,1B,2A,3A,4A, and5A-5B. The computing component700may include a server. The machine-readable storage media704may be implemented as the machine-readable storage media114ofFIG.1B, and may include suitable machine-readable storage media described inFIG.9. At step706, the hardware processor(s)702may detect a presence of a network device. Next, at step708, the hardware processor(s)702may determine that the network device fails to map to any grammar files stored in a database, such as the database112, of or associated with the computing component700. At step710, the hardware processor(s)702may extract grammar data of the network device. At step712, the hardware processor(s)702may generate a new grammar file based on the extracted grammar data, the new grammar file being used to implement updates to the network device. FIG.8illustrates a computing component800that includes one or more hardware processors802and machine-readable storage media804storing a set of machine-readable/machine-executable instructions that, when executed, cause the hardware processor(s)802to perform an illustrative method of generating a grammar file corresponding to grammar data of a network device. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. The computing component800may be implemented as the computing component111ofFIGS.1A,1B,2A,3A,4A, and5A-5B. The computing component800may include a server. 
The machine-readable storage media804may be implemented as the machine-readable storage media114ofFIG.1B, and may include suitable machine-readable storage media described inFIG.9. At step806, the hardware processor(s)802may execute machine-readable/machine-executable instructions stored in the machine-readable storage media804to detect a presence of a network device. Next, at step808, the hardware processor(s)802may determine that the network device fails to map to any grammar files stored in a database associated with the computing component. At step810, the hardware processor(s)802may determine a grammar file that maps to a previous firmware version of the network device. At step812, the hardware processor(s)802may determine a difference between the grammar file mapping to the previous firmware version and a grammar file of the network device. At step814, the hardware processor(s)802may generate the new grammar file based on the grammar file mapping to the previous firmware version and the difference. FIG.9depicts a block diagram of an example computer system900in which various of the embodiments described herein may be implemented. The computer system900includes a bus902or other communication mechanism for communicating information, one or more hardware processors904coupled with bus902for processing information. Hardware processor(s)904may be, for example, one or more general purpose microprocessors. The computer system900also includes a main memory906, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus902for storing information and instructions to be executed by processor904. Main memory906also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor904. Such instructions, when stored in storage media accessible to processor904, render computer system900into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system900further includes a read only memory (ROM)908or other static storage device coupled to bus902for storing static information and instructions for processor904. A storage device910, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus902for storing information and instructions. The computer system900may be coupled via bus902to a display912, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device914, including alphanumeric and other keys, is coupled to bus902for communicating information and command selections to processor904. Another type of user input device is cursor control916, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor904and for controlling cursor movement on display912. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. The computing system900may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). 
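The difference-based generation of FIG. 8 can be sketched by treating a grammar file as a flat mapping from commands to their definitions; the diff format below is an assumption, since this description does not fix one.

def grammar_diff(previous: dict, current: dict) -> dict:
    """Compute the added, removed, and changed entries between two flat grammar mappings."""
    return {
        "added":   {k: current[k] for k in current.keys() - previous.keys()},
        "removed": sorted(previous.keys() - current.keys()),
        "changed": {k: current[k] for k in current.keys() & previous.keys()
                    if current[k] != previous[k]},
    }

def generate_from_previous(previous: dict, diff: dict) -> dict:
    """Generate the new grammar file from the previous-firmware file plus the difference."""
    new_file = {k: v for k, v in previous.items() if k not in set(diff["removed"])}
    new_file.update(diff["changed"])
    new_file.update(diff["added"])
    return new_file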
This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. In general, the word “component,” “system,” “engine,” “database,” data store,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The computer system900may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system900to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system900in response to processor(s)904executing one or more sequences of one or more instructions contained in main memory906. Such instructions may be read into main memory906from another storage medium, such as storage device910. Execution of the sequences of instructions contained in main memory906causes processor(s)904to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device910. Volatile media includes dynamic memory, such as main memory906. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. 
Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus902. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. The computer system900also includes a communication interface918coupled to bus902. Network interface918provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface918may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interface918may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or WAN component to communicated with a WAN). Wireless links may also be implemented. In any such implementation, network interface918sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet.” Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface918, which carry the digital data to and from computer system900, are example forms of transmission media. The computer system900can send messages and receive data, including program code, through the network(s), network link and communication interface918. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface918. The received code may be executed by processor904as it is received, and/or stored in storage device910, or other non-volatile storage for later execution. Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. 
The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computers processors, not only residing within a single machine, but deployed across a number of machines. As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system900. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
40,814
11861346
DETAILED DESCRIPTION The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical and operational changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. Present teachings may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits, programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a transitory or non-transitory storage medium such as a disk drive or computer-readable medium. It should be noted that methods disclosed herein can be implemented by a computer (e.g., a desktop computer, a tablet computer, a laptop computer), a game console, a handheld gaming device, a cellular phone, a smart phone, a smart television system, and so forth. The terms “application developer” and “software developer” or simply “developer” refer to one or more of the following: a software developer, a mobile application developer, a software engineer, a software owner, a mobile application owner, a software manager, a mobile application manager, a dialog system owner, and the like. Application developers develop and/or manage a Dialog System Engine and/or a Dialog System Interface. The term “Dialog System” refers to one or more of the following: a chat information system, a spoken dialog system, a conversational agent, a chatter robot, a chatterbot, a chatbot, a chat agent, a digital personal assistant, an automated online assistant, and the like. Each Dialog System includes “Dialog System Interface” and “Dialog System Engine.” Each of these elements can be customized by an application developer. The term “Dialog System Interface” refers to a computer-human interface, which is configured to acquire user inputs in the form of audio messages or text messages, and deliver dialog system responses to the users in the form of audio messages or displayable messages. In an example embodiment, a Dialog System Interface may be implemented as a widget employed to or integrated with a software application, a mobile application, a middleware application, a firmware application, a website, and web service, to provide a computer-human interface for acquiring user requests and delivering dialog system outputs to the users. The term “Dialog System Engine” refers to a software application, which is configured to process user inputs and generate responses thereto. In one example embodiment, a Dialog System Engine refers to a computer-enabled or processor-enabled system for supporting an associated Dialog System Interface by processing user requests and generating corresponding responses thereto. 
The term “plugins” refers to one or more of the following: software plugins, add-ons, software extensions, updates, upgrades or software codes for a Dialog System Engine. The term “plugins” is also referred herein to as “dialog system extension elements.” The present technology provides for a platform enabling creation of custom Dialog System Engines serving as backend services for Dialog System Interfaces. The platform may include an online platform (i.e., a platform that resides on a server or network node). The present technology also provides for an online marketplace, such as one implemented as a website or web service, for a plurality of dialog system extension elements including various plugins, add-ons, extensions, updates, or software codes for custom Dialog System applications and custom Dialog Systems. The online marketplace can be a part of or connected to the platform to enable a software developer to create custom Dialog Systems and enhance the functionality of Dialog Systems maintained by the platform. More particularly, by using the online marketplace, software developers can extend functionalities of dialog systems associated with the software developers by installing plugins available via the online marketplace and integrating these plugins into the dialog systems. Dialog System Interfaces can be implemented at least as a part of various software applications, mobile applications, middleware applications, firmware applications, websites, web services, and so forth. In other words, Dialog System Interfaces may be on a client side and may provide a computer-human interface configured to at least acquire user inputs and deliver dialog system outputs to the users. Dialog System Engines, on the other hand, support the Dialog System Interfaces by processing user inputs and generating corresponding responses thereto. Thus, the Dialog System Engine and the Dialog System Interface, when interacting with each other, form a Dialog System. One may refer to a Dialog System Interface running on or accessed from a client device as a “frontend” user interface, while a Dialog System Engine, which supports the operation of such Dialog System Interface, can be referred to as a “backend” service. In general, by selecting or purchasing a particular plugin at the marketplace, application developers may extend functionality of Dialog Systems developed by or belonging to the application developers and ultimately extend or alter functionality of software applications that use the Dialog Systems, as well as implement a particular function or a broad array of functions of the software applications that use the Dialog Systems. Once a plugin is selected or purchased by a developer, the plugin can be automatically integrated with a particular Dialog System Engine maintained by the platform. The plugin may have defined application programming interface signatures. Accordingly, when the Dialog System Interface receives user requests, the user requests may be processed using plugins associated with the Dialog System Engine. In other embodiments, user requests may be processed by internal modules of the Dialog System Engine, and if no “good” fulfillment can be found or no proper response can be generated, the user requests may be processed by the associated plugins. Therefore, this technology allows the application developers to enhance Dialog System functionalities without investing time in developing Dialog Systems having multiple Dialog System rules. 
The benefits of methods and system of the present disclosure can be evident from the following example. Assume a software developer needs to create a mobile application, such as a restaurant booking system, which integrates a Dialog System to allow users to make oral requests. The software developer may not have time or resources to create his own Dialog System, and thus the software developer may use an online platform to create a custom Dialog System specifically for his or her mobile application. The mobile application may include only a Dialog System Interface, which can accept user requests and deliver the user requests to the custom Dialog System Engine for processing, as well as receive responses from the custom Dialog System Engine and provide the responses to the users through a displayable or audio message. The custom Dialog System Engine may reside at the online platform (i.e., on a server or network node). Normally, when the custom Dialog System Engine processes a user request, the Dialog System Engine may generate a response to the user request and cause the Dialog System Interface to deliver the response to the user. In order for the Dialog System Engine to operate normally, the software developer may need to create or customize rules (in some embodiments, this task may require creating entities and intents which define dialog structures and fulfillment execution rules). However, this task can be time consuming in certain instances. Moreover, even if the Dialog System Engine is properly trained, there still can be functions that the Dialog System Engine may not able to fulfill. In these cases, the software developer may improve functionality and operability of the Dialog System Engine by installing plugins. The software developer may simply need to open the online marketplace and select one or more certain plugins the software developer wants to add to the Dialog System. For example, there may be a plugin, which includes dialog system rules with certain entities and intents related specifically to online booking systems. Alternatively, there can be a plugin allowing the Dialog System to process user requests in a foreign language. It shall be clear that there can be a number of various plugins for fulfilling different needs. The software developer can select or purchase plugins of interest at the marketplace in order to make the plugins of interest integrated with the particular Dialog System Engine of the software developer. Once the plugin is installed, the plugin can handle user requests or parts of the user requests so as to fulfill particular user needs. The plugins can be created by third party developers and can be purchased or provided on a free-of-charge basis depending on a particular implementation. Therefore, the present technology makes it very easy and fast for software developers to create custom Dialog Systems for a wide range of third party mobile applications or web services, while adding plugins to these Dialog Systems through the marketplace enhances Dialog System functionality. More specifically, the platform, according to various embodiments of the present disclosure, allows for software developers and engineers to create custom Dialog System Engines that may support frontend Dialog System Interfaces. 
For example, if a software developer wants to integrate Dialog System functionality into a mobile application as an additional feature, the developer can use the platform to create and deploy a custom Dialog System Engine and link the custom Dialog System Engine with the mobile application. The mobile application, in turn, may have only a Dialog System Interface. In this example, the Dialog System Interface can be activated by a user when the user interacts with the mobile application. The user can make inquiries to the Dialog System Interface in the form of voice inputs or text inputs. Upon receipt of a user inquiry, the Dialog System Interface can transfer the user inquiry with little or no pre-processing to the linked custom Dialog System Engine, which was previously created using the platform. The Dialog System Engine may process the received user inquiry, interpret the user inquiry, and generate a response to the user inquiry based on predetermined rules and settings. The response may then be delivered to the Dialog System Interface for further visual or audio presentation to the user. In some embodiments, the response may include a response text to be delivered to the user and/or metadata with instructions for the user device to perform an action (e.g., open a browser, access certain data online, run a particular application, etc.). In other embodiments, the response may include a callback Uniform Resource Identifier (URI) that the Dialog System Interface or user device may need to access to obtain a response text and/or metadata or perform an action on the device/app represented by the URI. In general, Dialog System Interfaces can be integrated or be an integral part of a wide range of software applications running on a client device, such as a personal computer (PC) or a cellular phone, or on a server so that the Dialog Systems become a part of a website or web service. Dialog Systems can be implemented on a server such that their functionalities can be accessible to Dialog System Interfaces over the Internet, cellular networks, or any other communications means. An online marketplace can be also implemented in “a cloud,” meaning it can run on a server and be available to software developers thorough a particular website or web interface. Referring now to the drawings,FIG.1shows a high-level block diagram of example system environment100suitable for practicing the present technologies. As shown onFIG.1, there is a platform110for creating and maintaining custom Dialog Systems Engines. To these ends, the platform110may include a platform interface112for creating custom Dialog System Engines and backend service114for maintaining and running custom Dialog System Engines120. The platform interface112may include a graphical user interface (GUI) embedded into a webpage and accessible by developers and/or engineers116via the Internet. In some other embodiments, however, the platform interface112may be implemented as a software application such as a downloadable software application or any other software, middleware, or firmware running on or accessible from an electronic device such as a computer. In the example shown inFIG.1, the platform interface112may be realized as a web accessible GUI as will be described below. For simplicity, this disclosure describes such embodiments where the platform interface112is a server-based solution so that it is accessible via the Internet. 
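Returning to the response formats described earlier in this passage (a response text, metadata instructing the device to perform an action, or a callback URI), client-side handling might look roughly like this; the field names and action format are assumptions for illustration.

def handle_engine_response(response: dict) -> None:
    """Illustrative client-side handling of the response variants described above."""
    if "callback_uri" in response:
        # The interface or device is expected to fetch the final text/metadata from this URI,
        # or to act on the device/app that the URI represents.
        print(f"Follow up at: {response['callback_uri']}")
    if "speech" in response:
        # Deliver the response text as an audio or displayable message.
        print(f"Say or display: {response['speech']}")
    for action in response.get("actions", []):
        # Hypothetical action format, e.g. {"type": "open_browser", "url": "https://example.com"}.
        print(f"Perform device action: {action}")

# handle_engine_response({"speech": "Your table is booked for 7 pm.",
#                         "actions": [{"type": "open_app", "app": "calendar"}]})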
Regardless of a particular implementation, the platform interface112may enable the developers and/or engineers116through a number of GUI tools to create one or more custom Dialog System Engines120. Still referencing toFIG.1, the backend service114of the platform110may be responsible for maintaining and running the custom Dialog System Engines120that are created, for example, by or with the help of the platform interface112. The backend service114may operate as a web service providing functionality to custom Dialog Systems by enabling Dialog System Interfaces130to interact with the custom Dialog System Engines120maintained at the backend service114of the platform110. As briefly discussed above, the Dialog System Interfaces130can be provided on a client side140associated with dialog system end users118. The Dialog System Interfaces130may be as simple as a GUI enabling the dialog system end users118to make inquiries, which may be then delivered to the backend service114for processing by the corresponding Dialog System Engines120, and to receive responses to the inquires generated by Dialog System Engines120. The Dialog System Interfaces130may be implemented as a stand-alone software application or the Dialog System Interfaces130can be an integral part of a software application, mobile application, web service, website, and the like. Still referencing toFIG.1, the client side140may refer to, but is not limited to, a user device, a terminal, a computing device (e.g., a laptop computer, a tablet computer, a desktop computer, a PC), a cellular phone, a smart phone, a gaming console, a remote control, a multimedia system, a smart television device, a set-top box, an infotainment system, an in-vehicle computing device, an informational kiosk, a robot, and so forth. In these embodiments, the Dialog System Interfaces130may be implemented as software, middleware, or firmware installed on such devices. In additional embodiments, the client side140may refer to a networked or online solution, such as a server, hosting service, web service, web site, cloud service, and so forth. For example, the Dialog System Interface130can be a widget or a GUI provided on one or more web pages enabling end users to make inquiries and get responses to the inquiries. This option may be suitable for those instances when a developer, for example, wants to integrate a Dialog System into a website of the developer to provide enhanced customer service. As can be seen inFIG.1, the interaction between the Dialog System Interfaces130and the corresponding Dialog System Engines120may be performed via a communications network150. The communications network150may include one or more of the Internet, intranet, cellular network, Local Area Network (LAN), Wide Area Network (WAN), IEEE 802.11 based network, and so forth. FIG.1also shows various third party web resources/web services160provided via one or more web servers. These third party web resources/web services160can provide information of various types to the Dialog System Engines120or the Dialog System Interfaces130as a part of a response to a user request. For example, the web resources/web services160may refer to email services, weather services, navigation services, and the like. 
Accordingly, if a user makes the inquiry “What is the weather like today?,” such information may be automatically acquired by the Dialog System Engine120from one or more third party web resources/web services160and then integrated into a dialog system response to be delivered to the dialog system end users118. Still referring toFIG.1, the example system environment100may include an online plugin marketplace, shown as a marketplace170, for maintaining a plurality of plugins. The marketplace170can be implemented on a server such that it can communicate with the platform110. In some embodiments, however, the marketplace170can be integrated with the platform110. The marketplace170may include a database172for storing plugins and respective metadata. The marketplace170may also include a marketplace interface174for enabling the software developers to review, select, purchase, and/or optionally customize selectable plugins. Metadata may accompany each plugin and include content associated therewith. For example, metadata may include one or more of the following: a description of plugins, example images, example audio messages, tags, developer comments, ranks, publisher information, payment information, statistical information (e.g., a number of downloads/installs), abuse report links/buttons, legal notices, hyperlinks to third party web resources, and so forth. The marketplace interface174may include a GUI embedded into a webpage and accessible by the developers via the Internet. In some other embodiments, however, the marketplace interface174may be implemented as a software application such as a downloadable software application or any other software, middleware, or firmware running on or accessible from an electronic device such as a computer. In the example shown inFIG.1, the marketplace interface174may be realized as a web accessible GUI. For simplicity, this disclosure describes such embodiments where the marketplace170is a server based solution so that it is accessible via the Internet. Regardless of a particular implementation, the marketplace interface174enables the developers, through a number of GUI tools, to select one or more plugins and associate them with their custom Dialog System Engines120. As mentioned above, plugins can be provided to software developers when purchased or on a free-of-charge basis. In an example embodiment, the application developers may need to make a one-time payment or subscribe to a plan requiring regular payments. Accordingly, the marketplace170may be enabled to make financial transactions using monetary or non-monetary funds. For example, the marketplace170may have a credit card processing agent, an Automated Clearing House agent, and the like. Subscription plans may require payments in amounts depending on a number of dialog system users, period during which a plugin is used (e.g., a periodic plan, such as a month-to-month subscription plan, a yearly plan), number of plugin copies, complexity, number of functions provided by the plugin, and so forth. Some plugins may be provided free of charge. In one example, plugins can be provided free of charge during a predetermined period (e.g., a test period of one month), but then may require a payment. In another example embodiment, plugins can relate to free of charge open source agents. These free of charge open source agents can be collectively developed by a plurality of developers. It should be noted that plugins can be provided by software developers or third party developers. 
For example, some plugins can be provided by an owner of the platform110and/or an owner of the marketplace170. In another example embodiment, plugins can be provided to the marketplace170by third party developers or companies. In yet another example embodiment, plugins can be provided to the marketplace170by software developers. If plugins are sold from marketplace170, the original owner of the plugins may be compensated by the marketplace170from the funds collected from purchasers. For instance, the owners of plugins sold can be compensated as a percentage of the funds collected at the purchase. According to some example embodiments, plugins can be shared among software developers. There may be several possible scenarios, including “knowledge sharing” and “black box sharing.” Under the “knowledge sharing” concept (also referred herein to as “white box” sharing), plugins may be shared by transferring definitions of entities and intents from one developer to another. This may be similar to sharing source code among developers so that all of the developers can contribute to a particular plugin. Under the “black box sharing” concept, developers on the consuming side may not have access to data, contents, entities, intents, and the like, and can use the plugin at runtime but not make any changes to the plugin. In order for software developers, third-party developers, or companies (collectively referred to as “plugin developers”) to sell and/or share plugins through the marketplace170, they may be required to register with the marketplace170and establish a user profile. In some embodiments, marketplace personnel may review each plugin submitted by a plugin developer before publishing. The review may be required to maintain high quality products and services for application developers. In yet more embodiments, plugin developers may be provided with a separate interface (different from the marketplace interface174), which may include statistical information associated with plugins of these developers, control modules, financial information, and so forth. Accordingly, the marketplace170can be referred to a multi-user web platform/web service allowing plugin developers to sell, distribute or share plugins or elements of the plugins, and allowing application developers to review, select, or purchase plugins of their interest, and integrate them with custom Dialog System Engines. The process of creating and operating custom Dialog System Engines120will now be described with reference toFIGS.1-3. In particular, the platform interface112may provide one or more GUIs having a number of tools enabling developers to create and customize one or more dialog system elements, which serve as a basis for a custom Dialog System Engine120. According to various embodiments, dialog system elements include entities and intents. Each entity may refer to a number of objects having the same or similar characteristics. In other words, entities may include lists of terms and/or keywords defining objects of one class. In one example embodiment, an entity may refer to a keyword and a set of its synonyms. In another example embodiment, an entity may refer to a keyword and a set of its definitions. In yet another example embodiment, an entity may refer to a list (e.g., a list of cities, list of names, list of titles, list of brands, list of street names, etc.). In some embodiments, each entity can have a title. 
For example, one entity can be titled as "city" and may contain a list of cities such as Alexandria, Arlington, Boston, and so forth. In other embodiments, an entity can be titled as a keyword and can contain synonyms and/or definitions of this keyword. In one example embodiment, the entity called "music" may include the terms song, singer, singing, musician, and so forth. In another example embodiment, the entity called "artist" may include a list of music bands, music ensembles, or music artists. In another example embodiment, the entity called "Beatles" may include a list of possible synonyms, such as "The Beatles," "Beatles," "Fab Four," "Liverpool Legends," "John Lennon," and so forth. In yet another example embodiment, there can be an entity called "Artist," which may include various artist names, artist name synonyms, music band names, and so forth. In some embodiments, the Dialog System Engines may include a number of default, pre-configured entities and/or intents. These can include common types of entities or intents related to such concepts as time, date, location, and the like. For example, when a developer creates a new Dialog System Engine, it may already have a few entities of a common type, such as the "@System.Date" entity. This entity may cover linguistic constructs related to particular dates and may include the following terms: "today," "tomorrow," "next week," "January 1," "January 1 of next year," "next Monday," "the following Monday," and so forth. Further, each intent of a Dialog System Rule may include a dialog system interaction scheme, which may provide a particular relation between at least one user request and at least one dialog system linguistic response or fulfilment response. The dialog system interaction scheme can be represented by a rule based on a relationship between a particular action and at least one entity. Actions generally relate to formalized software objects such as JSON (JavaScript Object Notation) objects causing at least one processor to generate linguistic or fulfilment responses associated with at least one entity. Accordingly, each intent can be represented as a logical relation between at least one action and at least one entity object, for example, as follows:
a) [Action] @[Entity]
b) [Action] @[Entities]
c) [Actions] @[Entity]
d) [Actions] @[Entities]
e) Text @[Entity]
f) Text @[Entities]
g) Text @[Entity] Text
h) [Action] Text @[Entity]
The procedures a) through d) mean that a particular Action or several Actions shall be performed by the client side 140 and/or the Dialog System Interface 130 with respect to a predetermined Entity or several Entities. For example, one intent may be represented as "Play @Artist," where @Artist is a developer-defined entity containing a set of artists. In this example, the intent orders the Dialog System Engine 120 to activate the playback of at least one Beatles song, depending on the context. The procedures e) through h) mean that particular information in the form of text is provided with respect to a particular Entity. For example, the user request "Create a meeting with John at 1 p.m. tomorrow, please" may be presented as the following markup: [Action] Text @[sys.date-time] Text. Here, @[sys.date-time] refers to an entity associated with time and date, while the phrase "Create a meeting" refers to a predetermined action to be performed by a Dialog System Interface 130 or Dialog System Engine 120 with a certain mobile application, software application, or web service. The element "Text" refers to content that is neither an entity nor an intent.
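The following minimal Python sketch illustrates, under simplifying assumptions, how entities (a title plus its terms) and intents of the "[Action] @[Entity]" form could be represented and matched against a user request. The dictionary layout, the substring matching, and the action names are hypothetical and stand in for the richer JSON action objects described above.

```python
# Entities: a title mapped to the terms/synonyms that belong to that class.
entities = {
    "Artist": ["the beatles", "beatles", "fab four", "madonna"],
    "sys.date": ["today", "tomorrow", "next week", "next monday"],
}

# Intents: rules relating an action to an entity, per the "[Action] @[Entity]" form.
intents = [
    {"action": "play_music", "pattern": ["play"], "entity": "Artist"},
    {"action": "create_meeting", "pattern": ["create a meeting"], "entity": "sys.date"},
]

def match_intent(request: str):
    """Return (action, matched entity term) for the first rule the request satisfies."""
    text = request.lower()
    for rule in intents:
        if all(phrase in text for phrase in rule["pattern"]):
            for term in entities[rule["entity"]]:
                if term in text:
                    return rule["action"], term
    return None

print(match_intent("Play Beatles"))                         # ('play_music', 'beatles')
print(match_intent("Create a meeting with John tomorrow"))  # ('create_meeting', 'tomorrow')
```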
As mentioned above, a dialog system rule may cause generation of linguistic response and/or fulfilment response as an answer to a user request. One example of linguistic response may include particularized content deliverable as an audio message or displayable message. Fulfilment responses may refer to particular processor-executable instructions for one or more software applications, middleware, firmware, web services, and the like that cause implementation of a particular action. Some examples of fulfilment responses may include scheduling an event in a calendar mobile application, writing and sending a text message or email, searching for content at a web search service, building a route in a navigational software application, and so forth. In certain embodiments, at least some linguistic responses and/or fulfilment responses can be configured by developers. In other embodiments, at least some linguistic responses and/or fulfilment responses can be pre-configured and be available as default responses. In certain additional embodiments, developers can provide not entities and intents, but just example requests to illustrate intents and entities. In these embodiments, the platform110may automatically determine, using machine-learning techniques, what entities and intents are implied in example user requests and create corresponding rules. For example, a developer may simply provide example requests, such as “Play Beatles” and “I'd like to listen to Madonna,” and the platform110may match “Beatles” and “Madonna” to existing entities (system's or user's) and generate corresponding “[Action] @[Entity]” rules automatically. Thus, developers can use the platform interface112to generate a plurality of dialog system rules specific to a particular application or industry. These pluralities of entities and intents form dialog system rules (also referred to as dialog system elements) and enable the custom Dialog System Engines to perform certain actions or generate certain outputs in response to a wide range of end user inputs. FIG.2is a process flow diagram showing a method200for creating custom Dialog System Engines using a platform, shown as the platform110onFIG.1, and for operating the platform according to an example embodiment. The method200may be performed by processing logic that may comprise hardware (e.g., decision-making logic, dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic refers to one or more components of the platform. Notably, the below recited steps of the method200may be implemented in an order different than described and shown inFIG.2. Moreover, the method200may have additional steps not shown herein, but which can be evident for those skilled in the art from the present disclosure. The method200may also have fewer steps than outlined below and shown inFIG.2. At operation205, an application developer may be enabled to register with the platform. To these ends, the software developer may need to interact with a platform interface. The registration may include creating a developer profile, which can be maintained by the platform. The software developer profile may link (i.e., associate) a custom Dialog System Engine of this software developer and one or more Dialog System Interfaces deployed on a client side. 
The linking may include stipulating Application Programming Codes, rules for interaction, destination addresses, and so forth. At operation 210, the platform may receive from the software developer one or more entities and store the received entities at a local database. In some embodiments, the entities may not be received, but rather created by the developer using web tools of the platform interface. At operation 215, the platform may receive from the software developer one or more intents and store the intents at the local database. In some embodiments, the intents may not be received, but rather created by the software developer using tools of the platform interface. As described above, the intents may be associated with the entities, and intents and entities together may form dialog system elements (custom rules enabling the Dialog System Engine to generate responses tailored for specific needs). At operation 220, the platform may associate one or more entities with one or more intents to create (i.e., form) the custom Dialog System Engine. The custom Dialog System Engine may be associated with one or more Dialog System Interfaces of the software developer. Operations 205-220 illustrate a set-up process for the custom Dialog System Engine, while the following operations 225-245 illustrate the operation of the custom Dialog System Engine. Once all dialog system elements of the custom Dialog System Engine are created, the dialog system elements may be maintained as a backend service and enable any of the associated Dialog System Interfaces to provide the full functionality of the Dialog System to users according to predetermined settings. At operation 225, the platform may receive a user request from an unidentified Dialog System Interface. The user request can be a voice input or text input. In some embodiments, the Dialog System Interface can pre-process the user input, for example, by recognizing spoken words and transforming the voice input into text input. In other embodiments, however, no pre-processing is performed by the Dialog System Interface. At operation 230, the platform may process the user request and identify the Dialog System Interface and the Dialog System Engine associated with the identified Dialog System Interface. To these ends, the user request can be accompanied by an identifier when the user request is sent from the Dialog System Interface to the platform. At operation 235, based on the result of the identification at operation 230, the platform may activate the custom Dialog System Engine associated with the identified Dialog System Interface. At the same operation, the platform may also retrieve or identify one or more dialog system elements (i.e., one or more entities and one or more intents) based on the result of the identification at operation 230. At operation 240, the Dialog System Engine may process the user request using the identified dialog system elements (i.e., one or more entities and one or more intents) as retrieved at operation 235. At operation 245, the Dialog System Engine may generate a response and send the response to the Dialog System Interface associated with the custom Dialog System Engine 120. The Dialog System Interface may then display and/or play back the response to the end user depending on predetermined settings. FIG. 3 shows a high-level architecture of an exemplary Dialog System Engine 300, according to an example embodiment.
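Operations 225 through 245 can be pictured, in very reduced form, as routing an incoming request to the engine registered for the identifying Dialog System Interface. The sketch below makes several assumptions: engines are modeled as plain callables, the interface identifier "interface-42" is invented, and activation, element retrieval, and response rendering are collapsed into a single call.

```python
from typing import Callable, Dict

# interface_id -> engine; each engine is modeled as a callable turning a request into a response.
engines: Dict[str, Callable[[str], str]] = {
    "interface-42": lambda request: f"Engine for interface-42 handled: {request}",
}

def handle_user_request(interface_id: str, request: str) -> str:
    """Operations 225-245 in miniature: the request arrives with an identifier,
    the matching custom engine is activated, and its response is returned
    to the originating Dialog System Interface."""
    engine = engines.get(interface_id)   # operation 230: identify the engine by identifier
    if engine is None:
        return "No Dialog System Engine is registered for this interface."
    return engine(request)               # operations 235-245: activate, process, respond

print(handle_user_request("interface-42", "What is the weather like today?"))
```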
It should be noted that each module of the Dialog System Engine300or associated architecture includes hardware components, software components, or a combination thereof. The Dialog System Engine300may be embedded or installed in a user device or server, or may be presented as a cloud computing module and/or a distributed computing module. In the embodiment shown, the Dialog System Engine300may include an Automatic Speech Recognizer (ASR)310configured to receive and process a speech-based user input305into a sequence of parameter vectors. The ASR310may further convert the sequence of parameter vectors into a recognized input (i.e., a textual input having one or more words, phrases, or sentences). The ASR310may include one or more speech recognizers, such as a pattern-based speech recognizer, free-dictation recognizer, address book based recognizer, dynamically created recognizer, and so forth. Further, the Dialog System Engine300may include an NLP module320for understanding spoken language input. Specifically, the NLP module320may disassemble and parse the recognized input to produce utterances, which are then analyzed utilizing, for example, morphological analysis, part-of-speech tagging, shallow parsing, and the like. The NLP module320may then map recognized input or its parts to meaning representations. The Dialog System Engine300may further include a dialog manager330, which may coordinate the activity of all components, control dialog flows, and communicate with external applications, devices, services, or resources. The dialog manager330may play many roles, which include discourse analysis, knowledge database query, and system action prediction based on the discourse context. In some embodiments, the dialog manager330may contact one or more task managers (not shown) that may have knowledge of specific task domains. In some embodiments, the dialog manager330may communicate with various computational or storage resources340, which may include, for example, a content storage, rules database, recommendation database, push notification database, electronic address book, email or text agents, dialog history database, disparate knowledge databases, map database, points of interest database, geographical location determiner, clock, wireless network detector, search engines, social networking websites, blogging websites, news feeds services, and many more. In some embodiments, computational or storage resources340may include one or more web resources/web services160as shown onFIG.1and discussed above. Referring back toFIG.3, the dialog manager330may employ multiple disparate approaches to generate an output360in response to recognized inputs. Some approaches include using statistical analysis, machine-learning algorithms (e.g., neural networks), heuristic analysis, and so forth. The dialog manager330may be one of the central components of the Dialog System Engine. The major role of the dialog manager330may be to select the correct system actions based on observed evidences and inferred dialog states from the results of the NLP (e.g., dialog act, user goal, and discourse history). In addition, the dialog manager330may be able to handle errors when the user input has ASR and NLP errors caused by noises or unexpected inputs. The Dialog System Engine300may further include an output renderer350for transforming the output360of dialog manager330into a form suitable for providing to the user. 
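The ASR, NLP, dialog manager, and output renderer described above form a pipeline, which the following toy Python class sketches under heavy simplification: speech recognition is skipped (the input is already text), the "meaning representation" is a small dictionary, and the dialog manager is a single keyword rule. None of this reflects the internals of an actual Dialog System Engine 300; it only shows the order in which the stages hand data to each other.

```python
class DialogSystemEngine:
    """Toy pipeline mirroring the ASR -> NLP -> dialog manager -> renderer flow."""

    def recognize(self, audio: str) -> str:
        # Stand-in for the ASR 310: here the "audio" is already a transcript.
        return audio

    def understand(self, text: str) -> dict:
        # Stand-in for the NLP module 320: produce a crude meaning representation.
        return {"utterance": text.lower(), "is_question": text.strip().endswith("?")}

    def manage(self, meaning: dict) -> str:
        # Stand-in for the dialog manager 330: pick a system action from the meaning.
        if "weather" in meaning["utterance"]:
            return "It looks sunny today."
        return "Sorry, I did not understand that."

    def render(self, output: str) -> str:
        # Stand-in for the output renderer 350: format for display (or TTS in a real system).
        return f"[dialog system] {output}"

    def respond(self, user_input: str) -> str:
        return self.render(self.manage(self.understand(self.recognize(user_input))))

print(DialogSystemEngine().respond("What is the weather like today?"))
```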
For example, the output renderer350may employ a text-to-speech engine or may contact a pre-recorded audio database to generate an audio message corresponding to the output360of the dialog manager330. In certain embodiments, the output renderer350may present or cause to present the output360of the dialog manager330as a text message, an image, or a video message for further displaying on a display screen of the user device. In some example embodiments, the output renderer350can constitute at least a part of the Dialog System Interface shown as the Dialog System Interface130onFIG.1. Still referring toFIG.3, the Dialog System Engine300may include one or more dialog system rules maintained in at least one rule database365. The Dialog System Engine300may also include or be associated with one or more context databases370, which may maintain a plurality of context description elements, such as lists of terms, keywords, phrases, expressions, context variables, context parameters (e.g., geolocation, system rate, GUI, etc.) associated with one or more dialog system rules. In other words, the context databases370may include information supporting the process of determining conversational or environmental context for particular user requests. The Dialog System Engine300may also include or be associated with one or more statistics and usage databases380, which may be configured to aggregate statistical or usage information associated with the operation of the Dialog System Engine300and/or associated Dialog System Interface and/or associated mobile or software application. For example, statistics and usage database380may accumulate dialog system logs, which can be later used for optimization of dialog system rules, dialog system responding schemes, training machine-learning algorithms if employed by Dialog System Engine300, and so forth. FIG.4is a high-level block diagram illustrating an example system400for enhancing dialog systems described herein. In particular, the system400may be a server-based solution suitable for running a platform110and/or a marketplace170shown onFIG.1. Note that all components of the system400shown onFIG.4may include logic elements, hardware components, software (firmware) components, virtual components, or a combination thereof. The system400may include, relate, or constitute an integral part of one or more of a variety of types of devices and systems such as a general-purpose computer, server, web server, network service, cloud-computing service, and so forth. Further, all modules shown inFIG.4may be operatively coupled using any suitable wired, wireless, radio, electrical, or optical standards. As shown inFIG.4, the system400includes the following hardware components: at least one processor402, a memory404, optionally one or more storage devices406, and optionally network interface408. The system400may also optionally include the following software or virtual components: an operating system410, one or more software applications420, and an interface430(such as a platform interface112and/or marketplace interface174shown onFIG.1). The interface430may provide a human-centric interface for accessing and managing information as discussed herein. In some embodiments, the processor402may be configured to implement functionality and/or process instructions for execution within the system400. For example, the processor402may process instructions stored in the memory404and/or instructions stored on the storage devices406. 
Such instructions may include components of the operating system410, the software applications420, and/or the interface430. The memory404, according to one example embodiment, may be configured to store information within system400during operation. The memory404, in some example embodiments, may refer to a non-transitory computer-readable storage medium or a computer-readable storage device. In some example embodiments, the memory404may be a temporary memory, meaning that a primary purpose of the memory404may not be long-term storage. The memory404may also refer to a volatile memory, meaning that the memory404may not maintain stored contents when the memory404is not receiving power. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, the memory404may be used to store program instructions for execution by the processor402. The memory404, in one example embodiment, may be used to temporarily store information during program execution. One or more storage devices406can also include one or more transitory or non-transitory computer-readable storage media and/or computer-readable storage devices. In some embodiments, the storage devices406may be configured to store greater amounts of information than the memory404. The storage devices406may further be configured for long-term storage of information. In some examples, the storage devices406include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, solid-state discs, flash memories, forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM), and other forms of non-volatile memories known in the art. In one example, the storage devices406can include a database shown as a database172onFIG.1(i.e., the storage devices406can store and maintain multiple dialog system extension elements, which include plugins, add-ons, extensions, etc.). In other embodiments, the storage devices406can store and maintain user profiles and custom Dialog System Engines. Still referencing toFIG.4, the system400may include a network interface408. The network interface408can be utilized to communicate with external devices, servers, and networked systems via one or more communications networks such as one or more wired, wireless, or optical networks including, for example, the Internet, intranet, LAN, WAN, cellular phone networks (e.g. Global System for Mobile (GSM) communications network, packet switching communications network, circuit switching communications network), Bluetooth radio, and an IEEE 802.11-based radio frequency network, among others. The network interface408may be a network interface card, such as an Ethernet card, optical transceiver, radio frequency transceiver, or any other type of device that can send and receive information. Other examples of such network interfaces may include Bluetooth®, 3G, 4G, and WiFi® radios in mobile computing devices as well as a Universal Serial Bus. The operating system410may control one or more functionalities of system400or components of the system400. For example, the operating system410may interact with the interface430, and may further facilitate one or more interactions between the software applications420and processor402, memory404, storage devices406, and/or network interface408. 
The operating system 410 may interact with or be otherwise coupled to the interface 430 and components of the interface 430. Notably, the system 400 and its components may also interact with one or more remote storage or computing resources including, for example, web resources, websites, social networking websites, blogging websites, news feeds, email servers, web calendars, event databases, ticket aggregators, map databases, points of interest databases, and so forth. Software applications 420, in essence, may provide functionality to the platform and/or the marketplace and enable their operation. Alternatively, the software applications 420 may be additions to the platform and/or the marketplace. FIG. 5 is a process flow diagram showing a method 500 for enhancing dialog systems, according to an example embodiment. The method 500 may be performed by processing logic that may comprise hardware (e.g., decision-making logic, dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic refers to one or more components of the marketplace 170 and/or the platform 110 shown on FIG. 1. Notably, the below-recited steps of the method 500 may be implemented in an order different than described and shown in FIG. 5. Moreover, the method 500 may have additional steps not shown herein, but which can be evident to those skilled in the art from the present disclosure. The method 500 may also have fewer steps than outlined below and shown in FIG. 5. The method 500 may commence at operation 510 with maintaining an online marketplace. The online marketplace may be maintained by the memory and may include a plurality of dialog system extension elements (e.g., a dialog system plugin, a dialog system add-on, a dialog system update, and a dialog system upgrade). A software developer may view and select particular dialog system extension elements through a marketplace interface. In an example embodiment, the software developer can review metadata associated with dialog system extension elements, review comments of other developers or users, ranks, ratings, reviews, publisher's information, description, manuals, images, videos, legal information, and so forth. At operation 520, a processor may receive at least one selection of a dialog system extension element from the software developer. In an example embodiment, the software developer is associated with a dialog system; i.e., the software developer may have developed the dialog system, or it may be owned by the software developer. In some embodiments, the selection may require making a financial transaction so that the dialog system extension element can be integrated with a particular Dialog System Engine. In these cases, the software developer may need to subscribe to a plan or make a lump-sum payment for the right to use the selected dialog system extension element. More specifically, upon receiving the selection of the dialog system extension element, the processor may receive a selection of a subscription plan for the dialog system extension element from the software developer. Furthermore, the processor may receive a payment for the dialog system extension element. The payment may be provided by the software developer in accordance with the subscription plan. At operation 530, the processor may associate the dialog system extension element selected by the software developer with the dialog system of the software developer.
For this purpose, the processor may need to identify the software developer or the dialog system associated with the software developer. The identification can be accomplished by an authorization process (i.e., requesting that the software developer log in to the online marketplace). More specifically, the processor may receive an authorization request from the software developer. In this regard, the processor may communicate with the platform based on the authorization data and access the records or user profile associated with the software developer and the dialog system of the software developer. The records may be stored in the memory. Based on the records, the software developer and the dialog system of the software developer may be identified. Once the software developer and/or the dialog system of the software developer are identified, the processor may authorize access of the software developer to the online marketplace. The processor may further proceed to linking the dialog system of the software developer with the dialog system extension element selected by the software developer. For the linking, the dialog system extension element may be integrated or embedded into the dialog system of the software developer, or, alternatively, certain links or metadata associated with the dialog system extension element may be integrated with the dialog system of the software developer. In either case, the dialog system extension element may operate in conjunction with the dialog system of the software developer. In an example embodiment, the software developer can obtain dialog system extension elements as "black box" solutions, meaning the software developer may not be able to see the source code, entities, intents, or other information of the dialog system extension element. Alternatively, the software developer can obtain dialog system extension elements as "white box" solutions, meaning the software developer may be able to see the source code, entities, intents, or other information of the dialog system extension element. In yet more embodiments, various options in between "black box" and "white box" solutions can be provided, meaning that various access levels can be provided that allow the software developer to view and edit particular elements of the dialog system extension elements (e.g., access to intent execution results, but not to dialog definitions, intents, and entities themselves). Alternatively, the software developer can be provided with full open-source access to the dialog system extension element. In other words, the dialog system extension elements may be provided as open-source dialog system extension elements editable by the software developer, restricted-access dialog system extension elements partly editable by the software developer, and closed-access dialog system extension elements non-editable by the software developer. Further, when the dialog system extension element is successfully associated with the dialog system, the operation of the dialog system can be as follows. At operation 540, the processor may receive a user request from a dialog system interface. The dialog system interface may be installed on a user device or a third party server. The dialog system interface may be associated with the dialog system maintained at the online platform. At operation 550, the processor may identify a dialog system engine associated with the dialog system interface and, thus, with the dialog system.
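The access levels and the linking step of operation 530 can be summarized with a small data model. In the sketch below, the AccessLevel enum, the developer and element identifiers, and the in-memory dictionary are all assumptions made for illustration; a real marketplace would persist these associations in its database and enforce the access level whenever the extension element is opened for editing.

```python
from enum import Enum
from typing import Dict, List, Tuple

class AccessLevel(Enum):
    OPEN_SOURCE = "white box"        # fully editable by the software developer
    RESTRICTED = "partly editable"   # e.g., execution results visible, definitions hidden
    CLOSED = "black box"             # usable at runtime only, non-editable

# developer_id -> list of (extension_element_id, access level) associations
associations: Dict[str, List[Tuple[str, AccessLevel]]] = {}

def link_extension(developer_id: str, element_id: str, level: AccessLevel) -> None:
    """Operation 530 in miniature: after the developer is authorized, record that the
    selected extension element now operates in conjunction with their dialog system."""
    associations.setdefault(developer_id, []).append((element_id, level))

link_extension("dev-007", "weather-plugin", AccessLevel.CLOSED)
print(associations)
```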
At operation560, the processor, or the dialog system engine, may identify the dialog system extension element or multiple elements associated with the dialog system engine. Optionally, an arbitration step can be performed to select between elements or present the user with multiple results at once. The arbitration step can be performed by an arbitration application. At operation570, the user request may be processed by the dialog system extension element alone or in conjunction with the dialog system engine to generate a response to the user request. Finally, at operation580, the processor may cause the delivery of the response to the user. The delivery of the response may include delivering, to the dialog system interface or to a user device, text, video, audio, and/or metadata, such as a callback URL where the user device can obtain data for delivering to the user. In certain embodiments, at operation570, the dialog system engine may attempt to process the user request without any dialog system extension elements. Such processing may include activating the dialog system based on the user request, retrieving one or more entities and one or more intents as discussed above, and processing the user request by applying one or more entities and one or more intents in order to generate a proper response or fulfilment action. If the processing of the user request in such a way was successful, the processor may proceed to operation580so as to deliver the response to the user or make a particular action. Alternatively, if the processing of the user request by applying the dialog system engine itself is unsuccessful, the user request is processed by the dialog system extension element (or multiple dialog system extension elements) so as to generate a substitute response to the user request. Once the substitute response is generated by one or more dialog system extension elements, the method500proceeds to operation580as discussed above. FIG.6is a high-level block diagram illustrating an example user device600suitable for implementing the methods described herein. It is worth mentioning that all components of the user device600may include logic elements, hardware components, software (firmware) components, virtual components, or a combination thereof. The user device600may include at least an integral part of one or more of a variety of types of devices and systems such as a general-purpose computer, desktop computer, server, computer network, network service, cloud-computing service, and so forth. Further, all modules shown inFIG.6may be operatively coupled using any suitable wired, wireless, radio, electrical, or optical standards. As already outlined above, the user device600may refer to a smart phone, wireless telephone, computer, such as a tablet computer or desktop computer, infotainment system, in-vehicle computing device, and the like. As shown inFIG.6, the user device600may include the following hardware components: at least one processor602, a memory604, one or more storage devices606, one or more input modules608, one or more output modules610, a network interface612, and a geo location determiner614. The user device600may also include the following software or virtual components: an operating system620, one or more software (mobile) applications630, and a dialog system interface130, which can be a stand-alone software application or be integrated into one or more software applications630. 
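Returning to operations 560 through 580, the fallback behavior in which the engine is tried first and the dialog system extension elements supply a substitute response can be sketched as follows. The engine function, the two lambda extension elements, and the first-match arbitration rule are illustrative assumptions, not the arbitration application itself.

```python
from typing import Callable, List, Optional

def engine_process(request: str) -> Optional[str]:
    """The dialog system engine itself; returns None when it cannot produce a response."""
    if "meeting" in request.lower():
        return "OK, I scheduled the meeting."
    return None

# Dialog system extension elements, tried only when the engine itself fails (operation 570).
extension_elements: List[Callable[[str], Optional[str]]] = [
    lambda r: "It looks sunny today." if "weather" in r.lower() else None,
    lambda r: "Here is a news summary." if "news" in r.lower() else None,
]

def respond(request: str) -> str:
    answer = engine_process(request)
    if answer is not None:
        return answer
    # Crude arbitration: take the first extension element that yields a substitute response.
    for element in extension_elements:
        substitute = element(request)
        if substitute is not None:
            return substitute
    return "Sorry, no component could handle that request."

print(respond("What is the weather like today?"))
print(respond("Set up a meeting tomorrow"))
```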
The dialog system interface130may provide a human-centric interface for accessing and managing information as discussed herein, communicating with a dialog system engine, and communicating with web resources/web services. According to various embodiments, the dialog system interface130can be virtual. The processor602may be configured to implement functionality and/or process instructions for execution within the user device600. For example, the processor602may process instructions stored in the memory604and/or instructions stored on the storage devices606. Such instructions may include components of the operating system620and the software applications630. The user device600may also include one or more additional components not shown inFIG.6, such as a housing, power supply, communication bus, and the like. These elements are omitted so as to not burden the description of present embodiments. The memory604, according to one example embodiment, may be configured to store information within the user device600during operation. The memory604may refer to a non-transitory computer-readable storage medium or a computer-readable storage device. In some examples, the memory604may be a temporary memory, meaning that a primary purpose of the memory604may not be long-term storage. The memory604may also refer to a volatile memory, meaning that the memory604may not maintain stored contents when the memory604is not receiving power. Examples of volatile memories include RAM, DRAM, SRAM, and other forms of volatile memories known in the art. In some examples, the memory604may be used to store program instructions for execution by the processor602. The memory604, in one example embodiment, may be used by software (e.g., the operating system620) or the dialog system interface130executing on the user device600to temporarily store information during program execution. The storage devices606can also include one or more transitory or non-transitory computer-readable storage media and/or computer-readable storage devices. In some embodiments, the storage devices606may be configured to store greater amounts of information than the memory604. The storage devices606may further be configured for long-term storage of information. In some examples, the storage devices606may include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, solid-state discs, flash memories, forms of EPROM or EEPROM, and other forms of non-volatile memories known in the art. Still referencing toFIG.6, the user device600may include one or more input modules608. The input modules608may be configured to receive user inputs. Examples of the input modules608may include a microphone, keyboard, keypad, mouse, trackball, touchscreen, touchpad, or any other device capable of detecting an input from a user or other source in the form of speech, audio, or tactile actions, and relaying the input to the user device600or components thereof. The output modules610, in some example embodiments, may be configured to provide output to users through visual or auditory channels. The output modules610may include a video graphics adapter card, liquid crystal display monitor, light emitting diode monitor, sound card, speaker, or any other device capable of generating output that may be intelligible to a user. The user device600, in some embodiments, may include the network interface612. 
The network interface612can be utilized to communicate with external devices, servers, and networked systems via one or more communications networks such as one or more wired, wireless, or optical networks including, for example, the Internet, intranet, LAN, WAN, cellular phone networks (e.g., GSM communications network, packet switching communications network, circuit switching communications network), Bluetooth radio, and an IEEE 802.11-based radio frequency network, among others. The network interface612may be a network interface card, such as an Ethernet card, optical transceiver, radio frequency transceiver, or any other type of device that can send and receive information. Other examples of such network interfaces may include Bluetooth®, 3G, 4G, and WiFi® radios in mobile computing devices as well as a Universal Serial Bus. The user device600may further include the geo location determiner614for determining a current geographical location of the user device. The geo location determiner614may utilize a number of different methods for determining geographical location including, for example, receiving and processing signals of Global Positioning Systems, GLONASS satellite navigation systems, or the Galileo satellite navigation system; utilizing multilateration of radio signals between radio towers (base stations); or utilizing geolocation methods associated with Internet Protocol addresses, Media Access Control addresses, Radio-Frequency Identification, or other technologies. The operating system620may control one or more functionalities of the user device600or its components. For example, the operating system620may interact with the dialog system interface130and may further facilitate one or more interactions between the software applications630and one or more of the processor602, the memory604, the storage devices606, the input modules608, and the output modules610. As shown inFIG.6, the operating system620may interact with or be otherwise coupled to the software applications630, the dialog system interface130, and components thereof. In some embodiments, the dialog system interface130can be included into the operating system620and/or the software applications630. Notably, the user device600and its components, such as the dialog system interface130, may also interact with one or more remote storage or computing resources including, for example, web resources, websites, social networking websites, blogging websites, news feeds, email servers, web calendars, event databases, ticket aggregators, map databases, points of interest databases, and so forth. Thus, methods and systems for enhancing dialog systems have been described. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
11861347
DETAILED DESCRIPTION OF EMBODIMENTS FIG.1is a block diagram illustrating an overall configuration of a network system according to an embodiment,FIG.2is a block diagram illustrating a schematic configuration of a server illustrated inFIG.1, andFIG.3is a block diagram illustrating a schematic configuration of a software updating device illustrated inFIG.1. The network system illustrated inFIG.1is a system for updating software of electronic control units13athrough13dinstalled in a vehicle, and is provided with a server (center)1and an onboard network2installed in the vehicle. The server1is capable of communication with a software updating device11installed in the vehicle, via a network5, and manages information relating to whether there is update data for the electronic control units13athrough13dinstalled in the vehicle, and updating processing of software performed by the software updating device11. The server1is provided with a central processing unit (CPU)21(one or more processors), random access memory (RAM)22, a storage device23, and a communication device24, as illustrated inFIG.2. The storage device23is provided with a rewritable storage medium such as a hard disk, a solid state drive (SSD), or the like, and stores software for executing software update managing, and later-described prerequisite condition information and error information. In the server1, the CPU21executes software read out from the storage device23, using the RAM22as a work area, thereby executing later-described control processing. The communication device24is equipment for performing communication with the software updating device11via the network5such as the Internet or the like. The onboard network2is provided with the software updating device11(OTA master), a communication module12, the electronic control units13athrough13d, and a display device14. The software updating device11is connected to the communication module12via a bus15a, connected to the electronic control units13aand13bvia a bus15b, connected to the electronic control units13cand13dvia a bus15c, and connected to the display device14via a bus15d. The software updating device11is a device that is able to communicate with the server1wirelessly (over the air) via the communication module12and the network5, and control updating of software of equipment out of the electronic control units13athrough13dthat is the object of updating, based on update data acquired from the server1, and later-described prerequisite condition information. The software updating device11may also be referred to as a central gateway. The communication module12is a communication device that connects the onboard network2and server1provided in the center. The electronic control units13athrough13dare ECUs that control operations of various parts of the vehicle, and that include a CPU, RAM, and a nonvolatile storage device such as a flash memory, an electrically erasable programmable ROM (EEPROM), or the like. The CPU executes software stored in the storage device, using the RAM as a work area, thereby executing control functions. The display device14(human-machine interface (HMI)) is used to perform various types of display at the time of update processing of software of the electronic control units13athrough13d, such as a display indicating that update data is available, a display requesting the user for consent to update software, a display of update results, and so forth. 
Typically, the display device of an automotive navigation system can be used as the display device 14, but the display device 14 is not limited in particular, as long as the display device 14 is able to display information necessary at the time of software update processing. Note that while four electronic control units 13a through 13d are exemplified in FIG. 1, the number of electronic control units is not limited in particular. An electronic control unit may further be connected to the bus 15d illustrated in FIG. 1, besides the display device 14. As illustrated in FIG. 3, the software updating device 11 is provided with a microcomputer 35 having a CPU 31 (one or more processors), RAM 32, ROM 33, and a storage device 34, and a communication device 36. In the software updating device 11, the CPU 31 of the microcomputer 35 executes software read out from the ROM 33, using the RAM 32 as a work area, thereby executing later-described control processing. The communication device 36 is equipment that performs communication with the communication module 12, the electronic control units 13a through 13d, and the display device 14, via the busses 15a through 15d illustrated in FIG. 1. FIG. 4 is a functional block diagram of the server illustrated in FIG. 1, FIGS. 5A and 5B are diagrams illustrating examples of prerequisite condition information that the server illustrated in FIG. 1 stores, and FIG. 6 is a diagram illustrating an example of error information that the server illustrated in FIG. 1 stores. The server 1 is provided with a first storage unit 26, a second storage unit 27, a communication unit 28, and a control unit 29. The first storage unit 26 and the second storage unit 27 are realized by the storage device 23 illustrated in FIG. 2, and the communication unit 28 and the control unit 29 are realized by the CPU 21 illustrated in FIG. 2 executing software stored in the storage device 23, using the RAM 22. The first storage unit 26 stores the prerequisite condition information. The prerequisite condition information is information defining prerequisite conditions that the vehicle is to satisfy at the time of updating the software of one of the electronic control units 13a through 13d in the vehicle. FIGS. 5A and 5B show examples of prerequisite condition information. The prerequisite condition information shown in FIG. 5A is information in which vehicle IDs that identify vehicles are correlated with prerequisite conditions that the vehicles identified by the vehicle IDs are to satisfy when their software updating devices 11 execute software updating processing. It is sufficient for the vehicle IDs to be information that uniquely identifies vehicles, and examples thereof include vehicle identification numbers (VIN), frame numbers, or the like. One or more prerequisite conditions may be set for one vehicle ID. In the example in FIG. 5A, two prerequisite conditions are set for vehicle ID "VID 1001", and three prerequisite conditions are set for vehicle ID "VID 2001". The types and number of the prerequisite conditions set for each vehicle ID may be changed in accordance with the grade and equipment of the vehicle identified by the vehicle ID, provision of options, and so forth. The prerequisite condition information shown in FIG. 5B is information in which make IDs that identify makes are correlated with prerequisite conditions that the vehicles identified by the make IDs are to satisfy when their software updating devices 11 perform software updating processing.
The make IDs typically are information indicating models, but it is sufficient for the make IDs to be information whereby the model of the vehicle can be uniquely identified, and they may be expressed by part of, or a combination of a plurality of parts of, the information included in vehicle identification numbers (VIN) or frame numbers. One or more prerequisite conditions can be set for each make ID in the example in FIG. 5B as well. Now, specific examples of the prerequisite conditions that vehicles are to satisfy at the time of execution of software updating processing will be described. Examples of the prerequisite conditions include the state of charge of a battery, the operating state of a direct current (DC)-to-DC converter, the state of errors occurring at an electronic control unit that is the object of updating, the installation state of particular sensors or accessories, the availability of functions of an electronic control unit that is the object of updating, the shift range, the vehicle speed, Global Positioning System (GPS) coordinates, and so forth. The battery state of charge and the operating state of the DC-to-DC converter are conditions requesting that the electric power necessary for the software updating processing is secured; as these conditions, a battery state of charge capable of supplying the electric power necessary for the software updating processing and the electric power necessary to maintain other functions of the vehicle, and a requirement that the DC-to-DC converter that supplies electric power from a traction battery to an accessory battery is running, can be defined. The state of error occurring at an electronic control unit that is the object of updating is a condition requesting that an error that would impede execution of the software updating processing at the electronic control unit that is the object of updating (e.g., malfunctioning of the electronic control unit itself, insufficient capacity available at the data storage region, or the data storage region being inaccessible) is not occurring. The installation state of particular sensors or accessories is a condition requesting that equipment such as sensors, accessories, and so forth, necessary for the electronic control unit that is the object of updating to operate is installed in the vehicle. The availability of functions of the electronic control unit that is the object of updating is a condition requesting that settings are made by the user to use the functions of the electronic control unit that is the object of updating. The shift range, vehicle speed, and GPS coordinates are conditions requesting that the vehicle is in a safe state at the time of execution of the software updating processing; specifically, a shift range of the P range, a vehicle speed of 0 km/h, and current GPS coordinates in a coordinate range other than on a public road, such as in a parking lot or in a space where the vehicle can be stopped, or the like, can be defined. Note that these described prerequisite conditions are exemplary, and that other prerequisite conditions can be defined. Also, the prerequisite conditions can be expressed as particular threshold values, numerical value ranges, binary values regarding whether conditions are satisfied, and so forth. When prerequisite condition information is defined for each vehicle, as shown in FIG. 5A, conditions necessary for the software updating processing can be finely set in accordance with the equipment, user settings, and so forth, of each vehicle.
On the other hand, when defining the prerequisite condition information for each make, conditions necessary for the software updating processing can be set in a form appropriate for each make, in accordance with battery capacity and differences among equipment depending on makes such as sensors and the like that are installed, as illustrated inFIG.5B. The second storage unit27stores error information. The error information is information identifying, out of the prerequisite information included in the prerequisite condition information transmitted to the software updating device11, prerequisite conditions determined by the software updating device11not to be satisfied, before starting or during execution of the software updating processing by the software updating device11. The error information is information in which the vehicle ID identifying the vehicle and the prerequisite condition regarding which an error occurred at the time of software updating processing are correlated. The error information may include information of the date and time at which the error occurred due to not satisfying a prerequisite condition. The error information stored in the second storage unit27can be used as information to identify the cause with which the software updating processing was not successfully executed, perform individual handling necessary for software updating processing for each vehicle, and so forth. Based on a request from the software updating device11, the communication unit28acquires prerequisite condition information to be applied to this software updating device11from the first storage unit26, and transmits the acquired prerequisite condition information to the software updating device11. Also, the communication unit28receives, from the software updating device11, error information including prerequisite conditions determined by the software updating device11not to be satisfied at the time of software updating processing, out of all prerequisite information included in the prerequisite condition information transmitted to the software updating device11. The communication unit28causes the aforementioned second storage unit27to store the received error information, for each vehicle. Also, the communication unit28accepts various types of requests (later-described update confirmation requests for software and transmission requests for prerequisite condition information) transmitted from the software updating device11. The control unit29sets prerequisite conditions for the vehicle corresponding to the error information, based on the error information received from the software updating device11. A conceivable example of setting prerequisite conditions based on error information is to temporarily ease part of the prerequisite conditions that the vehicle is to satisfy. For example, a case can be assumed in which the battery state of charge is set as a prerequisite condition that the vehicle is to satisfy at the time of software updating processing. The battery state of charge (r1) set as the prerequisite condition is set to a value obtained by adding a certain margin to a minimum state of charge (r0) necessary for the software updating processing and for maintaining functions of the vehicle. 
When software updating is not performed due to an error of the battery state of charge not satisfying the prerequisite condition, the battery state of charge for the prerequisite condition is set to a value that is higher than the minimum state of charge (r0) and is lower than the initially-set battery state of charge (r1). Thus, the error can be resolved and software updating can be performed in a timely manner. FIG.7is a functional block diagram of the software updating device illustrated inFIG.1. The software updating device11is provided with a communication unit37, a storage unit38, and a control unit39. The communication unit37and the control unit39are realized by the CPU31illustrated inFIG.3executing software stored in the ROM33using the RAM32, and the storage unit38is realized by the storage device34illustrated inFIG.3. The communication unit37transmits a confirmation request to the server1at a predetermined timing, such as a timing when the power source or the ignition of the vehicle is turned on, or the like, to confirm whether there is update data for the software of the electronic control units13athrough13d, and receives confirmation results (information indicating whether there is update data) at the server1. Also, the communication unit37transmits a download request for a distribution package to the server1, and receives the distribution package transmitted from the server1. The distribution package transmitted from the server1includes update data to be used for software updating of one or more electronic control units that are the objects of updating, and the above-described prerequisite condition information. The distribution package may contain verification data for verifying authenticity of the update data, number of pieces of update data, order of installation, and various types of control information used when updating software, and so forth. The communication unit37causes the storage unit38to store the received distribution package. The communication unit37verifies the authenticity of the received update data. The communication unit37also transmits error information generated by the later-described control unit39to the server1. The storage unit38stores the distribution package that the communication unit37has received from the server1. The control unit39controls the software updating of the electronic control units13athrough13dof the vehicle in which the software updating device11is installed. The control unit39references the prerequisite condition information included in the distribution package that the communication unit37has received from the server1, and determines whether the prerequisite conditions included in the prerequisite condition information are satisfied. The control unit39acquires information relating to the state of the vehicle from the electronic control units13athrough13dconnected via the busses15band15c, and determines whether the prerequisite conditions included in the prerequisite condition information are satisfied, based on the acquired information. Examples of information relating to the state of the vehicle include the battery state of charge, the operating state of the DC-to-DC converter, errors occurring at each of the electronic control units13athrough13d, the availability and operation state of equipment such as sensors, accessories, and so forth, connected to the electronic control units13athrough13d, the shift range, the vehicle speed, the GPS coordinates, and so forth. 
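The interplay between the prerequisite condition information (FIG. 5A), the error information (FIG. 6), and the easing of a condition between r0 and r1 can be sketched in Python as follows. The condition names, the numeric thresholds, and the halfway easing rule are assumptions chosen for illustration; the embodiment only requires that the eased value stay above the minimum state of charge r0 and below the initially set value r1.

```python
# Prerequisite conditions per vehicle ID, in the spirit of FIG. 5A: each condition is a
# named check against the vehicle state reported by the electronic control units.
prerequisites = {
    "VID 1001": {"battery_soc_min": 70, "vehicle_speed_max": 0},
}

# Error information per vehicle ID, in the spirit of FIG. 6.
errors = {}

def check_prerequisites(vehicle_id: str, vehicle_state: dict) -> bool:
    """Control unit 39 in miniature: every condition must hold before install/activate."""
    conds = prerequisites[vehicle_id]
    failed = []
    if vehicle_state["battery_soc"] < conds["battery_soc_min"]:
        failed.append("battery_soc_min")
    if vehicle_state["vehicle_speed"] > conds["vehicle_speed_max"]:
        failed.append("vehicle_speed_max")
    if failed:
        errors.setdefault(vehicle_id, []).extend(failed)  # error information sent to the server
        return False
    return True

def ease_battery_condition(vehicle_id: str, r0: int) -> None:
    """Control unit 29 in miniature: after a battery-related error, lower the threshold (r1)
    toward, but not below, the minimum state of charge r0 needed for updating."""
    r1 = prerequisites[vehicle_id]["battery_soc_min"]
    if "battery_soc_min" in errors.get(vehicle_id, []) and r1 > r0:
        prerequisites[vehicle_id]["battery_soc_min"] = (r0 + r1) // 2

state = {"battery_soc": 65, "vehicle_speed": 0}
print(check_prerequisites("VID 1001", state))        # False: state of charge below threshold
ease_battery_condition("VID 1001", r0=60)
print(prerequisites["VID 1001"]["battery_soc_min"])  # eased to 65, still above r0 = 60
print(check_prerequisites("VID 1001", state))        # True after easing
```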
When determining that all prerequisite conditions included in the prerequisite condition information are satisfied, the control unit39executes installation and activation of the electronic control units that are the objects of updating. Note that the electronic control unit that is the object of updating can be identified based on information included in the distribution package (identification information of the electronic control unit correlated with the update data). Now, software updating processing includes the three phases of downloading, in which update data is transmitted from the server1to the vehicle, installation in which the downloaded update data is transferred to the electronic control unit that is the object of updating and is written to the storage region of the electronic control unit that is the object of updating, and activation in which the update program installed in the electronic control unit that is the object of updating is enabled. Downloading is processing of receiving and storing update data that is transmitted from the server1, for updating software of an electronic control unit. The downloading phase includes not only reception of update data, but also includes a series of processing relating to downloading, such as determining whether execution of downloading is permissible, verifying the update data, and so forth. Installation is processing of causing the electronic control unit that is the object of updating to write an update-version program (update software) to a storage unit of onboard equipment, based on the downloaded update data. The installation phase includes not only execution of installing, but also includes control of a series of processing relating to installation, such as determining whether installation is permissible, transferring the update data, verifying the update-version program, and so forth. Activation is processing of enabling (activating) the installed update-version program. The control of activation includes not only execution of activation, but also includes control of a series of processing relating to activating, such as determining whether execution of activation is permissible, verifying the execution results, and so forth. The update data transmitted from the server1to the software updating device11may contain any of update software for electronic control units, compressed data in which update software has been compressed, and divided data in which update software or compressed data has been divided. Also, the update data may contain an identifier for identifying the electronic control unit that is the object of updating (ECU ID), and an identifier for identifying software before updating (ECU software ID). The update data is downloaded as the aforementioned distribution package, and the distribution package contains update data of one or a plurality of electronic control units. When the update data includes the update software itself, the software updating device transfers the update data (update software) to the electronic control unit that is the object of updating in the installation phase. 
Also, when the update data includes compressed data, differential data, or divided data of the update software, the software updating device11may transfer the update data to the electronic control unit that is the object of updating and the electronic control unit that is the object of updating may generate the update software from the update data, or the software updating device11may generate the update software from the update data, and transfer the update software to the electronic control unit that is the object of updating. Now, generating the update software can be performed by decompressing compressed data, or assembling differential data or divided data. Installation of the update software can be performed at the electronic control unit that is the object of updating, based on an installation request from the software updating device11. Alternatively, the electronic control unit that is the object of updating, which has received the update data, may autonomously install the update software without receiving any explicit instruction from the software updating device11. Activation of the update software can be performed at the electronic control unit that is the object of updating, based on an activation request from the software updating device11. Alternatively, the electronic control unit that is the object of updating, which has received the update data, may autonomously activate the update software without receiving any explicit instruction from the software updating device11. Note that updating processing of software can be performed consecutively or in parallel on each of the electronic control units. Upon the control unit39determining that all prerequisite conditions included in the prerequisite condition information are satisfied, the control unit39transfers one or more pieces of received update data to the electronic control units that are the objects of updating and the electronic control units that are the objects of updating perform installation of the update data. When installation is completed, the control unit39requests the electronic control units that are the objects of updating to activate the updated software, and the electronic control units that are the objects of updating perform activation. When there is only one software storage region provided to the storage device of the electronic control units, installation and activation are performed as a sequence. The control unit39performs the above control processing, thereby completing the software updating of the electronic control units that are the objects of updating. Note that the “software updating processing” in the present specification is not limited to processing in which downloading, installation, and activation are all performed continuously, and includes processing of performing only part of the downloading, installation, and activation. Control processing that the server1and the software updating device11execute will be described below. FIG.8is a flowchart showing an example of control processing that the server according to the embodiment executes. The control processing shown inFIG.8is repeatedly executed at the server1at predetermined time intervals, for example. In step S1, the communication unit28determines whether a confirmation request for whether there is update data has been received from the software updating device11. When the determination in step S1is YES, the processing advances to step S2, and otherwise, the processing advances to step S3. 
In step S2, the communication unit28determines whether there is update data for the vehicle in which the software updating device11that transmitted the confirmation request is installed, and transmits information indicating whether there is update data to the software updating device11. Whether there is update data can be determined based on management information stored in the storage device23or another server connected to the server1. Thereafter, the processing advances to step S3. In step S3, the communication unit28determines whether a download request for a distribution package has been received from the software updating device11. When the determination in step S3is YES, the processing advances to step S4, and otherwise, the processing advances to step S5. In step S4, the communication unit28generates a distribution package including update data for updating the software of the electronic control units of the vehicle that transmitted the download request and prerequisite condition information corresponding to this vehicle, and transmits the generated distribution package to the software updating device11. Thereafter, the processing advances to step S5. In step S5, the communication unit28determines whether error information has been received from the software updating device11. When the determination in step S5is YES, the processing advances to step S6, and otherwise, the processing advances to step S7. In step S6, the communication unit28causes the second storage unit27to store the received error information. Thereafter, the processing advances to step S7. In step S7, the prerequisite condition information stored in the first storage unit26is set (updated) by the control unit29based on the error information stored in the second storage unit27. Note that in the example inFIG.8, setting of prerequisite condition information based on error information is periodically executed at a predetermined cycle, but setting processing of the prerequisite condition information may be performed based on error information when instructed by an operator at the center, a mechanic at a dealer, or the like. Thereafter, the processing advances to step S1. FIG.9is a flowchart showing an example of control processing that the software updating device according to the embodiment executes. The control processing shown inFIG.9is processing that is started with the power source or ignition of the vehicle being turned on, for example, as a trigger. In step S11, the communication unit37transmits a confirmation request to the server1requesting confirmation of whether there is update data for the electronic control units13athrough13dconnected to the software updating device11. Thereafter, the processing advances to step S12. In step S12, the communication unit37receives the confirmation results acquired from the server1. Thereafter, the processing advances to step S13. In step S13, the control unit39determines whether there is update data, based on the confirmation results acquired from the server1. When the determination in step S13is YES, the processing advances to step S14, and otherwise, the processing ends. In step S14, the communication unit37executes downloading processing. More specifically, the communication unit37transmits a download request for a distribution package to the server1, receives the distribution package transmitted in response to the download request, and stores the received distribution package in the storage unit38. 
The communication unit37verifies the authenticity of the update data included in the received distribution package. Determination of whether execution of downloading is permissible, and notification to the server1that downloading is completed, may be performed in step S14. Thereafter, the processing advances to step S15. In step S15, the control unit39determines whether all of the prerequisite conditions included in the prerequisite condition information acquired from the server1are satisfied. When determining that all prerequisite conditions are satisfied in step S15, the processing advances to step S16, and otherwise, advances to step S18. In step S16, the control unit39executes installation processing and activation processing as to the electronic control units that are the objects of updating. The installation processing in step S16includes processing of requesting the user for consent to install, and accepting input of the consent, transferring update data from the software updating device11to the electronic control units that are the objects of updating, requesting the electronic control units that are the objects of updating to perform installation, requesting verification of installation to the electronic control units that are the objects of updating, and so forth. The electronic control units that are the objects of updating use the update data received from the software updating device11and install the update version of the software in a storage region. The activation processing includes processing of requesting the user for consent to activate, and accepting input of the consent, requesting the electronic control units that are the objects of updating to perform activation, and so forth. The electronic control units that are the objects of updating switch the software to be executed to the update-version software, thereby enabling and starting the update-version software. Thereafter, the processing advances to step S17. In step S17, the control unit39determines whether the software updating processing is completed. When the determination in step S17is YES, the processing ends, and otherwise, the processing advances to step S19. In step S18, the control unit39determines whether a predetermined amount of time has elapsed from the point in time at which the determination in step S15was first made. Even when not all prerequisite conditions are satisfied in the determination in step S15, prerequisite conditions not satisfied conceivably may be satisfied later. For example, a case can be assumed in which one prerequisite condition that “shift range is P range” is not satisfied at the time of determination in step S15, but the user soon shifts the shift range to the P range. In this case, executing installation and activation at the point in time when all prerequisite conditions are satisfied, rather than processing as an error (i.e., neither installation nor activation can be executed since all prerequisite conditions are not satisfied) based on the determination in step S15is considered to suit the convenience of the user. Accordingly, in the present embodiment, a step S18is provided, and the determination of prerequisite conditions in step S15is repeated as long as a predetermined amount of time has not elapsed from the first determination in step S15. When the determination in step S18is YES, the processing advances to step S21, and otherwise, the processing advances to step S15. 
Note that when the determination in step S18is NO, there may be a predetermined interval time for standing by before advancing to step S15. In step S19, the control unit39determines whether all prerequisite conditions included in the prerequisite condition information acquired from the server1are satisfied. The determination processing in step S19is processing for confirming that all prerequisite conditions continue to be satisfied after the installation and activation in step S16are started. When determination is made in step S19that not all prerequisite conditions are satisfied, the processing advances to step S20, and otherwise, the processing advances to step S17. In step S20, the control unit39cancels the installation and activation. Thereafter, the processing advances to step S21. In step S21, the communication unit37generates error information including the prerequisite conditions regarding which the control unit39determined in step S19not to be satisfied, and transmits the generated error information to the server1. Thereafter, the processing ends. As described above, the server1according to the present embodiment stores prerequisite conditions to be satisfied by the vehicle at the time of updating software, and transmits the prerequisite conditions in response to a request from the software updating device11. Accordingly, the prerequisite conditions to be satisfied by the vehicle at the time of updating software can be set in accordance with the configuration of the vehicle. As a specific example, assumption will be made that there is a vehicle of model A that has a relatively large battery capacity, and a vehicle of model B that has a relatively small battery capacity, and the same state of charge (SOC) value (battery state of charge) is set as a prerequisite condition for software updating. In this case, even when the SOC value is below the SOC value set as the prerequisite condition, there may be sufficient electric power to execute the software updating processing actually left in the battery of the vehicle of model A, since the battery capacity is large. In this case, setting the prerequisite conditions uniformly for model A and model B may result in software updating processing not being performed in a timely manner for model A. According to the present embodiment, prerequisite conditions for software updating processing can be set at the server1for each vehicle or each model, and the prerequisite condition information is transmitted in response to a download request from the software updating device11. Accordingly, prerequisite conditions necessary for software updating processing can be centrally managed at the server1side, and the prerequisite conditions can be changed as appropriate. Also, the server1can accumulate the prerequisite conditions that the software updating device11determines not to be satisfied at the time of software updating processing, as error information for each vehicle, and accordingly the server1can manage the state of errors occurring at the time of software updating in each vehicle, and can handle each vehicle individually (individual software updating processing) based on the error information. Also, the software updating device11according to the present embodiment acquires prerequisite conditions to be satisfied at the time of software updating processing from the server1, and performs software updating processing when the acquired prerequisite conditions are satisfied. 
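As a simplified, non-authoritative rendering of steps S15 through S21 of FIG. 9, the sketch below retries the prerequisite check until a time window expires, keeps re-checking the conditions while installation and activation are in progress, and cancels and reports error information if a condition stops holding. The callables, polling interval, and window length are all assumptions.

```python
# Hypothetical sketch of steps S15-S21: wait (up to a time window) for all
# prerequisite conditions, start installation/activation, keep confirming the
# conditions while the update runs, and cancel and report error information if
# one of them stops being satisfied. Callables and timing values are assumed.
import time

def run_device_update(check_prereqs,            # returns a list of failed condition names
                      start_install_and_activate,
                      is_completed,
                      cancel,
                      report_error,
                      window_s: float = 300.0,
                      poll_s: float = 1.0) -> str:
    deadline = time.monotonic() + window_s

    # S15/S18: repeat the prerequisite determination until satisfied or timed out.
    while True:
        failed = check_prereqs()
        if not failed:
            break
        if time.monotonic() >= deadline:
            report_error(failed)                # S21: send error information to the server
            return "error: prerequisites not satisfied within the time window"
        time.sleep(poll_s)

    start_install_and_activate()                # S16: installation and activation

    # S17/S19/S20: until completion, confirm the prerequisites still hold.
    while not is_completed():
        failed = check_prereqs()
        if failed:
            cancel()                            # S20: cancel installation and activation
            report_error(failed)                # S21
            return "error: cancelled, a prerequisite is no longer satisfied"
        time.sleep(poll_s)
    return "update completed"

# Toy usage in which everything succeeds immediately.
print(run_device_update(check_prereqs=lambda: [],
                        start_install_and_activate=lambda: None,
                        is_completed=lambda: True,
                        cancel=lambda: None,
                        report_error=lambda failed: None))
```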
Accordingly, the server1can set prerequisite conditions to be satisfied by the vehicle at the time of updating software in accordance with the configuration of the vehicle. Also, the software updating device11can transmit prerequisite conditions determined not to be satisfied at the time of software updating processing to the server as error information, and accordingly the state of errors at the time of software updating processing at the vehicle can be aggregated at the server1. Also, when one of the prerequisite conditions is no longer satisfied after the software updating processing is started, the software updating device11cancels the software updating, and accordingly a situation where the software updating processing is continued under inappropriate conditions can be suppressed from occurring. Also, even when the software updating device11determines that not all necessary prerequisite conditions are satisfied at the time of software updating processing, software updating is started when the prerequisite conditions are satisfied before a predetermined amount of time elapses, and accordingly opportunities to enable software updating can be sufficiently secured. The functions of the server1exemplified as an embodiment can also be realized as an updating management method executed by a computer provided with a processor (CPU), a memory, and a storage device, or as an updating management program for the computer to execute, and as a computer-readable non-transitory storage medium storing the updating management program. In the same way, the functions of the software updating device11exemplified as an embodiment can also be realized as an update control method executed by an onboard computer provided with a processor (CPU), a memory, and a storage device, as an update control program for the onboard computer to execute, and as a computer-readable non-transitory storage medium storing the update control program. In the above embodiment, an example has been described in which the software updating device11provided in an onboard network at the vehicle side performs software updating control of all of the electronic control units13athrough13d, as a master device, but an arrangement may be made where one of the electronic control units13athrough13dhas the update control functions shown inFIGS.8and9, and software updating of the other electronic control units is controlled thereby, instead of the software updating device11being provided. Also, an arrangement may be made where the update control functions shown inFIGS.8and9are provided to external equipment that is capable of wired connection with the onboard network2, and software updating processing of the electronic control units13athrough13dis performed using this external equipment, instead of providing the software updating device11. The technology of the present disclosure can be used in network systems for updating software of electronic control units. A server according to an aspect of the present disclosure includes: a storage device storing prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; and one or more processors configured to transmit the prerequisite condition information to the vehicle based on a request from the vehicle. 
An updating management method according to another aspect of the present disclosure is executed by a computer provided with a processor, memory, and a storage device. The updating management method includes: storing, in the storage device, prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; and transmitting the prerequisite condition information to the vehicle, based on a request from the vehicle. A non-transitory storage medium according to another aspect of the present disclosure stores a program that is executable by a computer provided with a processor, a memory, and the non-transitory storage medium, and that causes the computer to perform an updating management method comprising: storing, in the non-transitory storage medium, prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; and transmitting the prerequisite condition information to the vehicle, based on a request from the vehicle. A software updating device according to another aspect of the present disclosure includes: one or more processors configured to: receive, from a server, prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; determine whether the prerequisite conditions included in the prerequisite condition information acquired by the one or more processors are satisfied; and execute updating of the software of the electronic control unit when determining that all of the prerequisite conditions are satisfied. A center according to another aspect of the present disclosure includes: a storage device storing prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; and one or more processors configured to transmit the prerequisite condition information to the vehicle based on a request from the vehicle. An over-the-air (OTA) master according to another aspect of the present disclosure includes: one or more processors configured to: receive, from a center, prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; determine whether the prerequisite conditions included in the prerequisite condition information acquired by the one or more processors are satisfied; and start updating of the software of the electronic control unit when determining that all of the prerequisite conditions are satisfied.
40,607
11861348
DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure are described in detail with reference to the accompanying drawings. In adding reference numerals to components of each drawing, it should be noted that the same or equivalent components have the same reference numerals, although they are indicated on another drawing. In describing the embodiments of the present disclosure, detailed descriptions associated with well-known functions or configurations have been omitted to avoid unnecessarily obscuring subject matter of the present disclosure. In describing elements of embodiments of the present disclosure, the terms first, second, A, B, (a), (b), and the like may be used herein. These terms are only used to distinguish one element from another element, but do not limit the corresponding elements irrespective of the nature, order, or priority of the corresponding elements. Furthermore, unless otherwise defined, all terms including technical and scientific terms used herein should be interpreted as is customary in the art to which the present disclosure belongs. It should be understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of the present disclosure and the relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. When a component, device, element, or the like of the present disclosure is described as having a purpose or performing an operation, function, or the like, the component, device, or element should be considered herein as being “configured to” meet that purpose or to perform that operation or function. Hereinafter, various embodiments of the present disclosure are described in detail with reference toFIGS.1-8. FIG.1is a block diagram illustrating an entire system for an update system of a vehicle controller according to an embodiment of the present disclosure.FIG.2is a block diagram of an update system of a vehicle controller according to an embodiment of the present disclosure. Referring toFIGS.1and2, an update system of the vehicle controller according to an embodiment of the present disclosure may include an over-the-air (OTA) management server100, a communication control unit (CCU)200, and a battery sensor300. The OTA management server100may collect vehicle information, battery information, and a state-of-charge (SOC) value changed when a controller in each vehicle is updated using an OTA service. The OTA management server100may also group the vehicle information and the battery information that indicate similar SOC change rates to each other. The OTA management server100may also extract a SOC change rate pattern of a group having a type similar to that of a vehicle to be updated and that of a battery mounted in the vehicle as an optimal pattern for calculating an expected value of a remaining SOC value. The CCU200may calculate the expected value of the remaining SOC value based on the SOC change rate pattern extracted from the OTA management server, may determine whether to update the controller, and may update the controller. The battery sensor300may be connected to the battery of the vehicle to measure an SOC value of the battery changed when the controller is updated and to transmit the measured SOC value of the battery to the CCU. 
When new software (S/W) to be mounted on each of various controllers in a vehicle is received, the OTA management server100may store the new software (S/W) to manage the version of firmware (S/W). In order to simultaneously update a plurality of controllers by using the newly-received software (S/W) in a bundle, the OTA management server100may manage an update event for integrating updates of such the controllers in a bundle. Moreover, as illustrated inFIG.3, the OTA management server100may include a vehicle information management device110, a battery information management device120, and an SOC change amount management device130. The vehicle information management device110may collect and store vehicle information from the CCU200provided in each vehicle actually being driven. The battery information management device120may collect and store information about a battery installed in each vehicle that receives the vehicle information. The SOC change amount management device130may collect and store an SOC value for grasping the consumption degree of a battery consumed when a controller is updated by using the OTA service in each vehicle that receives the battery information. At this time, as illustrated inFIG.3, the vehicle information management device110may receive various pieces of information associated with a vehicle or driving, such as vehicle identification number (VIN) of each vehicle actually being driven on a road by users, a specification (option) of a controller applied to each vehicle, a distance driven of each vehicle, and driving habits from the CCU200provided in each vehicle. The vehicle information management device110may also store the various pieces of information in a database. As such, in addition to obtaining and storing the VIN and the distance driven as data for determining whether a vehicle itself is aging, the vehicle information management device110may obtain and store a specification (option) of a controller installed in each vehicle and a user's driving habits (duration per trip or average number of trips per week) as data that affects battery consumption. In other words, the user's driving habits, such as the duration per trip or the average number of trips per week may be an important factor affecting an actual SOC change because affecting the charging/discharging performance of a battery and battery aging. As such, the vehicle information management device110may obtain and store data for determining whether each vehicle is aging and data capable of being a factor affecting battery consumption. Accordingly, when the SOC change amount management device130analyzes a SOC change amount stored in the database, the vehicle information management device110may easily group pieces of vehicle information of vehicles indicating similar SOC change patterns to one another. Moreover, as illustrated inFIG.3, the battery information management device120may receive various pieces of information associated with a battery such as a maker, a type (AGM or Flooded), a production date, or capacity of the battery installed in each vehicle from the CCU and then may store the various pieces of information in the database. As such, the battery information management device120may obtain and store data for determining the basic performance of a battery itself and whether the battery is aging. 
Accordingly, when the SOC change amount management device130analyzes an SOC change amount stored in the database, the battery information management device120may easily group pieces of battery information indicating similar SOC change patterns. Moreover, while the controller is being updated using the OTA service, the SOC change amount management device130may receive, from the CCU200, the SOC value obtained through a battery sensor to store the SOC value cumulatively. The SOC change amount management device130may also generate and store a change graph indicating a degree of SOC reduced as each controller is being updated. At this time, the SOC value received from the CCU200refers to a value measured by the battery sensor300while a controller is being updated using the OTA service in a state of constant power (B+) of a battery after a vehicle is turned off (KEY OFF). As such, the SOC change amount management device130may generate the change graph composed of SOC values before and after the update and may derive a SOC change rate that occurs at a point in time when each controller is updated, by using the degree of change in the slope of the graph. At this time, in addition to deriving the SOC change rate of each controller when an update using the OTA service is an update event in which a plurality of controllers are simultaneously updated in a bundle, as illustrated inFIG.4A,FIG.4B, andFIG.4C, it is also possible to derive a continuous SOC change rate while all of the controllers included in the update event are sequentially updated. InFIG.4A,FIG.4B, andFIG.4C, in a case of event 1 in which controllers A, B, and C are updated, it is indicated that various SOC change rates are derived depending on a vehicle or an installed battery as illustrated in the drawings from (a) to (n). In a case of event 2 in which controllers B, C, D, and E are updated, and event 3 in which controllers B and C are updated, it is indicated that various SOC change rates are derived depending on a vehicle or an installed battery as illustrated in drawings from (a) to (n). Also, the OTA management server100may further include a SOC change rate grouping device140that integrally matches the SOC change rate with vehicle information and battery information when a controller is updated using the OTA service, groups together a vehicle information type and a battery information type that indicate similar SOC change rate patterns, and stores the grouped result in a database. The SOC change rate grouping device140may divide the vehicle information into a plurality of types (A1 type, A2 type, A3 type to An type, and the like) indicating identical or similar conditions to one another as contents (whether there is commonality such as a vehicle model, a vehicle year, a distance driven, a controller specification, a user's driving habits, and the like) and may store the divided result. Likewise, the SOC change rate grouping device140may divide the battery information into a plurality of types (B1 type, B2 type, B3 type to Bn type, and the like) having identical or similar conditions to one another as contents (whether there is commonality such as a maker, type, production date, capacity, and the like) and may store the divided result. Furthermore, the SOC change rate may also be divided into a plurality of types (C1 type, C2 type, C3 type to Cn type, and the like) indicating similar patterns in which the degree of reduction during the update is within a specific range. 
As such, the SOC change rate grouping device140may generate a SOC change rate, which occurs during an update using an OTA service in a vehicle having a specific vehicle information type and a specific battery information type, as a SOC change rate pattern of each group, by matching and grouping the divided vehicle information type, the divided battery information type, and the divided SOC change rate type. As such, such the generated SOC change rate pattern may be generated based on the SOC change rate that occurs while each controller is updated using the OTA service in the vehicle driving in an actual road. Accordingly, the SOC change rate pattern that occurs in another vehicle having vehicle information and battery information of a type similar to that of the corresponding vehicle may be applied with higher similarity. At this time, the SOC change rate grouping device140may first generate an event group based on the number, types, and update order of controllers that are sequentially updated through a single update process. Besides, as well as the SOC change rate pattern for each event that occurs when a plurality of controllers are sequentially updated for each event group, a SOC change rate pattern for each controller that belongs to each event may also be generated. As such,FIG.5is a graph illustrating SOC change rate patterns of several groups generated by integrating the vehicle information, the battery information, and the SOC change rate through the SOC change rate grouping device. FIG.5illustrates an SOC change rate pattern (in each graph, a SOC pattern is expressed in bold as compared to other lines) capable of being generated depending on a vehicle information type and a battery information type in a case of event 1 group, in which controllers A, B, and C are updated, together with example graphs for each group (Group 1, Group 2, and Group 3 to Group N). At this time, in addition to event 1 group illustrated inFIG.5, it is natural that a plurality of groups are capable of being generated even within event 2 group in which controllers B, C, D, and E disclosed inFIG.4Bare updated. A plurality of groups are capable of being generated even within event N group in which controllers B and C are updated. FIG.5illustrates all of the vehicle information, the battery information, and the SOC change rate of each group are implemented in different types (A1 B1 C1 type, A2 B2 C2 type, A3 B3 C3 type, . . . , An Bn Cn type, and the like), but are not limited thereto. A group that has the same vehicle information type and the same battery information type and has only the different SOC change rate type may be generated. A group that has a type, in which at least one of the vehicle information type or the battery information type is different and has the same SOC change rate may also be generated. Moreover, the SOC change rate pattern generated by the SOC change rate grouping device140refers to a SOC reduction rate while the update is in progress. Thus, initial SOC values at a point in time when updates are started may be different from one another. Accordingly, inFIG.5, all dotted lines or thin straight lines, each of which has the different location of an SOC start point and decreases with a similar pattern, are illustrated in addition to the SOC change rate pattern expressed in bold in each group. 
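The grouping idea can be illustrated roughly as follows: an observed SOC change rate is filed under a key made of the event, a vehicle information type, a battery information type, and a change rate type, and each group keeps a representative pattern (here, the mean slope). The concrete type-assignment rules below (mileage buckets of 50,000 km, change-rate buckets of 0.05 %/min, and so on) are invented for illustration and are not taken from the disclosure.

```python
# Hedged sketch of the grouping described above. Type-assignment rules and all
# example values are assumptions.
from collections import defaultdict

def vehicle_type(info: dict) -> str:
    return f"A:{info['model']}|{info['model_year']}|{info['mileage_km'] // 50_000}"

def battery_type(info: dict) -> str:
    return f"B:{info['maker']}|{info['kind']}|{info['capacity_ah']}Ah"

def change_rate_type(rate_pct_per_min: float, bucket: float = 0.05) -> str:
    return f"C:{round(rate_pct_per_min / bucket)}"

groups: dict[tuple, list[float]] = defaultdict(list)

def add_observation(event_id: str, vinfo: dict, binfo: dict, rate: float) -> None:
    key = (event_id, vehicle_type(vinfo), battery_type(binfo), change_rate_type(rate))
    groups[key].append(rate)

add_observation("event1",
                {"model": "modelA", "model_year": 2021, "mileage_km": 42_000},
                {"maker": "makerX", "kind": "AGM", "capacity_ah": 70},
                -0.22)

for key, rates in groups.items():
    print(key, "-> pattern slope:", sum(rates) / len(rates), "%/min")
```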
Moreover, the OTA management server100may further include an optimal pattern suggesting device150that extracts, from a database, one group among groups belonging to an event having the highest similarity with an event for updating controllers by using an OTA service, and then provides the CCU with the SOC change rate pattern matched to the one group as an optimal pattern that is a criterion for determining whether to start the update in the vehicle. At this time, first of all, the optimal pattern suggesting device150may select an event group for extracting an optimal SOC change rate pattern by determining whether an update event using the OTA service is similar to each event, based on the number, types, and update order of controllers belonging to each event. As such, the optimal pattern suggesting device150may specify a group based on the vehicle information type and the battery information type of the corresponding vehicle within the selected event group. The optimal pattern suggesting device150then may extract the SOC change rate pattern matched to the specified group as an optimal pattern for determining whether to start an update in the corresponding vehicle to provide the optimal pattern to the CCU200. Also, the optimal pattern suggesting device150may derive an actual SOC change rate by receiving changes in SOC values respectively measured before and after the update of each controller belonging to the corresponding update event from the CCU200. The optimal pattern suggesting device150may also re-specify a group indicating a pattern most similar to a pattern of the derived actual SOC change rate. The optimal pattern suggesting device150may also re-extract the SOC change rate pattern matched to the group as an optimal pattern for determining whether to continue the update and may provide the optimal pattern to the CCU200. The CCU200may include an SOC change rate pattern applying device210that receives an SOC change rate pattern transmitted by the OTA management server100to calculate the degree of SOC, which is reduced until the update event is completed, depending on the SOC change rate pattern and compares the calculated remaining SOC value with a reference SOC value to determine whether to perform the update. At this time, the SOC change rate pattern applying device210may receive, from the optimal pattern suggesting device150of the OTA management server, an SOC change rate pattern matched to a group having a high similarity with an update event to be performed and a high similarity with a vehicle information type and a battery information type of the corresponding vehicle. Accordingly, the accuracy of determination about the degree of SOC reduction of a battery may be improved while the update event is being performed. The SOC change rate pattern applying device210may calculate a remaining SOC value that remains at the end of the update when the remaining SOC value decreases with a change rate on the pattern, by matching a current SOC value obtained from the battery sensor300to an SOC change rate pattern. As such, the calculated remaining SOC value may be compared with the reference SOC value required to perform basic functions of a vehicle. When the remaining SOC value is greater than the reference SOC value, it may be determined that the update is performed. Otherwise, the execution of the update may be suspended. Accordingly, it may be determined whether to perform the update. 
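A minimal sketch of this start-of-update decision, assuming the group's pattern can be summarized as a constant change rate and the event has a known expected duration, is given below. The 65% reference corresponds to the example threshold mentioned later for the update approval decision; the other numbers are illustrative.

```python
# Minimal sketch: project the current SOC to the end of the update event along
# the pattern's slope and compare against the reference SOC. Values are examples.
def expected_remaining_soc(current_soc: float,
                           rate_pct_per_min: float,
                           expected_duration_min: float) -> float:
    """Project the SOC at the end of the update event along the pattern's slope."""
    return current_soc + rate_pct_per_min * expected_duration_min

def should_start_update(current_soc: float, rate: float, duration: float,
                        reference_soc: float = 65.0) -> bool:
    """Start only if the projected remaining SOC stays above the reference SOC."""
    return expected_remaining_soc(current_soc, rate, duration) > reference_soc

# An update event expected to take 40 minutes at -0.22 % per minute.
print(should_start_update(current_soc=82.0, rate=-0.22, duration=40.0))  # True
print(should_start_update(current_soc=70.0, rate=-0.22, duration=40.0))  # False
```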
Moreover, as illustrated inFIG.6, the CCU200may further include an SOC change monitoring device220that receives SOC values, which are respectively measured before and after the update of each controller on the update event to be performed when it is determined to perform the update, from the battery sensor and then transmits the SOC values to the OTA management server. The SOC change monitoring device220may transmit, to the OTA management server100, the SOC values, which are respectively measured before and after the update of each controller, as basic data for re-extracting the SOC change rate pattern for predicting the remaining SOC value that remains after decreasing during the update event. Accordingly, the optimal pattern suggesting device150of the OTA management server may re-select a group indicating the most similar change rate pattern based on a change in the actual SOC value reduced during the update of each controller. The optimal pattern suggesting device150may also provide the SOC change rate pattern matched to the selected group as an optimal pattern for determining whether to continue the update. Besides, the CCU200may further include an update continuation determining device230that re-calculates a degree of SOC, which is reduced until all the remaining controllers belonging to the update event are updated, as a new optimal pattern through the received SOC change rate pattern, compares the re-calculated remaining SOC value with the reference SOC value again, and determines whether to continue the update. The update continuation determining device230may compare the re-calculated remaining SOC value with the reference SOC value again. When the remaining SOC value is greater than the reference SOC value, the update continuation determining device230may continue the update. Otherwise, the update continuation determining device230may suspend the execution of the update. Accordingly, the update continuation determining device230may continue to maintain the remaining SOC value of the battery so as to be greater than or equal to an appropriate level regardless of whether to continue the update. As such, the update continuation determining device230may determine whether to proceed with an update by primarily predicting the degree of SOC reduction, which occurs during the update in the corresponding vehicle through the type of an update event and vehicle information and battery information, through the SOC change rate pattern extracted by the OTA management server. After the update is started, the update continuation determining device230may measure the degree of SOC reduction that actually occurs and may secondarily correct and predict the degree of SOC reduction through the SOC change rate pattern matched to a new group indicating the most similar pattern based on the measured degree of SOC reduction. Accordingly, when the remaining SOC value calculated using the SOC change rate pattern of group 1 (Group 1) provided from the OTA management server100exceeds the reference SOC value upon performing an update event in which controllers A, B, and C are updated, the first controller A may be updated and the SOC change monitoring device220may measure SOC values respectively before and after controller A is updated and then may transmit the SOC values to the OTA management server100. 
Afterward, when it is determined that a change pattern of an SOC value generated when controller A is updated is more similar to a change pattern when controller A in group 2 (Group 2) is updated, the optimal pattern suggesting device150of the OTA management server may re-extract an SOC change rate pattern matched to group 2 (Group 2) as an optimal pattern for determining whether to continue the update and may provide the optimal pattern to the CCU200. Furthermore, the update continuation determining device230may re-calculate a remaining SOC value when the updates of controller B and controller C are completed, based on a current SOC value in the state where the update of controller A is completed, by using the SOC change rate pattern extracted again as a new optimal pattern. The update continuation determining device230may determine whether to continue the update by comparing the re-calculated remaining SOC value with the reference SOC value again. Afterward, when the update of controller A has been completed even after the update of controller B is completed, the SOC change monitoring device220may transmit, to the OTA management server100, SOC values respectively before and after controller B is updated. The optimal pattern suggesting device150of the OTA management server may determine whether to re-extract the SOC change rate pattern based on the SOC values. As described above, the update continuation determining device230receiving the SOC change rate pattern re-extracted by the optimal pattern suggesting device150may determine whether to continue the update, by re-calculating the remaining SOC value depending on the SOC change rate pattern. While this process is repeated, the update continuation determining device230may accurately predict the remaining SOC value after the update using the OTA service is completed. Accordingly, it is possible to prevent the remaining SOC value from falling below the reference SOC value during the update process. Thus, unexpected situations such as failure of update due to power cut-off during update or the lack of start-up voltage after an update is completed may be prevented. It is possible to increase the success rate of controller update using the OTA service without unnecessarily setting a large margin, by calculating the remaining SOC value more accurately. Besides, the update may be started even when a current SOC value of a battery reaches a specific level. Thus, the performance rate of the update may be increased. Next, an update control method of a vehicle controller according to another embodiment of the present disclosure is described with reference toFIGS.7and8. FIG.7is a block diagram of an update control method of a vehicle controller according to another embodiment of the present disclosure.FIG.8is a flowchart illustrating an update process of a vehicle controller according to another embodiment of the present disclosure. Referring toFIGS.7and8, the update control method of a vehicle controller according to another embodiment of the present disclosure may include a SOC change rate pattern building step S100, a SOC optimal pattern extracting step S200, and an update start determining step S300. 
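The A, B, and C example above, in which the pattern switches from Group 1 to Group 2, can be mimicked end to end by the toy loop below: before each controller, the remaining SOC at the end of the event is projected from the currently selected group pattern, and after each controller the measured change rate is used to re-select the closest group pattern. All group slopes, per-controller durations, and SOC values are fabricated, and the 65% reference corresponds to the example threshold used later in the update start determining step.

```python
# Fabricated walk-through of the A -> B -> C example: project, update, measure,
# re-select the closest group pattern, and re-check before continuing.
group_patterns = {"Group 1": -0.18, "Group 2": -0.25}     # % per minute
durations_min = {"A": 15.0, "B": 12.0, "C": 13.0}
REFERENCE_SOC = 65.0

def closest_group(observed_rate: float) -> str:
    return min(group_patterns, key=lambda g: abs(group_patterns[g] - observed_rate))

def may_continue(current_soc: float, rate: float, remaining: list[str]) -> bool:
    projected = current_soc + rate * sum(durations_min[c] for c in remaining)
    return projected > REFERENCE_SOC

soc = 82.0
pattern = "Group 1"        # pattern initially provided by the OTA management server
controllers = ["A", "B", "C"]
for i, ecu in enumerate(controllers):
    if not may_continue(soc, group_patterns[pattern], controllers[i:]):
        print(f"update suspended before controller {ecu}")
        break
    soc_before = soc
    soc += group_patterns["Group 2"] * durations_min[ecu]  # simulated actual consumption
    observed = (soc - soc_before) / durations_min[ecu]
    pattern = closest_group(observed)                      # server re-extracts the pattern
    print(f"controller {ecu} updated: SOC {soc:.2f}%, pattern -> {pattern}")
else:
    print("update event completed")
```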
The SOC change rate pattern building step S100may include collecting a SOC value changed depending on the power consumed upon updating a controller by using an OTA service in each vehicle and then grouping vehicle information and battery information, which indicate a similar SOC change rate to store the grouped result in a database of an OTA management server. The SOC optimal pattern extracting step S200may include extracting an SOC change rate pattern of a group, which has a type similar to that of a vehicle to be updated and that of a battery mounted in the vehicle, as an optimal pattern for calculating an SOC reduction expected value when an update event occurs. The update start determining step S300may include determining whether to update a controller and proceeding with the update after calculating, by a CCU of each vehicle receiving the SOC change rate pattern, a remaining SOC value after an update is completed. The SOC change rate pattern building step S100may include a vehicle information registering procedure S110, a battery information registering procedure S120, and a SOC change amount storing procedure S130. The vehicle information registering procedure S110may include a step of collecting vehicle information from the CCU installed in each vehicle and storing the vehicle information in a database. The battery information registering procedure S120may include a step of collecting information about a battery mounted in each vehicle receiving vehicle information and storing the battery information in the database. The SOC change amount storing procedure S130may include a step of accumulating and storing a SOC value for grasping the consumption level of a battery, which has been consumed upon updating a controller by using the OTA service in each vehicle receiving the battery information, collecting a SOC change rate changed over time, and storing the SOC change rate in the database. At this time, the vehicle information registering procedure S110may include a step of collecting and storing data for determining whether a vehicle is aging and data regarding important factors affecting consumption of a battery, such as VIN of each vehicle actually being driven on a road by users, a specification of a controller, a distance driven of each vehicle, driving habits (a duration per trip, an average number of trips per week, or the like) from the CCU of each vehicle. Moreover, the battery information registering procedure S120may include a step of collecting and storing data for grasping the basic performance and aging of the battery, such as a maker, type, production date, and capacity of a battery installed in each vehicle, from the CCU of each vehicle. Moreover, the SOC change amount storing procedure S130may include a step of receiving, from the CCU, an SOC value obtained through a battery sensor provided in each battery while a controller is being updated using the OTA service, storing the SOC value cumulatively in the database, generating a change graph indicating a degree of SOC reduced as each controller is being updated, and storing the change graph in the database. As such, the change graph indicating the degree of SOC reduction may be generated in the SOC change amount storing procedure S130, and thus the SOC change rate continuously generated while each controller is updated or while a plurality of controllers are sequentially updated may be derived by the slope of the change graph. 
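One plausible way to derive that change rate from the stored SOC values is a least-squares slope over the samples logged during one controller's update, as in the sketch below; the sampling format (time in minutes, SOC in percent) is an assumption made for illustration.

```python
# Least-squares slope of SOC versus time over one update, as one way of reading
# "derived by the slope of the change graph". Sample values are illustrative.
def soc_change_rate(samples: list[tuple[float, float]]) -> float:
    """Slope of SOC (%) versus time (minutes) over the logged samples."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_s = sum(s for _, s in samples) / n
    num = sum((t - mean_t) * (s - mean_s) for t, s in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den if den else 0.0

# SOC sampled before, during, and after updating one controller.
samples = [(0.0, 80.0), (5.0, 78.9), (10.0, 77.8), (15.0, 76.7)]
print(round(soc_change_rate(samples), 3))   # about -0.22 % per minute
```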
Also, the SOC change rate pattern building step S100may further include an SOC change rate grouping procedure S140. The SOC change rate grouping procedure S140may include a step of integrally matching the SOC change rate with vehicle information and battery information, grouping together a vehicle information type and a battery information type that indicate a similar SOC change rate pattern, and storing the grouped result in a database. As such, the SOC change rate grouping procedure S140may include a step of generating the SOC change rate, which is generated during an update in a vehicle having a specific vehicle information type and a specific battery information type, as an SOC change rate pattern of each group, by matching and grouping a plurality of vehicle information types, battery information types, and SOC change rate types that have identical or similar conditions to one another. Moreover, the SOC optimal pattern extracting step S200may include a step of extracting one group among the event groups with high similarity with an event to be updated using the OTA service from the database and then providing a SOC change rate pattern matched to the extracted one group as an optimal pattern, which is a criterion for determining whether to start the update in the corresponding vehicle, to the CCU, when an update event for a controller occurs. At this time, as illustrated inFIG.8, in the SOC optimal pattern extracting step S200, first of all, when a new version of a controller's software (S/W) is recognized, the OTA management server may receive information about a new update event to be performed from a vehicle control center. The OTA management server may also determine whether the new update event is similar to an update event of the event group stored in the database, based on the number, types, and update order of controllers belonging to each event. The OTA management server may also select an event group for extracting the SOC change rate pattern. Furthermore, a group may be specified based on the vehicle information type and the battery information type of the corresponding vehicle within the selected event group. The SOC change rate pattern matched to the specified group may be extracted as an optimal pattern for the corresponding vehicle to provide the optimal pattern to the CCU200. The update start determining step S300may include a remaining SOC value calculating procedure S310and an SOC value comparing procedure S320. The remaining SOC value calculating procedure S310may include a step of applying a current SOC value obtained from a battery sensor to the SOC change rate pattern received from the OTA management server and calculating the remaining SOC value, that is, the value to which the SOC is expected to decrease by the time the update event is completed. The SOC value comparing procedure S320may include a step of comparing the calculated remaining SOC value with the reference SOC value to determine whether to perform the update. To this end, in the remaining SOC value calculating procedure S310, the CCU that receives a SOC change rate pattern from the OTA management server may substitute the current SOC value obtained from the battery sensor for the SOC value before a controller is updated on the SOC change rate pattern. The CCU may also calculate the remaining SOC value, that is, the value to which the SOC is expected to decrease, following the slope of the corresponding SOC change rate pattern, by the time the update event ends. 
Furthermore, in the SOC value comparing procedure S320, the remaining SOC value calculated using the SOC change rate pattern may be compared with the reference SOC value required to perform basic functions of a vehicle. When the remaining SOC value is greater than the reference SOC value, the update may be performed. Otherwise, the execution of the update may be suspended. Accordingly, it may be determined whether to perform the update. As such, in an embodiment in which the update start determining step is executed, inFIG.8, when the remaining SOC value, which is the battery SOC after an update is completed, is greater than 65% of the maximum charge value, it is possible to display an update approval window through the audio video navigation (AVN) of a vehicle by determining to proceed with the update. Accordingly, it is indicated that a driver selects whether to proceed with the update. It is natural that the reference SOC value being 65% of the maximum charging value shown in the embodiment is changeable. Afterward, it is natural that the driver is capable of terminating the update without proceeding with the update. However, when the driver selects proceeding with the update, the update may be performed while the software (S/W) for the first controller among controllers to be updated, which belong to the update event, is received from the OTA management server. Moreover, the update control method of a vehicle controller according to an embodiment of the present disclosure may further include a SOC change monitoring step S400. The SOC change monitoring step S400may include a step of receiving SOC values respectively measured before and after the update from a battery sensor after the update of each controller belonging to an update event is in progress and transmitting the SOC values to the OTA management server. As such, it is possible to determine whether the SOC change rate pattern transmitted in the SOC optimal pattern extracting step is appropriate, by transmitting the SOC values respectively measured before and after update of each controller in the SOC change monitoring step S400to the OTA management server. Besides, it is natural that the OTA management server is capable of accumulating and storing received SOC values before and after an update as data for correcting and supplementing the SOC change amount pattern in a database. Moreover, the update control method of a vehicle controller according to an embodiment of the present disclosure may further include an update continuation determining step S500. The update continuation determining step S500may include a step of determining whether to continue the update, based on the SOC change rate pattern that is re-extracted based on the actual SOC change rate derived from the SOC value obtained in the SOC change monitoring step after it is determined whether there is a controller to be updated. The update continuation determining step S500may include an SOC change rate pattern re-extracting procedure S510that re-selects a group indicating the SOC change rate most similar to an actual SOC change rate calculated based on SOC values respectively measured before and after the update of a controller in the corresponding event group with respect to the corresponding controller. In step S510, the SOC change rate pattern matched to the re-selected group is presented as an optimal pattern for re-calculating an expected value of the remaining SOC value. 
At this time, in the SOC change rate pattern re-extracting procedure S510, it is possible to compare SOC change rates for the same controller with each other and to determine whether the SOC change rates are similar to each other. Moreover, an expected value of the remaining SOC value re-calculated afterward needs to become a value at a point in time when all update events to be performed are completed. The SOC change rate pattern needs to be re-extracted within an event group with the highest similarity such as the number, types, and order of controllers included in the update event. Accordingly, it is natural that the re-extracted SOC change rate pattern is the same as an existing SOC change rate pattern. Moreover, the update continuation determining step S500may include a remaining SOC value re-calculating procedure S520and an SOC value re-comparing procedure S530. The remaining SOC value re-calculating procedure S520may include a step of applying a SOC value after the update of a controller to the re-extracted SOC change rate pattern to re-calculate an expected value of the remaining SOC value when the updates of the remaining controllers are completed. The SOC value re-comparing procedure S530may include a step of comparing the re-calculated remaining SOC value with a reference SOC value to determine whether to continue updating the remaining controllers. To the end, the remaining SOC value re-calculating procedure S520may include a step of matching the latest SOC value obtained in the SOC change monitoring step on the re-extracted SOC change rate pattern received from the OTA management server in the CCU and recalculating the remaining SOC value. The remaining SOC value is an expected value when decreasing with a slope on the SOC change rate pattern until updates of the remaining controllers are completed. Moreover, in the SOC value re-comparing procedure S530, the re-calculated remaining SOC value may be compared with a reference SOC value. When the re-calculated remaining SOC value is still greater than the reference SOC value, an update of the next controller is in progress by continuing the update. Otherwise, the progress of the update may be stopped. Accordingly, it is possible to determine whether to continue the update event. As such, the present disclosure may extract a SOC change rate pattern estimated to be most suitable for the corresponding update event and vehicle after the update event occurs. The present disclosure may also provide the extracted SOC change rate pattern as an optimal pattern for calculating an expected value of the remaining SOC value. After the updates of some controllers belonging to the corresponding update event is completed, the present disclosure may re-extract a SOC change rate pattern indicating a change rate most similar to an actual SOC change rate depending on SOC values respectively measured before and after the update. Whether to continue updating the remaining controllers in the future may be determined based on the remaining SOC value predicted by using the re-extracted SOC change rate pattern. The present disclosure may improve the accuracy of calculating an expected value for a remaining SOC value calculated while updates of controllers belonging to an update event is in progress. Accordingly, the success rate of the update may be improved without setting too much margin upon calculating the remaining SOC value. 
In addition, an appropriate remaining SOC value may be predicted within a range that is not excessive and that can actually be consumed during the update, so the update performance rate may be increased. Hereinabove, although the present disclosure has been described with reference to embodiments and the accompanying drawings, the present disclosure is not limited thereto. The present disclosure may be variously modified and altered by those having ordinary skill in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed in the following claims. Therefore, the embodiments of the present disclosure are not intended to limit the technical spirit of the present disclosure but are provided only for illustrative purposes. The scope of protection of the present disclosure should be construed by the attached claims, and all equivalents thereof should be construed as being included within the scope of the present disclosure. The present disclosure may improve the success rate of a controller update because an update can be prevented from being interrupted by an unexpected sudden decrease in SOC, such as one caused by battery aging, while a controller is being updated through an OTA service, by accurately calculating the expected value of the remaining SOC value using the SOC change rate pattern extracted from the OTA management server. Moreover, because the expected value of the remaining SOC value can be calculated accurately, there is no need to set an unnecessarily large margin. Accordingly, the present disclosure may improve the performance rate of the update because the update can be performed even when the battery condition is somewhat degraded (for example, when the battery is aging or the remaining SOC is low). Besides, a variety of effects directly or indirectly understood through the specification may be provided.
39,353
11861349
DETAILED DESCRIPTION OF DRAWINGS The following description in combination with the Figures is provided to assist in understanding the teachings disclosed herein. The following discussion will focus on specific implementations and embodiments of the teachings. This focus is provided to assist in describing the teachings and should not be interpreted as a limitation on the scope or applicability of the teachings. However, other teachings can certainly be used in this application. The teachings can also be used in other applications and with several different types of architectures. For purposes of this disclosure, an information handling system (IHS) may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer (e.g., desktop or laptop), tablet computer, a two-in-one laptop/tablet computer, mobile device (e.g., personal digital assistant (PDA), smart phone, tablet computer, or smart watch), server (e.g., blade server or rack server), a network storage device, or any other suitable device, and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices, as well as various input and output (I/O) devices, such as a keyboard, a mouse, a touchscreen, and/or a video display. The information handling system may also include one or more virtual or physical buses operable to transmit communications between the various hardware and/or software components. FIG. 1 illustrates an example information handling system 100. Information handling system 100 may include a processor 102, a memory 104, a chipset 106, one or more PCIe buses 108, a universal serial bus (USB) controller 110, a USB bus 112, a keyboard device controller 114, a mouse device controller 116, a SATA bus controller 120, a SATA bus 122, a hard drive device controller 124, a compact disk read only memory (CD ROM) device controller 126, a storage 128, a graphics device controller 130, a network interface controller (NIC) 140, a wireless local area network (WLAN) or wireless wide area network (WWAN) controller 150, a serial peripheral interface (SPI) bus 160, a NVRAM 170 for storing BIOS 172, and a baseboard management controller (BMC) 180. In one example embodiment, chipset 106 may be directly connected to an individual end point via a PCIe root port within the chipset and a point-to-point topology as shown in FIG. 1. BMC 180 may be referred to as a service processor or embedded controller (EC). Capabilities and functions provided by BMC 180 may vary considerably based on the type of information handling system. For example, the term baseboard management system may be used to describe an embedded processor included at a server, while an embedded controller may be found in a consumer-level device.
As disclosed herein, BMC 180 represents a processing device different from CPU 102 and provides various management functions for information handling system 100. For example, an embedded controller may be responsible for power management, cooling management, and the like. An embedded controller included at a data storage system may be referred to as a storage enclosure processor or a chassis processor. System 100 may include additional processors that are configured to provide localized or specific control functions, such as a battery management controller. Bus 160 can include one or more busses, including a SPI bus, an I2C bus, a system management bus (SMBUS), a power management bus (PMBUS), and the like. BMC 180 can be configured to provide out-of-band access to devices at information handling system 100. As used herein, out-of-band access refers to operations performed prior to execution of BIOS 172 by processor 102 to initialize operation of system 100. BIOS 172 may include instructions executable by CPU 102 to initialize and test the hardware components of system 100, and to load a boot loader or an operating system (OS) from a mass storage device. BIOS 172 additionally may provide an abstraction layer for the hardware, such as a consistent way for application programs and operating systems to interact with the keyboard, display, and other input/output devices. When power is first applied to information handling system 100, the system may begin a sequence of initialization procedures, such as a BIOS boot procedure. During the initialization sequence, also referred to as a boot sequence, components of system 100 may be configured and enabled for operation, and device drivers may be installed. Device drivers may provide an interface through which other components of the system 100 can communicate with a corresponding device. BIOS, as used herein, may also refer to a unified extensible firmware interface (UEFI). In some embodiments, one or more BIOS firmware modules to be loaded and executed by the BIOS 172 during booting of the information handling system may be stored in a memory 170 of the BIOS 172. One or more BIOS firmware modules to be loaded and executed by the BIOS 172 during booting of the information handling system 100 may also be stored in system storage 128, such as in a hard drive of the information handling system or in a solid-state drive of the information handling system. In some embodiments, such BIOS firmware modules may be stored in hard drive 124. For example, available space in the BIOS memory 170, such as in a serial peripheral interface flash memory, may be limited. To allow for a more extensive array of BIOS firmware modules to be loaded and executed by the BIOS 172, BIOS firmware modules, such as BIOS feature sets, BIOS recovery files, drivers or driver components, telemetry data, and other BIOS firmware modules, along with a host operating system and recovery operating system boot-sensitive files, may be stored in an extended system partition of a hard drive or solid-state drive of the information handling system accessible to the BIOS 172. File paths to the information stored in an extended system partition on a hard drive or solid-state drive of the information handling system may be hard coded into the BIOS 172 of the information handling system 100. BIOS firmware modules may, for example, include BIOS drivers and other BIOS firmware components.
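As an informal illustration of keeping BIOS firmware modules in an extended system partition reachable through hard coded file paths, the following sketch models the path table as a small dictionary. The GUIDs, paths, and loader function are hypothetical examples and do not represent an actual BIOS interface.

# Illustrative sketch only: hypothetical hard-coded paths from module GUIDs to
# locations within an extended system partition (ESP) of a drive.
HARD_CODED_MODULE_PATHS = {
    "11111111-2222-3333-4444-555555555555": "/esp/bios/feature_sets/advanced_features.ffs",
    "66666666-7777-8888-9999-aaaaaaaaaaaa": "/esp/bios/recovery/recovery_image.ffs",
}

def load_firmware_module(guid: str) -> bytes:
    """Load a BIOS firmware module from the extended system partition using a
    hard-coded path; raises if no path is recorded for the module."""
    path = HARD_CODED_MODULE_PATHS.get(guid)
    if path is None:
        raise KeyError(f"no hard-coded path for module {guid}")
    with open(path, "rb") as module_file:
        return module_file.read()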
In some embodiments, BIOS firmware modules may include third party firmware modules, such as third-party drivers, that may be hosted and run by a BIOS, such as to collect telemetry from one or more devices, such as components of the information handling system, during booting of the information handling system. Information handling system100may include additional components and additional busses, not shown for clarity. For example, system100may include multiple processor cores, audio devices, and the like. While a particular arrangement of bus technologies and interconnections is illustrated for the purpose of example, one of skill will appreciate that the techniques disclosed herein are applicable to other system architectures. System100may include multiple CPUs and redundant bus controllers. One or more components may be integrated together. For example, portions of chipset106can be integrated within CPU102. Additional components of information handling system100may include one or more storage devices that may store machine-executable code, one or more communications ports for communicating with external devices, and various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. Firmware of an information handling system may be stored in a flash memory, such as a SPI flash memory of the information handling system. For example, NVRAM170ofFIG.1may be a flash memory and may store a firmware image comprising multiple firmware volumes of the information handling system. An example flash memory200of an information handling system is shown inFIG.2. The flash memory200may, for example, be a SPI flash memory, such as an NVRAM memory, including one or more firmware volumes. The firmware volumes stored on the flash memory may include a system BIOS firmware volume202, a management engine (ME) firmware volume204, an embedded controller firmware volume206, an integrated sensor hub (ISH) firmware volume208, a core features firmware volume210, a graphics firmware volume212, and other firmware volumes. The firmware volumes202-212stored in the flash memory200may be further divided into modules to allow for modular updates to be applied to each firmware volume. For example, updating a firmware volume, as opposed to one or more of the firmware volume's constituent firmware modules, may be a time-consuming process, requiring overwriting the entire firmware volume, which may lead to substantial downtime of an information handling system. An example layout300of a firmware memory of an information handling system is shown inFIG.3. As shown inFIG.3, a firmware memory302may include multiple firmware volumes in a firmware image304stored on the firmware memory302. In some embodiments, only a single firmware volume may be stored in a firmware memory302, while in other embodiments up to four, or more, firmware volumes may be stored in a firmware image304of a firmware memory302. Each firmware volume306may include one or more firmware files. In some embodiments, for example, each firmware volume306may include a firmware volume header, a firmware file system, and one, two, or more firmware files. Each firmware file308may include one or more firmware file sections. For example, a firmware file308may include one, two, three, or more firmware file sections. A firmware file308may also include a firmware file header, and a globally unique identifier (GUID) storage area. 
The GUIDs stored in the GUID storage may be GUIDs of the firmware volume in which the firmware file resides, GUIDs of the firmware file308, or GUIDs of sections of the firmware file. Firmware file sections may, for example, be firmware modules, individually updated by a modular firmware update. Each firmware file section310,320,322may include a firmware file section header and firmware file section data. In some embodiments, firmware file sections may include links to firmware file section data stored at other locations in the memory302or on another memory. Each section of firmware file data may begin at an offset, such as offset320, indicating a location at which the firmware file section data begins in the memory302, such as an address within the memory302. For example, the offset320for a first firmware file section data312may be 0xFF050000. Each firmware file may also be associated with a firmware file section size, such as firmware file section size322, indicating an amount of memory allocated to the firmware file section. For example, the first firmware file section data312may have a size322of 0x00000000. During a firmware build where space is allocated to firmware modules and the firmware is stored on the memory302, more memory than is required for data of a firmware file section may be allocated to the firmware file section. The firmware file size of each firmware file section may thus include unused memory space that is allocated to the firmware file section. In some embodiments, such allocation may be performed based on a determination that an update to the firmware file section data may require more space than data of a current firmware file section requires. Each firmware file section data set312may include one or more lines of data comprised of multiple data words. Each data word may, for example, include four hexadecimal characters. A second firmware file section data set314may have a size and offset that differs from the first firmware file section data set312. Likewise, a third firmware file section data set316may reside within a third firmware file section320and may have a size and offset that differs from the first firmware file section data set312and the second firmware file section data set314. Multiple firmware file section data sets may be included in a firmware file section, up to an Nth firmware file section data set318within an Nth firmware file section322. Offsets, such as offset320, and sizes, such as size322, of firmware file sections, such as firmware modules, may be stored in an offset list of the firmware volume containing the firmware modules when the firmware volume is built and stored on the memory302or after mapping the firmware volume at a later time. An offset list may, for example, include a list of GUIDs or pointers to GUIDs of firmware modules and/or volumes stored on a memory, offsets at which firmware modules and/or volumes are stored on the memory, sizes of firmware modules or volumes stored on the memory, and other information regarding a firmware image stored on a memory. Modules of a firmware of an information handling system may be updated individually to reduce a downtime required for application of an update. An example layout400of data during multiple stages of a firmware update is shown inFIGS.4A-B. An original flash map402of a firmware image may include multiple firmware modules, with the beginning of each module indicated by a flash offset, such as 0xFF050000, 0xFF110000, etc. 
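The layout described above can be modeled informally as nested data structures. The following Python sketch uses assumed field names to show how firmware volumes, files, and file sections, together with their offsets and allocated sizes, could be flattened into an offset list; it is an illustration, not the on-flash format.

# Minimal sketch of the layout described above (FIG. 3); the field names and
# the offset-list shape are illustrative assumptions, not the patent's format.
from dataclasses import dataclass, field

@dataclass
class FirmwareFileSection:
    guid: str
    offset: int          # address in flash where the section data begins, e.g. 0xFF050000
    allocated_size: int  # space allocated to the section, including unused slack
    data: bytes = b""

@dataclass
class FirmwareFile:
    guid: str
    sections: list = field(default_factory=list)

@dataclass
class FirmwareVolume:
    guid: str
    files: list = field(default_factory=list)

def build_offset_list(volume: FirmwareVolume) -> list[dict]:
    """Flatten a firmware volume into an offset list mapping each section
    (module) GUID to its offset and allocated size."""
    return [
        {"guid": section.guid, "offset": section.offset, "size": section.allocated_size}
        for fw_file in volume.files
        for section in fw_file.sections
    ]

if __name__ == "__main__":
    section = FirmwareFileSection(guid="aaaa-0001", offset=0xFF050000,
                                  allocated_size=0x00010000, data=b"\x88\x43\x88\x2A")
    volume = FirmwareVolume(guid="vol-0001",
                            files=[FirmwareFile(guid="file-0001", sections=[section])])
    for entry in build_offset_list(volume):
        print(entry["guid"], hex(entry["offset"]), hex(entry["size"]))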
The flash modules may, for example, be firmware files or firmware file sections. Firmware file sections may be housed within firmware files, and the firmware files may be housed within the firmware volume. The original flash map402may be a flash map of a firmware volume stored in a flash firmware memory of the information handling system when an update to one or more modules of the firmware volume is received. A received update package may include updates to multiple firmware modules included in the original flash map402. The original flash map402may, for example, be an Intel Silicon Package portion of a system BIOS firmware volume. An example map404of an update to multiple firmware modules may include multiple changes to multiple firmware modules of the firmware volume shown in flash map402. The update may, for example, include an Intel infrastructure processing unit (IPU) BIOS firmware update drop including one or more security fixes. For example, an update package including updates to one or more firmware modules may be scanned to generate the updated firmware module map404by comparing the received update package with the original flash map and determining one or more firmware modules that are updated. In some embodiments, a firmware module update package may be generated including only firmware modules of the update package, such as only firmware file section data sets, that are changed in the firmware update. The original flash map402may, for example, be generated upon receipt of a firmware update, to allow the information handling system to determine a current content of a firmware volume stored on the flash firmware memory. The comparison may be used to generate a flash offset table406indicating locations in the memory of the information handling system at which current firmware modules that are updated by the firmware update package are stored. The flash offset table may, for example, be a modular firmware offset list, as described herein. The flash offset table406may include identifiers of offsets, such as 0xFF050000, 0xFF110000, 0xFF160000, 0xFF870000, 0xFFC30000, and 0xFFEF0000 at which firmware modules to be updated begin within the memory. The flash offset table may also include sizes of portions of the memory that are allocated to the firmware modules to be updated, such as 0x00000000, 0x00010000, 0x001E0000, 0x00040000, and 0x000E0000. The sizes may, for example, include an area of the memory in which the current firmware module is stored and unused space within the memory allocated to the firmware module, such as empty space following the firmware module and preceding a subsequent firmware module in the memory. The flash offset table may also include further offsets within each firmware module, such as offsets indicating a plurality of words containing one or more words that are changed by the firmware update, such as 0xFF050020, 0xFF050040, and 0xFF050050 within the firmware module indicated by 0xFF050000. The flash offset table406may be used in applying the updates to each of the firmware modules to create an updated firmware module. The flash layout following the modular update408may include multiple updated firmware modules updated using the updated firmware module map404and the flash offset table406. 
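An informal sketch of how a flash offset table could be derived by comparing the original flash map with a received update package is shown below. Modeling both maps as dictionaries keyed by module offset is an assumption made only for the example.

# Hypothetical sketch of generating a flash offset table (as in FIG. 4A) by
# comparing the original flash map with a received update package.

def build_flash_offset_table(original_map: dict[int, bytes],
                             update_map: dict[int, bytes],
                             allocated_sizes: dict[int, int]) -> list[dict]:
    """Return one entry per firmware module whose contents are changed by the
    update, recording the module's flash offset and allocated size."""
    table = []
    for offset, current in original_map.items():
        updated = update_map.get(offset)
        if updated is not None and updated != current:
            table.append({"offset": offset, "size": allocated_sizes[offset]})
    return table

if __name__ == "__main__":
    original = {0xFF050000: b"\x98\xEE\x12\x22", 0xFF110000: b"\x10\x20\x30\x40"}
    update   = {0xFF050000: b"\x88\x43\x88\x2A", 0xFF110000: b"\x10\x20\x30\x40"}
    sizes    = {0xFF050000: 0x00010000, 0xFF110000: 0x00010000}
    print([hex(e["offset"]) for e in build_flash_offset_table(original, update, sizes)])
    # ['0xff050000'] -- only the changed module is scheduled for update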
For example, the flash offset table406may be used by the information handling system to determine a location of a firmware module to be updated and the updated firmware module map404may be used to determine a content of the update to be overwritten on the current firmware module. In some embodiments, an entire firmware module may be overwritten, while in other embodiments, firmware modules may be updated at a word level. For example, if the flash offset table indicates that a firmware module beginning at flash offset 0xFF050000 is to be updated, and that lines 0xFF050020, 0xFF050040, and 0xFF050050 are to be updated, the information handling system may overwrite one or more words in the existing flash module that differ from one or more corresponding words in the flash update package. For example, bolded words 8843, 882A, and 1090 in line 0xFF050020 may be overwritten, and may thus differ from the words 98EE, 1222, and 3445 present in the corresponding locations in the original flash map402. Updating the firmware modules may also include determining whether a size of space allocated to each firmware module is sufficient for storage of the updated firmware module. If sufficient space is allocated for storage of the updated firmware module, the firmware module may be updated at its location in the flash memory. If insufficient space is allocated for storage of the updated firmware module, at least a portion of the updated firmware module may be stored in a different area of the memory, or another memory such as an extended system partition of an SSD or hard drive of the information handling system. Furthermore, a pointer to the different location at which the portion of the memory is stored in the memory or in a different memory may be stored in the memory. Thus, in some embodiments, the information handling system may update only firmware modules that are changed by a received firmware update, reducing an amount of downtime required over an update in which a full firmware volume and/or file is overwritten. Updating only words in firmware modules that are changed by a firmware update may reduce an amount of time required for a firmware update. Furthermore, updating only firmware modules and/or words that are changed by a firmware update may, in some cases, allow for application of updates to a firmware without rebooting an information handling system, where overwriting an entire firmware volume may require a reboot of an information handling system. When updating firmware modules, updated signatures for the firmware modules, volumes, files, and, in some embodiments, an entire firmware memory, may be used to authenticate the updated firmware. An example block diagram of a system500for updating firmware signatures with a modular firmware update is shown inFIG.5. A firmware update module502, such as a firmware update module of the information handling system, may include a runtime modular flash update module504for applying one or more updates to one or more firmware modules. The firmware update module502may also include a signature array506, for application of updated signatures to the updated firmware. Signatures of the signature array506may, for example, be requested and received by the information handling system from a remote signature server. For example, a firmware update package, such as a security vulnerability fix firmware update package, may be received by the firmware update module502of the information handling system. 
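Word-level updating as described above can be illustrated with the following sketch, in which the flash device is modeled as a byte array and a word is the two bytes corresponding to four hexadecimal characters. A real implementation would write through a SPI flash controller; this fragment only demonstrates the selective overwrite.

# Word-level update sketch: only words that differ between the current module
# and the update are overwritten. The example words mirror those given above.

WORD_SIZE = 2  # a four-character hexadecimal word occupies two bytes

def changed_word_offsets(current: bytes, updated: bytes) -> list[int]:
    """Return byte offsets (relative to the module start) of words that differ."""
    offsets = []
    for i in range(0, min(len(current), len(updated)), WORD_SIZE):
        if current[i:i + WORD_SIZE] != updated[i:i + WORD_SIZE]:
            offsets.append(i)
    return offsets

def apply_word_level_update(flash: bytearray, module_offset: int,
                            current: bytes, updated: bytes) -> int:
    """Overwrite only the changed words of the module in flash and return the
    number of words written."""
    changed = changed_word_offsets(current, updated)
    for rel in changed:
        flash[module_offset + rel:module_offset + rel + WORD_SIZE] = updated[rel:rel + WORD_SIZE]
    return len(changed)

if __name__ == "__main__":
    flash = bytearray(b"\x98\xEE\x12\x22\x34\x45\x00\x00")
    current = bytes(flash[0:6])          # words 98EE, 1222, 3445
    updated = b"\x88\x43\x88\x2A\x10\x90"  # words 8843, 882A, 1090
    print(apply_word_level_update(flash, 0, current, updated), "words rewritten")
    print(flash.hex())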
The firmware update module502may assemble a modular security firmware update package520, which may include information regarding the firmware modules to be updated. For example, the modular security firmware update package520may include an index of one or more firmware volumes to be updated based on the received update package including a list of firmware modules to be updated, such as a list of firmware file and/or file section names, parent firmware volume names of the firmware modules to be updated, and globally unique identifiers (GUIDs) for the firmware modules, files, file sections, and/or volumes. In some embodiments, the modular security update package may include an update to a firmware volume header, a firmware volume mapped firmware image list of GUIDs for firmware modules to be updated and, in some embodiments, offsets within the memory at which firmware modules to be updated are stored, and one or more firmware module updates, such as an update to second and seventh firmware files of a first firmware volume, updates to a first firmware file of a third firmware volume, and other updates to other firmware files. The firmware update module502may request a signature array506from a remote signature server. In some embodiments, requesting the signature array may include providing the signature server with information regarding the update, such as identification of firmware modules, files, file sections, and/or volumes that are changed in the generated modular security update package. The firmware update module502may incorporate received signatures into the modular security update package and may apply the updated signatures when updating one or more firmware modules of the information handling system. The signed modular security firmware update package520including the signature array506may, for example, comprise a signed BIOS image for the firmware modules to be updated. In some embodiments, firmware modules may be referred to as firmware fragments. In some embodiments, the runtime modular flash update module504may obtain vendor provided security patches and scan the firmware layout of the current stored firmware image to dynamically generate modular fragments using a received firmware update package to form the modular security update package. The firmware update module502may apply the signature array506to update one or more firmware signatures when updating firmware modules of the information handling system. For example, a plurality of firmware signatures, such as a plurality of firmware signatures for a plurality of firmware modules updated, may be applied to firmware stored in a memory508when updating one or more firmware modules of the firmware. 
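The assembly of a modular security firmware update package and the attachment of a signature array can be sketched as follows. The signing function below is a stand-in for the remote signature server, and the hash-based "signatures" are placeholders used only to keep the example self-contained; they are assumptions, not the signing protocol of the disclosed system.

# Sketch of assembling a modular security firmware update package and
# attaching a signature array; the request format and hashing are assumptions.
import hashlib

def assemble_modular_update_package(changed_modules: dict[str, bytes]) -> dict:
    """Index the firmware modules (by GUID) that the received update changes."""
    return {"module_guids": sorted(changed_modules), "modules": changed_modules}

def request_signature_array(package: dict) -> dict[str, str]:
    """Stand-in for a remote signing server: one signature per updated module.
    A real implementation would use an authenticated signing service, not a hash."""
    return {guid: hashlib.sha256(data).hexdigest()
            for guid, data in package["modules"].items()}

def sign_package(package: dict) -> dict:
    """Incorporate the received signature array into the update package."""
    package["signature_array"] = request_signature_array(package)
    return package

if __name__ == "__main__":
    changed = {"file-0001/section-0002": b"\x88\x43\x88\x2A",
               "file-0007/section-0001": b"\x10\x90\x00\x01"}
    signed = sign_package(assemble_modular_update_package(changed))
    print(signed["module_guids"], len(signed["signature_array"]))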
As an example, the memory508may be a SPI flash firmware memory, and may include a SPI flash header, a firmware flash header, which may include the firmware update module, a firmware signature510, a header for a first firmware volume, a header for a first firmware file of the first firmware volume, data for the first firmware file of the first firmware volume, a header for a second firmware file of the first firmware volume, data for the second firmware file of the first firmware volume, headers and data for other firmware files of the first firmware volume, a header for a second firmware volume, multiple firmware files of the second firmware volume, a header of a final firmware volume of the information handling system, firmware files of the final firmware volume of the information handling system, a flash modular update firmware volume, other firmware volumes and firmware volume headers, and a modular firmware offset list for the flash memory firmware image. The memory508may, for example, include a single BIOS firmware image, comprising multiple firmware volumes. The modular firmware offset list may, for example, include GUIDs of each firmware module, such as each firmware file and/or each firmware file section, GUIDs of firmware volumes, offsets at which firmware volumes, files, and file sections are stored in the memory508and sizes of firmware modules, such as sizes of firmware files or file sections. For example, the modular firmware offset list of memory508may include reflex pointers518to offsets at which firmware files, file headers, file sections, file section headers, volumes, volume headers, and other firmware elements begin. In some embodiments, the modular firmware offset list may include a table including indexes to GUIDs of a firmware volume and extensible firmware interfaces of the firmware volume. In some embodiments, the modular firmware offset list may be updated following application of an update to a firmware module, to update one or more offsets of the modular firmware offset list and/or to include a pointer to a portion of a firmware module stored on a different memory, such as in an extended system partition on a hard drive or solid-state drive of the information handling system. A firmware signature510for the entire firmware image stored on the memory508may be updated using the signature array506. Furthermore, a signature512included in a header of a first updated firmware file, such as a first firmware file including a first firmware module that is updated by the modular security firmware update package520, may be updated. A signature514of a second firmware volume including one or more updated modules may be updated. Furthermore, a signature516of a final firmware included in a header of the final firmware volume may be updated. Thus, signatures for a firmware memory, firmware volumes, and other firmware components that are updated during a modular firmware update may be updated using a signature array506received from a remote signing server. In some embodiments, updates to firmware modules, such as a generated modular firmware update package, or a portion of a generated modular firmware update package, may be stored in a memory of the information handling system, to allow the information handling system to individually roll back updates to one or more firmware modules. 
For example, if an update to a firmware module is determined to be flawed or potentially cause errors, the update to the firmware module may be rolled back, while other updates to other firmware modules may remain in place. An example system 600 for dynamically linking and extending modular firmware storage to support modular recovery and rollback is shown in FIG. 6. For example, firmware modules, or fragments, such as firmware modules in prior states before updating or data to allow rolling back of a firmware module to a prior state, may be stored in a firmware volume storage. In some embodiments, the data stored in the firmware volume storage 622 may include delta data indicating a difference between two firmware module versions, such as a current version of a firmware module and an old version of the firmware module, to allow rolling back of updates to firmware modules. The firmware volume storage 622 may, in some embodiments, include space on the memory 620, such as SPI flash firmware memory, and/or on one or more other memories of the information handling system, such as a solid-state drive or hard drive of the information handling system. The firmware volume storage 622 may include signature data, such as a signature for a first firmware volume 608, a signature for a second firmware volume 610, and a signature for a third firmware volume 612, from a previous firmware version, in addition to previous versions of firmware modules associated with the signatures, such as a previous version of a first security firmware module of a first firmware volume, a previous version of a second security firmware module of a second firmware volume, a previous version of a third security firmware module of a third firmware volume, and previous versions of other firmware modules. The firmware volume storage 622 may also include version data for stored firmware deltas and indexes for firmware modules. In some embodiments, the firmware volume storage 622 may be dynamically tagged at a pre-EFI initialization (PEI) phase of booting an information handling system, and a hand-off block (HOB) data channel may be passed from the PEI boot phase to a driver execution environment (DXE) boot phase to support automatic recovery based on a firmware offset list along with a version for a rollback of a firmware module to a previous stable module. A firmware volume offset table 602 may be used to store information regarding previous versions of firmware modules and may be used to individually roll back updates to individual firmware modules. For example, the firmware volume offset table 602 may include firmware volume data 604, such as data regarding where previous versions of firmware modules are stored and data regarding which firmware modules the previous versions of firmware modules correspond to. For example, information indicating offsets at which a corresponding current firmware module is stored in the memory 620 may be stored in the firmware volume data 604, along with information indicating where previous versions of the firmware module are stored by the information handling system. The firmware volume offset table 602 may also include signature node data 606, such as data for updating signatures of a firmware volume and/or firmware modules, when rolling back one or more updates to one or more firmware modules.
The signature node data606may, for example, include data regarding signatures for previous versions of firmware modules and/or signatures for firmware volumes including the previous versions of the firmware modules. When an information handling system determines to roll back an update to the firmware module, the information handling system may use the firmware volume offset table602to determine a location of the firmware module to be rolled back stored in the memory620and a location of a previous version of the firmware module stored in the firmware volume storage622. The information handling system may overwrite the firmware module, such as a second firmware file of a first firmware volume of the memory620with a previous version of the firmware module, and may update a signature, such as a signature614in a firmware file header for the second firmware file based on the rollback. Firmware volume signatures616,618may also be updated when firmware modules within the firmware volumes are rolled back. Thus, instead of rolling back an entire firmware, or entire firmware volume, if a fault is discovered in a firmware module following an update, the information handling system may roll back the update to the individual firmware module, while updating signatures of the firmware to reflect the rollback. Application of updates to individual modules of a firmware of an information handling system may reduce an amount of system downtime required to update the firmware of the information handling system. An example method700for updating firmware modules is shown inFIG.7. The method700may begin at step702with receipt of a firmware update. For example, an information handling system may receive a firmware update, such as a security or other firmware update, from a remote information handling system. The firmware update may, for example, be a firmware update package including an update to one or more firmware volumes of the information handling system. In some embodiments, the received firmware update may be a received modular firmware update package, including modular updates to one or more firmware modules of the information handling system. At step704, the information handling system may determine one or more firmware modules that are changed by the firmware update. For example, the information handling system may, in some embodiments, map a current firmware of the information handling system stored on a firmware memory of the information handling system and may compare the received firmware update with the current firmware. In some embodiments, the information handling system may use a firmware offset list to determine where each firmware module of the current firmware memory begins and/or a size of each firmware module of the current firmware memory. Firmware modules of the current firmware memory that have different contents from corresponding portions of the received firmware update may be firmware modules that are changed by the received firmware update. For example, the received firmware update may change one or more words of one or more firmware modules stored on the memory of the information handling system, and the information handling system may determine firmware modules to update by determining which firmware modules currently stored in the firmware memory of the information handling system have one or more words that differ from the corresponding contents of the received firmware update. A firmware module may, for example, be a boot device select (BDS), PEI, or DXE firmware module. 
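A minimal sketch of the modular rollback described above, before turning to the remaining steps of method 700, is given below. The firmware volume offset table and firmware volume storage are modeled as dictionaries, and a checksum stands in for the signature node data; these are illustrative assumptions, not the disclosed table formats.

# Minimal rollback sketch for the scheme of FIG. 6: previous module versions
# live in a firmware volume storage, and an offset table maps each module GUID
# to its flash offset and to its stored signature entry.

def roll_back_module(flash: bytearray,
                     offset_table: dict[str, dict],
                     volume_storage: dict[str, bytes],
                     guid: str) -> None:
    """Overwrite a single flawed module with its stored previous version and
    refresh the module's signature, leaving other updated modules in place."""
    entry = offset_table[guid]
    previous = volume_storage[guid]
    start = entry["offset"]
    flash[start:start + len(previous)] = previous
    # Signature node data: a real system would recompute a cryptographic
    # signature; a simple checksum stands in for it in this sketch.
    entry["signature"] = sum(previous) & 0xFFFF

if __name__ == "__main__":
    flash = bytearray(b"\x88\x43\x88\x2A" + b"\x00" * 4)
    offset_table = {"file-0002": {"offset": 0, "signature": None}}
    volume_storage = {"file-0002": b"\x98\xEE\x12\x22"}  # last known good version
    roll_back_module(flash, offset_table, volume_storage, "file-0002")
    print(flash.hex(), hex(offset_table["file-0002"]["signature"]))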
At step706, the information handling system may determine a location of each firmware module changed by the update. For example, the information handling system may determine a location in a firmware memory of the information handling system at which each firmware module changed by the received firmware update is stored. Such a determination may, for example, be made using a firmware offset list specifying offsets for each firmware module of the information handling system. A firmware offset may be determined for example, at which a first firmware module updated by the received firmware update begins in the firmware memory of the information handling system. In some embodiments, sizes of each firmware module to be updated may also be determined. For example, the firmware offset list may include a size of a portion of a firmware memory of an information handling system allocated to each firmware module. At step708, the information handling system may update the firmware modules changed by the received update. For example, the information handling system may use the determined location of each firmware module to be updated to overwrite at least a portion of each firmware module having contents that differ from the corresponding contents of the received firmware update. In some embodiments, the information handling system may perform the update at a word level, such as updating only four character hexadecimal words that are different in the firmware update from the current contents of the firmware module. Thus, the firmware update may be applied to individual firmware modules of the information handling system. Updating firmware modules, instead of overwriting an entire firmware volume or image including one or more firmware modules to be updated, may reduce an amount of time required to implement the firmware update. For example, some firmware modules may not require a system reboot for application of a firmware update, but overwriting an entire firmware volume including the firmware modules may require such a reboot. Thus, modular firmware updates may allow an information handling system to update one or more modules of a firmware without overwriting an entire firmware image or an entire firmware volume on a SPI flash firmware memory of the information handling system. In some embodiments, an information handling system may also, at step710, roll back a firmware module update. For example, deltas for firmware module versions may be stored in a firmware volume storage of the information handling system. If an updated firmware module is determined to be flawed, the individual firmware module may be rolled back to a last known, or other, good version of the firmware module. Data for rolling back firmware modules, such as locations of storage of data for rolling back firmware modules, may be stored in a firmware module offset list of the information handling system. At firmware build time, firmware modules may be intelligently allocated additional storage space to accommodate future modular firmware updates. In applying a modular firmware update, an information handling system may determine whether sufficient space is available in a portion of a firmware memory allocated to the firmware module for storage of the updated firmware module. An example method800for updating a firmware module based on a determination of whether sufficient space is allocated in a firmware memory for storage of the updated firmware module is shown inFIG.8. 
The method800may begin at step802with determination of an offset within a first memory, such as a flash firmware memory, at which a firmware module to be updated is stored. For example, the step802may be performed as a part of step706of the method700following receipt of a firmware update. An information handling system may, for example, determine an offset within a first memory at which the firmware module is stored using an offset list stored in the first memory, or another memory, of the information handling system. At step804, the information handling system may determine whether sufficient space is allocated to the firmware module to store the firmware module once the update is applied. For example, a predetermined amount of space within the first memory may be allocated to the current firmware module. In some embodiments, the firmware module may require more space following application of the update than the firmware module did prior to the update. In some embodiments, the amount of space allocated to the firmware module within the first memory may be greater than an amount of space required to store a current version of the firmware module, prior to application of the update. For example, an amount of space within the first memory following an end of the current version of the firmware module and prior to a starting point of a header and/or data of a next firmware module may be left empty and allocated to the firmware module. An amount of space allocated to the firmware module, such as an amount of used and unused space in the first memory allocated to the firmware module may be retrieved from the offset list, in addition to the location of the firmware module within the first memory. In determining whether sufficient space is allocated to the firmware module within the first memory for storage of the updated firmware module, the information handling system may compare an amount of space within the first memory required for storage of the firmware module after the firmware module is updated with an amount of space allocated to the firmware module. If sufficient space is available within the first memory for storage of the firmware module following application of the update, the information handling system may, at step806, update and store the updated firmware module within the first memory, such as within the space allocated to the firmware module within the first memory. For example, the information handling system, at step806, may update the firmware module as described with respect to step708of method700. Thus, if sufficient space is allocated in the first memory to store the updated firmware module, the information handling system may update and store the updated firmware module in the first memory without overwriting contents of other firmware modules with the contents of the firmware module, even if the size of the firmware module following the update exceeds the size of the firmware module prior to the update. If sufficient space is not allocated to the firmware module in the memory for storage of the firmware module following the update, the information handling system may, at step808, update and store at least a portion of the updated firmware module in a different memory, such as an extended system partition on a hard drive or solid-state drive of the information handling system. 
For example, if the information handling system determines that storage of the updated firmware module within the first memory would require overwriting at least a portion of another firmware module in the first memory, such as a firmware module immediately following the firmware module being updated, the information handling system may store at least a portion of the updated firmware module in another memory. In some embodiments, the information handling system may store the entire updated firmware module in a different memory and store a pointer to the updated firmware module in the first memory. In some embodiments, the information handling system may update a portion of the firmware module stored in the first memory and may store another portion of the firmware module in a different memory. The information handling system may store a pointer to the portion of the firmware module stored in the different memory in the first memory. The pointer may, for example, be stored in a portion of the first memory allocated to the firmware module that was updated. The method 800 may, for example, be performed along with or as part of the method 700. Thus, if insufficient space is allocated to a firmware module to be updated in a first memory, an information handling system may store a portion or all of the updated firmware module in a different memory. Such flexible storage can allow the information handling system to avoid overwriting other firmware modules in the first memory with contents of the firmware module being updated. An information handling system may update one or more signatures of a firmware after updating one or more modules of the firmware. An example method 900, shown in FIG. 9, may begin, at step 902, with requesting and receiving updated signatures from a remote signature server. For example, after determining one or more firmware modules changed by the firmware update, at step 704 of FIG. 7, the information handling system may transmit a request for updated signatures to a remote signing server. The requested signatures may, for example, include signatures for an entire firmware image, signatures for one or more firmware volumes including firmware modules that are to be updated, signatures for one or more firmware files including firmware modules that are to be updated, and other firmware signatures. In some embodiments, the information handling system may transmit information identifying one or more firmware modules to be updated, such as GUIDs of the firmware modules to be updated and/or contents of the updated firmware modules, for use by the remote signing server in generating one or more signatures. The information handling system may receive the signatures from the remote signing server. In some embodiments, the information handling system may incorporate the received signatures into a modular firmware update package. For example, a received update package may include an entire updated firmware volume, one or more updated firmware modules, or other firmware update contents. The information handling system may, after determining the firmware modules to be updated, generate a modular firmware update package including one or more updates to one or more firmware modules, at step 904. For example, portions of a received update package that apply to individual firmware modules may be packaged in a modular firmware update package.
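The space-sufficiency decision of method 800 can be illustrated with the following sketch, which stores the entire updated module in a second memory and keeps only a pointer in the first memory when the allocated space is insufficient. The in-memory model, pointer encoding, and return structure are assumptions made for the example.

# Illustration of steps 804-808 of method 800: decide where the updated module
# is stored based on the space allocated to it in the first (flash) memory.

def apply_update_with_overflow(flash_allocated_size: int, updated_module: bytes) -> dict:
    """Return a description of where the updated module ends up."""
    if len(updated_module) <= flash_allocated_size:
        # Step 806: sufficient space -- store the module within the first memory.
        return {"flash_data": updated_module, "second_memory": b"", "pointer": None}
    # Step 808: insufficient space -- store the module in a different memory
    # (e.g., an extended system partition) and keep a pointer in the first memory.
    return {"flash_data": b"", "second_memory": updated_module,
            "pointer": "esp://firmware_overflow/module"}

if __name__ == "__main__":
    result = apply_update_with_overflow(flash_allocated_size=4,
                                        updated_module=b"\x88\x43\x88\x2A\x10\x90")
    print(len(result["flash_data"]), len(result["second_memory"]), result["pointer"])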
In some embodiments, for example, an entire updated firmware volume, or file, received in the firmware update package may be mapped to determine portions of the firmware volume, or file, that correspond to modules of a firmware stored by the information handling system but differ from current contents of the firmware modules of the information handling system. The information handling system may generate one or more updates to one or more firmware modules based on the received firmware update package and may incorporate the updates in a modular firmware package. In some embodiments, the information handling system may transmit the entire modular firmware package to the remote signing server when requesting signatures at step902. When the signatures are received from the remote signing server, the information handling system may generate a modular firmware update package including the signatures. For example, the information handling system may incorporate the received signatures into the received modular firmware update package. In some embodiments, the information handling system, or the remote signing server, may dynamically compute a hash of a new firmware update image using a static signature to cross check and update one or more signatures of the flash firmware memory. At step906, the information handling system may apply signatures when updating the firmware. For example, such application of signatures may be performed as part of step708of the method700. Application of signatures may include updating an overall signature of a firmware memory image, updating one or more signatures of one or more firmware volumes including firmware modules that are updated, updating one or more signatures of one or more firmware files including one or more firmware modules that are updated, and updating other signatures. Thus, signatures of a firmware of an information handling system that is updated with one or more modular firmware updates may also be updated to maintain authentication and security of the firmware of the information handling system. In some embodiments, the steps of method900may be performed alongside or as part of the method700in application of modular firmware updates. Updating signatures of a firmware when updating one or more firmware modules may allow secure boot functionality following the update to one or more firmware modules. At firmware build time, an information handling system may reserve unused space for one or more firmware modules for use in future modular firmware updates. An example method1000, for reserving space for future firmware module updates is shown inFIG.10. The method1000may begin, at step1002, with reserving flash memory space for future modular firmware updates. For example, an information handling system, at firmware build time, may reserve space for one or more firmware modules on a flash firmware memory of the information handling system. In some embodiments, the information handling system may determine a size of each of a plurality of firmware modules and may reserve space within a flash memory on which the firmware modules will be stored in excess of the space required for storage of each of the firmware modules. As one example, a first firmware module may be allocated space in addition to space required for storage of the first firmware module following an end of the first firmware module and before a beginning of a next firmware module. 
The additional space allocated to the first firmware module may remain unused until a modular firmware update is applied to the firmware module that requires more space than was used by the original version of the firmware module. In some embodiments, a predictive algorithm for allocating more space than is required for storage by a current version of a firmware module to the firmware module may be used to determine an amount of memory space to allocate to the firmware module based, at least in part, on a frequency of usage of the firmware module, a frequency of updates to the firmware module, a security level of the firmware module, and other characteristics of the firmware module, along with characteristics of other firmware modules stored in the memory. When a modular firmware update is applied to the firmware module, the updated firmware module may make use of the previously unused space allocated to the firmware module, as described with respect to the method800ofFIG.8. At step1004, the information handling system may map a flash memory containing the firmware of the information handling system. For example, the information handling system may determine a location at which each firmware module is stored, such as an offset indicating a beginning of a location within the memory at which each firmware module is stored. In some embodiments, the information handling system may also determine a size of a memory space allocated to each firmware module, such as an amount of space taken by the firmware module and an amount of empty space in the memory between an end of the space currently used to store the firmware module and a beginning of space used to store a next firmware module. In some embodiments, steps1002,1004may be performed by a flash layout build module. At step1006, the information handling system may store data obtained during the mapping of the flash memory at step1004. For example, the information handling system may store in a modular firmware offset list the location of each of the firmware modules, such as the offsets within the memory at which each of the firmware modules begin, and the size of the area of the memory allocated to each firmware module. In some embodiments the modular firmware offset list may be stored within a firmware memory, such as the SPI flash firmware memory, of the information handling system. Thus, an information handling system may organize and map a firmware memory to allow for future modular firmware updates. An information handling system may determine whether modular firmware updates are supported and may update a firmware of the information handling system based on the determination. An example method1100for updating a firmware of an information handling system is shown inFIG.11. The method1100may begin, at step1102, with receiving and reading a firmware update package. For example, an information handling system may receive a firmware update package for updating one or more firmware volumes of the information handling system. In some embodiments reading the firmware update package may include determining whether a modular firmware update or a full firmware update should be performed based on the contents of the firmware update package. 
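The predictive allocation described above can be sketched as follows. The weighting of update frequency and security level in the slack calculation is an assumed formula used only to illustrate the idea; the patent does not specify a concrete allocation function.

# Sketch of reserving slack space at firmware build time (method 1000) and of
# recording the resulting layout in a modular firmware offset list.

def reserved_size(module_size: int, update_frequency: float, security_level: int) -> int:
    """Allocate the module's current size plus a slack proportional to how
    often the module is updated and how security-critical it is (assumed weights)."""
    slack_factor = 0.10 + 0.20 * update_frequency + 0.05 * security_level
    return module_size + int(module_size * slack_factor)

def build_modular_offset_list(modules: list[dict], base_offset: int = 0xFF050000) -> list[dict]:
    """Lay modules out back to back using their reserved sizes and record the
    resulting offsets and sizes."""
    offset_list, offset = [], base_offset
    for module in modules:
        size = reserved_size(module["size"], module["update_frequency"],
                             module["security_level"])
        offset_list.append({"guid": module["guid"], "offset": offset, "size": size})
        offset += size
    return offset_list

if __name__ == "__main__":
    modules = [
        {"guid": "pei-core", "size": 0x8000, "update_frequency": 0.2, "security_level": 3},
        {"guid": "dxe-net",  "size": 0x4000, "update_frequency": 0.8, "security_level": 1},
    ]
    for entry in build_modular_offset_list(modules):
        print(entry["guid"], hex(entry["offset"]), hex(entry["size"]))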
At step1104, the information handling system may scan the firmware layout of the current firmware memory and the received firmware update, and, at step1106, may generate a modular update package including one or more updates to one or more firmware modules of the information handling system based on the firmware layout of the current firmware and the received firmware update package. For example, the information handling system may scan a predictive firmware layout of the information handling system and may dynamically generate multiple modular firmware fragments for incorporation in the modular update package using the received firmware update. In generating the modular update package, the information handling system may also request and receive one or more signatures from a remote signing server for incorporation into the modular update package. The signatures may, for example, include a BIOS image signature for the updated firmware modules. At step1108, the information handling system may determine whether application of modular firmware updates is supported by the information handling system. For example, the information handling system may determine whether a current version of a firmware to be updated by the information handling system supports modular firmware updates, such as whether one or more firmware volumes to be updated are organized into firmware modules and mapped in a modular offset list, or whether a firmware update module of the information handling system supports application of modular updates to firmware modules. Determining whether application of modular firmware updates is supported may also include determining whether a remote signature server is available for provision of signatures to incorporate in a modular update package for updating one or more modules of the firmware of the information handling system. In some embodiments, steps1104and1106may be performed following step1108and prior to step1114if a determination is made that the information handling system supports application of modular firmware updates. If a determination is made, at step1108, that the information handling system does not support modular firmware updates, the information handling system may, at step1110, perform a full firmware image update. For example, the information handling system may overwrite one or more entire firmware volumes of the information handling system. An information handling system may, for example, not support modular firmware updates during application of a first update to a full BIOS image. At step1112, the information handling system may reboot following and/or during application of the firmware update to the firmware of the information handling system. If a determination is made, at step1108, that modular firmware updates are supported, the information handling system may, at step1114, extract firmware update data. For example, the information handling system may extract a list of file names, GUIDs of firmware elements, such as firmware volumes, files, file sets, file sections, and modules, sizes of firmware elements, an entire image size of a firmware update, and other data regarding the modular firmware update from the modular firmware update package and may store the information in an update package in a separate firmware volume of the information handling system. At step1116, the information handling system may generate a firmware update queue. 
For example, the information handling system may generate a list of firmware modules to be updated using the information extracted from the firmware update package at step 1114. For example, a list of firmware modules, parent firmware volumes of the firmware modules, associated GUIDs of the firmware modules and volumes, and other firmware update data may be read into a firmware update queue. At step 1118, the information handling system may read modular firmware data regarding the current firmware from a modular firmware offset list of the information handling system. For example, a runtime modular firmware update module may locate a flash modular firmware volume in a SPI flash firmware memory of the information handling system and may read a modular firmware offset list stored in the SPI flash firmware memory to determine GUIDs of firmware modules and volumes, sizes of portions of the memory allocated to the firmware modules and volumes, offsets for the firmware modules and volumes, and other information regarding firmware modules and volumes stored in the SPI flash firmware memory. The information handling system may, at step 1120, compare the list of firmware modules to be updated determined at step 1114 with the modular firmware data read at step 1118 to determine firmware modules of the information handling system to be updated. For example, the information handling system may compare GUIDs of firmware modules included in the firmware update package with GUIDs of firmware modules read from the modular firmware offset list to determine which firmware modules stored in the memory of the information handling system are updated by the received update package, offsets of the firmware modules to be updated, and sizes of memory portions allocated to each of the firmware modules to be updated. The information handling system may construct an update process queue including one or more update events using the firmware update queue generated at step 1116 and the information regarding the offsets and sizes of each of the current firmware modules to be updated. In some embodiments, an update process queue may be generated for every firmware volume including one or more firmware modules to be updated. The update process queue may, for example, include events for every firmware module to be updated. In some embodiments, all or some of steps 1104-1108 and 1114 may be performed by a security scan module. At step 1122, the information handling system may update the firmware modules indicated in the update process queue. For example, the information handling system may sequentially read offsets indicating locations of firmware modules to be updated on the SPI flash memory of the information handling system and may update only modules indicated in the update process queue, rather than overwriting entire firmware volumes. In some embodiments, steps 1116-1122 may be performed by a runtime modular flash firmware update module. In some embodiments, the information handling system may, at step 1112, reboot following application of the modular firmware updates. For example, if security fixes within a boot path are applied to one or more firmware modules, the information handling system may reboot to implement the update to the firmware. Thus, the information handling system may update one or more firmware modules individually, rather than overwriting entire firmware volumes, to reduce the system downtime required for application of a firmware update.
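Steps 1114 through 1122 can be illustrated with a short sketch that compares GUIDs from the modular update package against a modular firmware offset list to build an update process queue. The dictionary and list formats below are assumptions made for the example rather than the actual queue structures.

# Sketch of building an update process queue (method 1100): only modules that
# appear both in the update package and in the offset list become update events.

def build_update_process_queue(update_package: dict[str, bytes],
                               offset_list: list[dict]) -> list[dict]:
    """Return one update event per firmware module present in both the update
    package and the offset list, carrying its flash offset and allocated size."""
    by_guid = {entry["guid"]: entry for entry in offset_list}
    queue = []
    for guid, new_data in update_package.items():
        entry = by_guid.get(guid)
        if entry is not None:
            queue.append({"guid": guid, "offset": entry["offset"],
                          "size": entry["size"], "data": new_data})
    return queue

if __name__ == "__main__":
    package = {"dxe-net": b"\x88\x43", "unknown-module": b"\x00"}
    offsets = [{"guid": "pei-core", "offset": 0xFF050000, "size": 0x9000},
               {"guid": "dxe-net", "offset": 0xFF059000, "size": 0x5000}]
    for event in build_update_process_queue(package, offsets):
        print(event["guid"], hex(event["offset"]))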
The flow chart diagrams ofFIGS.7-11are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of aspects of the disclosed method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagram, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown. If implemented in firmware and/or software, functions described above may be stored as one or more instructions or code on a computer-readable medium. Examples include non-transitory computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise random access memory (RAM), read-only memory (ROM), electrically-erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc includes compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), floppy disks and Blu-ray discs. Generally, disks reproduce data magnetically, and discs reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media. In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims. Although the present disclosure and certain representative advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. 
As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
11861350
DETAILED DESCRIPTION As described herein, a software update for a client device can comprise multiple individual software assets that make up the various components of a system image or update image for a client device. For example, an update to an operating system of a client device can include software assets for each of the various components of the operating system. The various software assets associated with a given update can be listed in an update catalog that enumerates the individual assets and asset versions of the software update. Embodiments described herein provide a system and method to facilitate secure delivery of the various assets of a software update and to limit the availability of those assets only to trusted devices that are authorized to execute assets associated with the software update. Reference in the specification to “one embodiment” or “an embodiment” means that a feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. An element having a reference number between 100 and 199 is first shown inFIG.1, while an element having a reference number between 200 and 299 is first shown inFIG.2, etc. Within a description of a given figure, previously introduced elements may or may not be referenced. As described herein, multiple levels of verification are implemented to enable components of a software update and asset delivery system to verify other components within the system. Previous software update systems relied upon on-device logic to manage and acquire software updates for a client device. Instead of using on-device update logic, embodiments described herein provide a server-based system for managing software updates. Secure software updates are facilitated via server-based cryptographic signing of asset receipts and server-based management of software updates via software asset registration. During a software update, a client device can download a list of software assets associated with the update according to catalogs provided by an asset server. In one embodiment, asset catalogs are provided only to client devices that are authorized to receive software updates. For example, a device manufacturer can determine whether a communicating device is a legitimate device from that manufacturer, rather than an unauthorized duplicate or counterfeit device. Additionally, the assets specified by a catalog for a software update can be tuned for individual devices to enable specific devices to download specific versions of an asset that are tailored for the specific client device. Specific devices can be enrolled on specific update tracks, such that those devices will automatically receive development versions of specific assets, even if other, unrelated assets are production versions. Some embodiments described herein enable managed software updates for enterprise client devices. A device can be enrolled in a specific enterprise update program in which software updates are delayed according to an update policy established in coordination between the device vendor and technology personnel associated with the multiple enterprises that are enrolled in the enterprise managed software update system.
The managed software update system can restrict visibility of certain software updates or assets associated with those software updates until the internal software system of an enterprise is prepared for internal devices to receive such updates. Furthermore, the managed update system can be used to gate installation of assets or software updates acquired via alternate mechanisms than the standard update system. For example, a client device can be configured to communicate with a managed update server to determine if an otherwise legitimate software update may be installed on the client device. The managed update server can indicate that the software update is not to be installed if enterprise update policies indicate that managed devices are not yet ready for a specific update. Secure Update System FIG.1is a block diagram of a secure software update system100, according to an embodiment as described herein.FIG.1provides an overview of components of the secure software update system100, whileFIG.2A-2Cdescribe various sub-systems in greater detail. The secure update system100includes a set of server devices that are interconnected via one or more networks. The one or more networks that interconnect the server devices can include local networks, such as corporate or datacenter networks, and can also include wide-area networks, such as the Internet. In one embodiment, the software update system100includes a build server101, an asset server105. One embodiment additionally includes a managed update server107. A further embodiment additionally includes a signing server102and attestation server104, which respectively cooperate with and facilitate operations of the build server101and the asset server105. Embodiments described herein can additionally include server devices associated with a content delivery network103. Each server described herein can be an individual server device, a virtual server, or a virtual server system hosted on a cluster of server devices. A subset of the server devices described herein can interact with a client device106. The illustrated client device106represents one or more of a plurality of client devices that can connect with the secure software update system100to acquire assets associated with a software update to be performed by the client device106. The secure update system100, in one embodiment, may be expected to service potentially millions of individual instances of the client device106. Each client device106described herein can be one of various types of electronic devices, including but not limited to mobile electronic devices such as smartphones, table computers, and wearable electronic devices. In one embodiment, a client device106can also include a portable computing device, such as a laptop computing device. In one embodiment, the client device106can be a desktop computing device. In one embodiment, a client device106can be one of a variety of other electronic devices, including cellular telephones, personal digital assistants (PDAs), including cellular-enabled PDAs, network connected television set top boxes, or entertainment systems, including electronic gaming systems. The client device106can also be or integrated into other consumer electronic devices, such as smart appliance devices, or one or more implementations of a smart media playback device, such as a smart speaker device. The client device106can also be a network connected smart home device, such as a smart thermostat, lighting system, garage door opener, or other smart home devices. 
As shown inFIG.1, the build server101can be configured to compile software code associated with software assets and package the compiled software code into installable software assets. The collective packaged software assets can be further packaged into a larger software update for a client device. A software update package for a client device can include update packages for the various components of software executing on a client device, such as user interface modules, hardware drivers, and various frameworks and libraries used by system software and application software that can execute on a client device. The build server101can build each of these assets and register new versions of those assets with the asset server105. In one embodiment, the assets built by the build server101can be stored to a content delivery network103. The content delivery network103is a system of networked and distributed servers that can be used to deliver software and other content to users and/or client devices. The content delivery network103can reduce the latency of content delivery via the use of various cache servers that are geographically distributed. The cache servers replicate data stored to an origin server. When asset data is requested from a content delivery network103, the asset data can be delivered from one of multiple servers based on the geographic location of the client device. Storage location and version information for an asset that is built by the build server101can be provided to the asset server105during an asset registration process. Based on the provided registration data, the asset server105can maintain a catalog of various production and development versions of assets. During development of a software update, functionality and compatibility of the various assets can be tested and validated by software development and validation personnel. Various software updates that include development and production versions of assets can be cataloged on the asset server105. In one embodiment, the build server101also generates a receipt for each asset that is registered along with the asset at the asset server105. The receipt can include descriptive information for the asset and can be signed as a method of attesting to the authenticity of the asset, where an authentic asset is an asset that is built by and associated with the vendor or software provider of the client device106. In one embodiment, the build server101can use a signing server102to sign the asset receipt. The signed asset receipt can then be registered with the asset at the asset server105. In one embodiment, the asset server105provides a registry of live assets that are available to a client device106. To perform a software update, the client device106can send an asset request to the asset server105. The asset server105can provide a response to the client device that includes catalog data that tells the client device106how to acquire the asset. Before catalog information allowing acquisition of an asset is provided to the client device106, the asset server105can verify that the client device106is an authentic device. For example, a client device106is an authentic device if the device was manufactured by and registered with the device vendor of the client device. The client device106can include one or more hardware reference keys that are derived from and/or stored within hardware of the client device106.
The hardware reference key can be provided to the asset server105, which can verify the legitimacy of the hardware reference key, and thus, verify the legitimacy of the client device106. In one embodiment, the attestation server104can be used to attest to the validity of the client device106. The asset server105can provide the one or more hardware reference keys, as well as other identifying data for the client device106to the attestation server104. The attestation server104can then determine the authenticity of the client device106. For example, unauthorized and/or counterfeit versions of the client device may not have a hardware reference key that conforms to the proper cryptographic technique used to validate client devices106. In addition to key verification, in one embodiment the attestation server104can query a device registry of client devices to determine if the client device106, and/or associated keys or identifiers of the client device, were registered to the device registry by the hardware vendor. After the asset server105determines the client device106to be valid, a catalog can be provided to the client device106that includes a download location and a signed asset receipt for a requested asset. Determining the validity of the client device106before providing catalog information enables the list and location of assets to be provided only to authorized client devices, rather than storing the catalog information to a well-known network location that may be accessible to third parties. In one embodiment, the asset server105can provide customized assets to specific instances of the client device106. Based on a device identifier or hardware reference key associated with a given device, specific versions of specific assets can be provided to the client device106. A specific device can be identified as a development device that will receive the latest available development version of an asset, even if such asset will not be included in production versions of software updates. In one embodiment, the asset server105can work in concert with a managed update server to enable managed updates for enterprise client devices. For example, the client device106can be enrolled in an enterprise update program in which software updates are delayed according to an enterprise update policy. Multiple enterprises can be enrolled in the enterprise managed software update system. Devices associated with those enterprises can be registered with the secure software update system100using one or more of various device registration techniques. For example, specific devices associated with a specific enterprise can be enumerated based on a list or range of device identifiers. Alternatively, enterprise devices can be provisioned with certificates or profiles that associate the device with a specific enterprise managed software update program. The specific update schedule for devices associated with a given enterprise managed software update system can be established and managed in coordination between the device vendor and technology personnel associated with the enterprise. During a delay period, the managed update server107can work in concert with the asset server to restrict visibility of certain software updates or assets associated with those software updates until the update is allowed by the enterprise. For example, updates can be delayed until the internal software system of an enterprise is updated to support internal devices having the latest available update.
Furthermore, the managed update server107can be used to gate installation of assets or software updates acquired via alternate mechanisms than the standard update system. For example, a client device106can be configured to communicate with the managed update server107to determine if an otherwise legitimate software update may be installed on the client device. The managed update server can indicate that the software update is not to be installed if enterprise update policies indicate that managed devices are not yet ready for a specific update. Once a client device106receives a catalog for an asset, the client device can determine the validity of the asset by verifying the signed asset receipt associated with the asset. The signed asset receipt is generated by the build server101and attests to the source of the asset. Verifying the signed asset receipt provides a degree of assurance to the client device106that the asset is a genuine asset that is built by the build server101, or otherwise provided by a legitimate software vendor. Secure Update Sub-Systems and Methods FIG.2A-2Cdescribe various sub-systems of the secure software update system100ofFIG.1.FIG.2Aillustrates a build sub-system200for assets associated with a software update.FIG.2Billustrates an asset service sub-system210to facilitate access to software update assets for the client device106.FIG.2Cillustrates a managed update sub-system for the client device106. Operations for methods associated with the illustrated sub-systems are shown inFIGS.3A-3CandFIGS.4A-4B. The processes and operations depicted in the figures that follow can be performed via processing logic that includes hardware (e.g. circuitry, dedicated logic, etc.), software (as instructions on a non-transitory machine-readable storage medium), or a combination of both hardware and software. Although some of the processes are described below in terms of sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially. Additionally, some operations may be indicated as optional and are not performed by all embodiments. FIG.2Aillustrates the build sub-system200, which includes the build server101, the signing server102, and the content delivery network103. The build server101can build an asset and generate the asset receipt204. In one embodiment, while the build server101may build assets and generate receipts for those assets, the build server101may not include the cryptographic keys used to sign assets receipts. In such embodiment, overall security of the build sub-system200is enhanced by restricting asset receipt signing to one or more specific, dedicated servers that are accessible by a limited number of individuals. The build server101includes an authentication key201that identifies the build server101and authenticates the build server with the signing server102. The build server101can authenticate with the signing server102using the authentication key201. The build server101can then request the signing server102to generate a signed asset receipt206using a receipt signing key202. The build server101can then register the asset and signed asset receipt206with the asset server105. Registration can include indicating to the asset server105the randomized location208in which the build server101has stored the asset on the content delivery network103. 
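The build-and-register flow of the build sub-system200can be sketched as follows. The HMAC stands in for the signature produced by the signing server102, and the registry, signing key, and CDN URL are illustrative placeholders rather than the actual services described in this disclosure.

```python
# Conceptual sketch: build an asset receipt, have it signed, register the
# asset, and record a randomized storage location on the content server.
import hashlib
import hmac
import json
import secrets

RECEIPT_SIGNING_KEY = b"signing-server-private-key"   # hypothetical key material

def create_asset_receipt(asset_name, asset_bytes, version):
    return {
        "asset": asset_name,
        "version": version,
        "digest": hashlib.sha256(asset_bytes).hexdigest(),
    }

def signing_server_sign(receipt):
    # Stand-in for the dedicated signing server; a real system would use
    # an asymmetric signature held only by that server.
    payload = json.dumps(receipt, sort_keys=True).encode()
    return hmac.new(RECEIPT_SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def register_asset(asset_registry, asset_name, asset_bytes, version):
    receipt = create_asset_receipt(asset_name, asset_bytes, version)
    signed_receipt = {"receipt": receipt, "signature": signing_server_sign(receipt)}
    # Upload to a randomized location so the content server cannot be browsed.
    location = f"https://cdn.example.com/assets/{secrets.token_hex(16)}"
    asset_registry[asset_name] = {"location": location, "signed_receipt": signed_receipt}
    return asset_registry[asset_name]

if __name__ == "__main__":
    registry = {}
    entry = register_asset(registry, "kernelcache", b"...binary...", "21A100")
    print(entry["location"], entry["signed_receipt"]["signature"][:16])
```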
Storing the asset to a randomized location on the content delivery network103can limit the ability of third parties to browse well-known locations on the content delivery network103to acquire assets or asset catalogs. FIG.3Aillustrates a method300of operating the build sub-system200, according to an embodiment. In one embodiment, the build server101can build an asset associated with a software update for an electronic device, as shown at block302. The build server101can then create an asset receipt for the asset, as shown at block304. The asset receipt can be used to attest to the validity of the asset, in that the asset was built by the official build server for the assets. The build server101can then register the asset with an asset server105, as shown at block306. The asset server105can host a registry of assets associated with the software update. As part of the registration the build server101can provide a signed version of the asset receipt to the asset server105. The signature of the asset receipt can be used to attest to the validity of the asset receipt as well as to the validity of the build server101. The build server101can then upload the asset to a randomized location on a content server, as shown at block308. The content server, in one embodiment, is a server of the content delivery network103. FIG.2Billustrates an asset service sub-system210, which in one embodiment includes the asset server105, attestation server104, and client device106. The asset server105can receive registration of assets from the build server101of build sub-system200. The asset server105can also sync asset registration data with the managed update server107, which is further described in relation to sub-system220ofFIG.2C. In one embodiment, the client device106can send an asset request212to the asset server105. The asset request212can include a hardware reference key211of the client device106. The asset server105can validate the authenticity of the client device106using the hardware reference key211. Validation of the authenticity of the client device106can include sending an attestation request214to the attestation server104. The attestation server104can determine whether or not the client device106is authentic and can send an attestation response216to the asset server105. Having determined the client device106is authentic or received attestation that the client device106is authentic, the asset server105can use a response signing key213to sign an asset response. The asset server105can then send the signed asset response218to the client device106. The signed asset response218can indicate assets that are specific to the client device106. Depending on the configuration of the client device106, the asset server105can provide an asset response218that indicates mass-market assets that are generally available to client devices, specialized assets for specific devices, such as development versions of the assets, or assets associated with an enterprise managed update program, which can be determined based on coordination between the asset server105and the managed update server107. In one embodiment, the signed asset response218includes the signed asset receipt206that was provided by the build server and, in one embodiment, signed by the signing server102. The signed asset response218also includes an asset response signature, which is a signature applied using the response signing key213held by the asset server105. 
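A corresponding sketch of the asset server105side of FIG.2B: verify the device's hardware reference key (here against a toy registry standing in for the attestation server104), then return a signed response containing the catalog entry and signed asset receipt. Keys, identifiers, and the registry shape (reused from the previous sketch) are illustrative assumptions.

```python
# Illustrative asset-request handling; not the actual server implementation.
import hashlib
import hmac
import json

RESPONSE_SIGNING_KEY = b"asset-server-response-key"    # hypothetical
REGISTERED_DEVICE_KEYS = {"device-123": "a1b2c3"}      # hypothetical device registry

def attestation_server_verify(device_id, hardware_reference_key):
    # Stand-in for the attestation server's key and registry checks.
    return REGISTERED_DEVICE_KEYS.get(device_id) == hardware_reference_key

def handle_asset_request(asset_registry, device_id, hardware_reference_key, asset_name):
    if not attestation_server_verify(device_id, hardware_reference_key):
        return {"error": "request denied"}              # deny the asset request
    entry = asset_registry[asset_name]
    body = {
        "asset": asset_name,
        "location": entry["location"],                  # randomized download location
        "signed_receipt": entry["signed_receipt"],
    }
    signature = hmac.new(RESPONSE_SIGNING_KEY,
                         json.dumps(body, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"body": body, "response_signature": signature}
```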
The client device106can verify the signature on the signed asset receipt206and the asset response signature219of the signed asset response218. FIG.3Billustrates a method310of operation of the asset server105of the asset service sub-system210, according to one embodiment. As shown at block312, the method310includes to receive, at the asset server105, an asset request212including a hardware reference key211of the client device106. The asset server105can verify authenticity of the client device106based on the hardware reference key211of the client device, as shown at block314. Verifying authenticity can be performed via the attestation server104in response to the attestation request214sent by the asset server105. If the requesting client device106is an authentic client device, as determined at block315, the asset server105can provide a signed response and a signed version of the asset receipt to the client device106, as shown at block318. Otherwise, the asset server can deny the asset request at block316. FIG.3Cillustrates a method320of operation of the client device106of the asset service sub-system210, according to an embodiment. As shown at block322, a client device106can request, from the asset server105, an asset associated with a software update. As shown at block324, the client device106can receive a signed response to the request, where the signed response includes a signed asset receipt206. The client device106can verify the asset response signature219and the signature of the signed asset receipt206, as shown at block326. If the asset response and receipt are authentic, as determined at block327, the client device106can download the asset from the randomized location indicated by the asset request response, as shown at block329. Otherwise, the client device106can reject the asset at block328, as the asset may not be a legitimate software module or may be a legitimate software module that includes modifications not included in the asset as originally uploaded by the build server101. FIG.2Cillustrates a managed update sub-system220in which access to a software update and/or specific assets associated with the software update can be delayed. In one embodiment, the managed update sub-system220can be used to gate installation of assets acquired via the asset service sub-system210FIG.2B. The client device106can send a managed update request222to the managed update server107. The managed update request222can be received by the managed update server107, which can then determine whether a downloaded asset221can be installed as part of a software update to the client device106. The downloaded asset221can be downloaded by the client device106based on catalog information provided to the client device106by the asset server105. The asset server105can synchronize with the managed update server107with respect to asset versions that are available to the client device106when the client device is enrolled in an enterprise managed update system. For example, the managed update server107and the asset server105can synchronize to determine a special asset list225, which lists the current asset versions to be made available for a client device106in a managed update system. If the downloaded asset221can be installed, the downloaded asset can be included as part of a managed software update. A signed software update228can be generated by the managed update server107, which, in one embodiment, can be signed by a secure boot signing key223associated with the managed update server107. 
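The client-side checks of method320described above can be sketched under the same illustrative HMAC assumption used in the preceding server sketches: verify the asset response signature, verify the signed asset receipt, and confirm that the downloaded bytes match the digest recorded in the receipt before accepting the asset.

```python
# Conceptual client-side verification; keys and structures are illustrative.
import hashlib
import hmac
import json

def verify(payload, signature, key):
    expected = hmac.new(key, json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def accept_asset(signed_response, asset_bytes, response_key, receipt_key):
    body = signed_response["body"]
    if not verify(body, signed_response["response_signature"], response_key):
        return False                                   # reject: bad response signature
    signed_receipt = body["signed_receipt"]
    if not verify(signed_receipt["receipt"], signed_receipt["signature"], receipt_key):
        return False                                   # reject: bad asset receipt
    # Downloaded bytes must match the digest attested to by the build server.
    return hashlib.sha256(asset_bytes).hexdigest() == signed_receipt["receipt"]["digest"]
```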
The secure boot signing key223can be used to apply an update signature229to a signed software update228, enabling the client device106to securely boot the signed software update228. The managed update server107includes a secure boot signing key223to enable the generation of special boot images for the client device106when the client device is enrolled in a specific enterprise managed update program. Such images may differ from the boot images deployed to mass market devices. FIG.4A-4Billustrate methods to perform operations associated with managed updates of a client device, according to embodiments described herein.FIG.4Aillustrates a method400by which a device specific asset can be provided to a client device.FIG.4Billustrates a method410of gating asset installation for a client device that is enrolled in a managed update system. As part of method400shown inFIG.4A, an asset server105, at block402, can determine that a client device106associated with an asset request is an authentic client device. In one embodiment, the asset server105can determine that the client device106is authentic using operations of method310illustrated inFIG.3B. As shown at block404, the asset server105, in consultation with managed update server107, can determine if the client device106is associated with a special asset list225. The special asset list225, in one embodiment, can list assets associated with client devices in an enterprise managed update system. In one embodiment, the special asset list225, or a similar list, registry, or database, can be used to determine, at block405, if the client device106is a special client device. The client device106is a special client device if the client device is, for example, part of an enterprise managed software update system in which access to software updates are delayed. The client device106is also a special client device if the device is on a list of devices for which access to software updates are accelerated. For example, the client device can be part of a beta software update program or another early access software update program in which development versions of assets may be supplied during a software update. If at block405the client device106is determined not to be a special client device, the asset server105can provide a response that indicates a mass market asset, as shown at block406. If at block405the client device106is determined to be a special client device, the asset server105can determine, at block407, if the client device is an enterprise client device. Specifically, the asset server105can determine if the client device106is enrolled in an enterprise managed software update system that is managed in concert with the managed software update server107. If, at block407, the client device106is determined to be an enterprise client device, or otherwise enrolled in an enterprise managed update system, the asset server105can provide a response that is tailored for the enterprise of the client device, as shown at block409. If the client device106is a special client device but is not an enterprise client device, in one embodiment the asset server105can provide a response that is tailored for the individual client device, as shown at block408. The operations of method400are exemplary of one embodiment described herein and the specific operations may vary across embodiments. Additional techniques of tailoring assets to multiple specific client devices can also be employed. 
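One way to picture the selection logic of FIG.4Ais the following sketch; the special asset list, enterprise registry, and asset names are hypothetical placeholders for the coordination between the asset server105and the managed update server107.

```python
# Illustrative catalog selection: mass market, device-tailored, or enterprise.
def select_asset_response(device_id, special_asset_list, enterprise_devices,
                          mass_market_asset, tailored_assets, enterprise_assets):
    if device_id not in special_asset_list:
        return mass_market_asset                        # ordinary device
    if device_id in enterprise_devices:
        enterprise = enterprise_devices[device_id]
        return enterprise_assets[enterprise]            # enterprise-managed response
    return tailored_assets.get(device_id, mass_market_asset)  # device-tailored response

if __name__ == "__main__":
    print(select_asset_response(
        "dev-7",
        special_asset_list={"dev-7", "dev-9"},
        enterprise_devices={"dev-9": "acme"},
        mass_market_asset="asset-v1.0",
        tailored_assets={"dev-7": "asset-v1.1-beta"},
        enterprise_assets={"acme": "asset-v0.9"}))       # -> asset-v1.1-beta
```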
For example, client devices associated with members of a team of internal software developers of a device vender can be added to a list of special devices, such that those devices can receive the latest published assets of that team, even if those assets are development versions, while other assets may be production versions of those assets. As part of method410shown inFIG.4B, a client device106can download an asset (e.g., downloaded asset221) associated with a software update, as shown at block412. The client device106, as shown at block414, can send a managed update request222to the managed update server107to determine if installation of the asset is allowed. If at block415the managed update server107indicates install is allowed, the client device106can install the asset as part of a managed software update, as shown at block417. If the managed update server107indicates that install is not allowed, for example if the enterprise managed update policy indicates that the asset is too new of a version to be installed on the client device106, the client device will not install the asset as part of the managed software update, as shown at block416. Additionally, the method410can be used to prevent unauthorized rollbacks of assets once a newer version of the asset has been installed. In one embodiment, the method410can be applied to client devices in general, such that once a version of an asset has been installed, a rollback to a previous version is not allowed. Processing Components of a Client Device FIG.5A-5Billustrate block diagrams of processing components of a client device106as described herein.FIG.5Aillustrates a processing system500that enables secure and authenticated access to software update assets and provides processing resources to enable an operating system of the client device106to install such assets.FIG.5Billustrates a secure processor520of the processing system500that can be used to accelerate cryptographic operations and can securely store hardware keys associated with the client device106. The processing system500shown inFIG.5Acan perform operations including generating one or more hardware reference keys, securely storing hardware reference keys, and attesting to details of the processing system500(e.g., processor type). The processing system500can additionally attest to the operating system, including operating system version, in use when the one or more hardware reference keys are generated. In one embodiment, the processing system500includes memory510, one or more processors, such as application processors515, and a secure processor520. Instructions for an operating system can be stored in memory510, where the instructions can be executed at least in part by the application processors515. The processing system500, in one embodiment, can be a system on a chip (SoC) in which at least the application processors515and the secure processor520are integrated into a single integrated circuit. The memory510, application processors515, and secure processor520can be coupled via an interconnect505, which can include one or more buses, rings, fabrics, or other types of interconnects. The secure processor520is further illustrated inFIG.5B. In one embodiment, the secure processor520includes, but is not limited to a public key accelerator522, an advanced encryption standard (AES) module524, and a secure memory526. The secure processor520can also execute a dedicated secure processor operating system530that is separate from an operating system executed by the application processor. 
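Referring back to method410ofFIG.4B, the install gate and rollback check can be sketched as shown below; the policy table and version strings are illustrative, and a production system would compare structured build versions rather than plain strings.

```python
# Illustrative managed-update gating with a simple rollback check.
def managed_update_allows(policy, enterprise, asset, version):
    """Managed update server side: is this version permitted for the enterprise?"""
    allowed = policy.get(enterprise, {}).get(asset)
    return allowed is not None and version <= allowed

def client_may_install(installed_version, candidate_version, server_allows):
    if candidate_version <= installed_version:
        return False                  # never roll back to an older or equal version
    return server_allows              # otherwise defer to the managed update server

if __name__ == "__main__":
    policy = {"acme": {"kernelcache": "21A100"}}
    ok = managed_update_allows(policy, "acme", "kernelcache", "21A101")
    print(client_may_install("21A099", "21A101", ok))   # False: policy not yet at 21A101
```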
In one embodiment, the secure processor operating system530can use the accelerators and modules of the secure processor520to generate keys used to demonstrate that a client device106is a trusted client device. One or more hardware reference keys can be generated by the secure processor to demonstrate authenticity and trustworthiness of a client device106to the attestation server104. In one embodiment, a hardware reference key can be generated in part based on a unique identifier (UID) stored in secure memory526. The secure processor operating system530can generate a seed and pass the seed to the AES module524. The AES module524can read the UID from the secure memory526and encrypt the seed and the UID. The AES module524can then pass the seed, the encrypted seed, and the encrypted UID to the public key accelerator522. The public key accelerator522can then generate one or more key pairs having public and private keys. In one embodiment, the public key accelerator522can generate the one or more key pairs using information such as the version of the secure processor operating system530. The one or more key pairs can be used as hardware reference keys to attest to the validity of the client device106during an asset request. For example, a public portion of the hardware reference key can be provided along with additional device information such as one or more chip identifiers, a processor type or class, and one or more signatures. The provided information can be analyzed by an attestation server104to determine if the client device106is a valid and/or legitimate client device that is authorized to receive catalog data for assets associated with a software update. FIG.6is a block diagram illustrating a secure processor600, according to an embodiment. The secure processor600can be a variant of the secure processor520ofFIG.5A-5B. In the illustrated embodiment, the secure processor600includes one or more processor(s)632, security peripherals636A-636E, the secure ROM634, secure mailbox660, filter662, power control unit664, clock control unit665, and a unique identifier (UID)668. The filter662may be coupled to the interconnect505ofFIG.5Aand to a local interconnect670to which the other components of the secure processor600are also coupled. The local interconnect670can be configured as a bus-based interconnect or another interconnect such as a packet-based, hierarchical, point-to-point, or crossbar fabric. In one embodiment, the security peripherals636A-636E coupled with the local interconnect670include a set of AES encryption engines636A-636B, an authentication circuit636C, a secure interface unit636D, and other security peripherals636E. In one embodiment, a first AES encryption engine636A can couple to the processor(s)632. The processor(s)632are one or more processor cores that manage operations within the secure processor. The processor(s)632can execute a secure operating system that is separate from the host operating system, such as the operating system executed by the application processors515ofFIG.5A. In one embodiment, the secure processor operating system is a micro-kernel based operating system that is optimized for mobile or embedded processors. The processor(s)632can couple with the secure mailbox660and the power control unit664. The power control unit664can be coupled to the clock control unit665and an external power manager. The clock control unit665can also be coupled to the power manager, which can cause the clock control unit665to enable or disable the input clock signal.
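The hardware reference key derivation described earlier in this passage (a seed generated by the secure processor operating system530, encrypted together with the UID by the AES module524, then consumed by the public key accelerator522) can be pictured with the following conceptual sketch. HMAC and hash operations stand in for the hardware AES engine and asymmetric key generation; this is not the actual cryptography of a shipping secure processor, and the UID value is made up.

```python
# Conceptual data-flow sketch: seed -> AES(UID) -> key pair -> public key shared.
import hashlib
import hmac
import os

DEVICE_UID = bytes.fromhex("00112233445566778899aabbccddeeff")  # illustrative fused UID

def aes_module_encrypt(seed, uid):
    # Stand-in for the hardware AES engine keyed by the device UID.
    return hmac.new(uid, seed, hashlib.sha256).digest()

def public_key_accelerator(seed, encrypted_uid, sep_os_version):
    # Stand-in for asymmetric key-pair generation inside the secure processor.
    private = hashlib.sha256(seed + encrypted_uid + sep_os_version).digest()
    public = hashlib.sha256(b"public" + private).digest()
    return private, public

def generate_hardware_reference_key(sep_os_version=b"sepOS-1.2"):
    seed = os.urandom(32)
    encrypted_uid = aes_module_encrypt(seed, DEVICE_UID)
    _private, public = public_key_accelerator(seed, encrypted_uid, sep_os_version)
    # Only the public portion and descriptive device data leave the secure processor.
    return {"public": public.hex(), "sep_os_version": sep_os_version.decode()}

if __name__ == "__main__":
    print(generate_hardware_reference_key())
```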
The clock control unit665can then provide clock signals to the other components of the secure processor600. In one embodiment, a second AES encryption engine636B can couple with a set of fuses that define the UID668, which at least quasi-uniquely identifies the specific device that includes the secure processor600. The second AES encryption engine636B may be responsible for secure key generation and can output generated keys to cryptographic circuits and/or other circuitry within the SoC that houses the secure processor600. For example, in one embodiment the second AES encryption engine636B can act as the public key accelerator522as inFIG.5B. In one embodiment, the filter662can be configured to tightly control access to the secure processor600to increase the isolation of the secure processor from the rest of the SoC that contains the secure processor (e.g., application processor515ofFIG.5A). In an embodiment, the filter662may permit read/write operations from an interconnect (e.g., interconnect505ofFIG.5A) to enter the secure processor600only if the operations address the secure mailbox660. The secure mailbox660may include an inbox and an outbox, each of which may be a first-in, first-out (FIFO) buffer. The FIFO buffers may have any size and can contain any number of entries, where each entry can store data from a read or write operation. In one embodiment, the inbox is configured to store write data from write operations sourced from the interconnect, while the outbox can store write data from write operations sourced by the processor(s)632. In one embodiment, the filter662can permit write operations to the address assigned to the inbox portion of the secure mailbox660and read operations to the address assigned to the outbox portion of the secure mailbox660. All other read/write operations may be discarded or blocked by the filter662. In one embodiment, the filter662responds to other read/write operations with an error and can sink write data associated with a filtered write operation without passing the write data on to the local interconnect670. In one embodiment, the filter662can also supply nonce data as read data for a read operation that is filtered. The supplied nonce data can be any data that is unrelated to the addressed resource within the secure processor600, and may be all zeros, all ones, random data from a random number generator, data programmed into the filter662to respond as read data, the address of the read transaction, or other data. In an embodiment, the filter662only filters incoming read/write operations, allowing components within the secure processor600to have full access to other components to which the secure processor is integrated. In such an embodiment, the filter662will not filter responses from the SoC fabric that are provided in response to read/write operations issued by the secure processor600. In one embodiment, write data for write operations generated by the processor(s)632that are to be transmitted by the secure processor600may optionally be encrypted by an AES encryption engine636A-636B. An attribute of the write operation issued by the processor(s)632may indicate whether data is to be encrypted. The attribute may be a packet field, in packet-based embodiments, a signal transmitted with the write operation, in bus-based embodiments, or may be transmitted in any other desired fashion. In the illustrated embodiment, AES encryption engines636A-636B are described.
However, additional or alternate encryption circuits can be included accelerate other encryption algorithms, including but not limited to ECC, RSA, or DES. The power control unit664may be configured to control the power gating of the secure processor600. The power control unit664may be coupled to processor(s)632, and may monitor the processor to determine when power gating is to be requested. Responsive to determining that power gating is to be requested, the power control unit664can transmit a power gating request to an external power manager. The power manager can determine that the secure processor600is to be powered gated and can then power gate the secure processor600. The power control unit664may also be configured to control clock gating in the secure processor600. Alternatively, the clock control unit665may be configured to control the clock gating in the secure processor600. Clock gating may be controlled locally or may be requested from the power manager in various embodiments. The clock control unit665may be configured to control the local clocks in the secure processor600. The clock control unit665may be coupled to receive an input clock and may generate the clocks local to the secure processor600. The clock control unit665may be programmable (e.g. by processor(s)632) with clock ratios, clock enables, clock gating enables, etc. for the various clocks in the secure processor600. The secure ROM634is coupled to the local interconnect670and may respond to an address range assigned to the secure ROM634on the local interconnect670. The address range may be hardwired, and the processor(s)632may be hardwired to fetch from the address range at boot to boot from the secure ROM634. The secure ROM634may include the boot code for the secure processor600as well as other software executed by processor(s)632during use (e.g. the code to process inbox messages and generate outbox messages, code to interface to the security peripherals636A-636E, etc.). In an embodiment, the secure ROM634may store all the code that is executed by the processor(s)632during use. In one embodiment, the security peripherals636A-636E include an authentication circuit636C that is used to perform authentication operations for the secure processor600. The authentication circuit636C may implement one or more authentication algorithms, such as but not limited to a secure hash algorithm (SHA) such as SHA-1, SHA-2, SHA-3, or any other authentication algorithm. In one embodiment, the authentication circuit can work in concert with other security peripherals636E within the secure processor600. In addition to security peripherals designed to perform specific functions, there may also be security peripherals that are interface units for secure interfaces such as the secure interface unit636D. In the illustrated embodiment, the secure interface unit636D is an interface to an off-chip secure memory that may be used to secure storage by the secure processor600. The secure memory can be encrypted using an ephemeral key that is based in part on the UID668. The ephemeral key is occasionally re-generated. For example, and in one embodiment the secure processor600can re-generate the ephemeral key during each boot cycle. Only the secure processor600has access to the ephemeral key used to access secure memory. The secure memory enables the secure processor600to have secure access to system memory to store data that may not fit within memory internal to the secure processor600. 
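A behavioral sketch of the secure mailbox660and filter662described above: only writes to the inbox address and reads from the outbox address are forwarded, while other operations are sunk or answered with nonce data. The addresses and message contents are made up for illustration.

```python
# Illustrative mailbox/filter behavior; not a hardware model.
from collections import deque

INBOX_ADDR, OUTBOX_ADDR = 0x1000, 0x1004   # hypothetical mailbox addresses

class SecureMailboxFilter:
    def __init__(self):
        self.inbox = deque()    # FIFO written from the host interconnect
        self.outbox = deque()   # FIFO written by the secure processor

    def host_write(self, addr, data):
        if addr == INBOX_ADDR:
            self.inbox.append(data)
            return True
        return False            # filtered write: data is sunk, not forwarded

    def host_read(self, addr):
        if addr == OUTBOX_ADDR and self.outbox:
            return self.outbox.popleft()
        return 0x00000000       # nonce data returned for a filtered read

if __name__ == "__main__":
    f = SecureMailboxFilter()
    f.host_write(INBOX_ADDR, "request: generate key")
    f.outbox.append("response: key ready")
    print(f.host_read(OUTBOX_ADDR), f.host_read(0x2000))
```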
Exemplary Device Architectures FIG.7is a block diagram of a device architecture700for a mobile or embedded device, according to an embodiment. The device architecture700includes a memory interface702, a processing system704including one or more data processors, image processors and/or graphics processing units, and a peripherals interface706. The various components can be coupled by one or more communication buses or signal lines. The various components can be separate logical components or devices or can be integrated in one or more integrated circuits, such as in a system on a chip integrated circuit. The device architecture700can be used to implement a client device106as described herein. The memory interface702can be coupled to memory750, which can include high-speed random-access memory such as static random-access memory (SRAM) or dynamic random-access memory (DRAM) and/or non-volatile memory, such as but not limited to flash memory (e.g., NAND flash, NOR flash, etc.). Sensors, devices, and subsystems can be coupled to the peripherals interface706to facilitate multiple functionalities. For example, a motion sensor710, a light sensor712, and a proximity sensor714can be coupled to the peripherals interface706to facilitate the mobile device functionality. One or more biometric sensor(s)715may also be present, such as a fingerprint scanner for fingerprint recognition or an image sensor for facial recognition. Other sensors716can also be connected to the peripherals interface706, such as a positioning system (e.g., GPS receiver), a temperature sensor, or other sensing device, to facilitate related functionalities. A camera subsystem720and an optical sensor722, e.g., a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips. Communication functions can be facilitated through one or more wireless communication subsystems724, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the wireless communication subsystems724can depend on the communication network(s) over which a mobile device is intended to operate. For example, a mobile device including the illustrated device architecture700can include wireless communication subsystems724designed to operate over a GSM network, a CDMA network, an LTE network, a Wi-Fi network, a Bluetooth network, or any other wireless network. In particular, the wireless communication subsystems724can provide a communications mechanism over which a media playback application can retrieve resources from a remote media server or scheduled events from a remote calendar or event server. An audio subsystem726can be coupled to a speaker728and a microphone730to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In smart media devices described herein, the audio subsystem726can be a high-quality audio system including support for virtual surround sound. The I/O subsystem740can include a touch screen controller742and/or other input controller(s)745. For computing devices including a display device, the touch screen controller742can be coupled to a touch sensitive display system746(e.g., touch-screen). 
The touch sensitive display system746and touch screen controller742can, for example, detect contact and movement and/or pressure using any of a plurality of touch and pressure sensing technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with a touch sensitive display system746. Display output for the touch sensitive display system746can be generated by a display controller743. In one embodiment, the display controller743can provide frame data to the touch sensitive display system746at a variable frame rate. In one embodiment, a sensor controller744is included to monitor, control, and/or processes data received from one or more of the motion sensor710, light sensor712, proximity sensor714, or other sensors716. The sensor controller744can include logic to interpret sensor data to determine the occurrence of one of more motion events or activities by analysis of the sensor data from the sensors. In one embodiment, the I/O subsystem740includes other input controller(s)745that can be coupled to other input/control devices748, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus, or control devices such as an up/down button for volume control of the speaker728and/or the microphone730. In one embodiment, the memory750coupled to the memory interface702can store instructions for an operating system752, including portable operating system interface (POSIX) compliant and non-compliant operating system or an embedded operating system. The operating system752may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system752can be a kernel. The memory750can also store communication instructions754to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers, for example, to retrieve web resources from remote web servers. The memory750can also include user interface instructions756, including graphical user interface instructions to facilitate graphic user interface processing. Additionally, the memory750can store sensor processing instructions758to facilitate sensor-related processing and functions; telephony instructions760to facilitate telephone-related processes and functions; messaging instructions762to facilitate electronic-messaging related processes and functions; web browser instructions764to facilitate web browsing-related processes and functions; media processing instructions766to facilitate media processing-related processes and functions; location services instructions including GPS and/or navigation instructions768and Wi-Fi based location instructions to facilitate location based functionality; camera instructions770to facilitate camera-related processes and functions; and/or other software instructions772to facilitate other processes and functions, e.g., security processes and functions, and processes and functions related to the systems. The memory750may also store other software instructions such as web video instructions to facilitate web video-related processes and functions; and/or web shopping instructions to facilitate web shopping-related processes and functions. 
In some implementations, the media processing instructions766are divided into audio processing instructions and video processing instructions to facilitate audio processing-related processes and functions and video processing-related processes and functions, respectively. A mobile equipment identifier, such as an International Mobile Equipment Identity (IMEI)774or a similar hardware identifier can also be stored in memory750. Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. The memory750can include additional instructions or fewer instructions. Furthermore, various functions may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits. FIG.8is a block diagram of a computing system800, according to embodiments described herein. The computing system800is intended to represent a range of computing systems including, for example, desktop computer systems, laptop computer systems, tablet computer systems, cellular telephones, personal digital assistants (PDAs) including cellular-enabled PDAs, set top boxes, entertainment systems or other consumer electronic devices, smart appliance devices, or one or more implementations of a smart media playback device. Alternative computing systems may include more, fewer and/or different components. The computing system ofFIG.8may be used to provide the computing device and/or a server device to which the computing device may connect. For example, the computing system800can be part of any of the server devices described herein, such as, but not limited to the build server101, signing server102, or a server of a content delivery network103. The computing system800can also be or can be included within the attestation server104, asset server105, or managed update server107described herein. The computing system800includes bus835or other communication device to communicate information, and processor(s)810coupled to bus835that may process information. While the computing system800is illustrated with a single processor, the computing system800may include multiple processors and/or co-processors. The computing system800further may include random access memory (RAM) or other dynamic storage device coupled to the bus835. The RAM can be configured as main memory820and can store information and instructions that may be executed by processor(s)810. Main memory820may also be used to store temporary variables or other intermediate information during execution of instructions by the processor(s)810. The computing system800may also include read only memory (ROM)830and/or another data storage device840coupled to the bus835that may store information and instructions for the processor(s)810. The data storage device840can be or include a variety of storage devices, such as a flash memory device, a magnetic disk, or an optical disc and may be coupled to computing system800via the bus835or via a remote peripheral interface. The computing system800may also be coupled, via the bus835, to a display device850to display information to a user. The computing system800can also include an alphanumeric input device860, including alphanumeric and other keys, which may be coupled to bus835to communicate information and command selections to processor(s)810. 
Another type of user input device includes a cursor control870device, such as a touchpad, a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to processor(s)810and to control cursor movement on the display device850. The computing system800may also receive user input from a remote device that is communicatively coupled via one or more network interface(s)880. The computing system800further may include one or more network interface(s)880to provide access to a network, such as a local area network. The network interface(s)880may include, for example, a wireless network interface having antenna885, which may represent one or more antenna(e). The computing system800can include multiple wireless network interfaces such as a combination of Wi-Fi, Bluetooth®, near field communication (NFC), and/or cellular telephony interfaces. The network interface(s)880may also include, for example, a wired network interface to communicate with remote devices via network cable887, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable. In one embodiment, the network interface(s)880may provide access to a local area network, for example, by conforming to IEEE 802.11 b and/or IEEE 802.11 g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols can also be supported. In addition to, or instead of, communication via wireless LAN standards, network interface(s)880may provide wireless communications using, for example, Time Division, Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division, Multiple Access (CDMA) protocols, Long Term Evolution (LTE) protocols, and/or any other type of wireless communications protocol. The computing system800can further include one or more energy sources805and one or more energy measurement systems845. Energy sources805can include an AC/DC adapter coupled to an external power source, one or more batteries, one or more charge storage devices, a USB charger, or other energy source. Energy measurement systems include at least one voltage or amperage measuring device that can measure energy consumed by the computing system800during a predetermined period of time. Additionally, one or more energy measurement systems can be included that measure, e.g., energy consumed by a display device, cooling subsystem, Wi-Fi subsystem, or other frequently used or high-energy consumption subsystem. In the foregoing specification, the invention has been described regarding specific embodiments thereof. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The specifics in the descriptions and examples provided may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined with some features included and others excluded to suit a variety of different applications. 
Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine cause the machine to perform acts of the method, or of an apparatus or system according to embodiments and examples described herein. Additionally, various components described herein can be a means for performing the operations or functions described in accordance with an embodiment. While the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
56,248
11861352
DETAILED DESCRIPTION The present concepts relate to smart ways to deploy packages (e.g., update, payload, or any modification) to targets. The present concepts will be explained below in the context of deploying updates to target clusters in a cloud computing fleet. However, similar concepts can be applied to other contexts. Deployment Background Cloud computing services (e.g., computation services and storage services) may be supported by data centers in multiple geographic regions in many countries around the globe. A cloud computing fleet that provides such cloud computing services may include hardware infrastructure (e.g., servers, CPUs, networking devices, storage disks) and software infrastructure (e.g., operating systems, applications, firmware, management tools). To fix, maintain, and/or improve the cloud computing services, certain packages may be deployed to the cloud computing fleet. Such packages may include new applications, operating system updates, software feature updates, security fixes, and/or configuration changes. Other types of packages may include hardware upgrades, system reboots, firmware updates, networking configuration changes, etc. FIG.1shows a safe deployment practice (SDP) framework100, which may be used with the present concepts. The SDP framework100may provide a guideline for developing a cloud deployment plan having multiple phases. The SDP framework100may involve phased rollout of the package to incrementally larger number of targets while validating the deployment of the package at various phases. In one implementation, an update (which may be considered a package or a payload in a deployment context) may be developed and tested at a development and testing phase102. Next, in a staging phase104, the update may be tested for its integration and interactions with other components as well as stress-tested for stability, quality, and interoperability, etc. The staging phase104may involve rolling out the update to one or more clusters of computers. Next, the update may be deployed to a canary phase106, which may include one or more geographic regions that include one or more availability zones (AZs) of clusters. For example, the update may be deployed to 20 clusters in the canary phase106. In the canary phase106, the update may be more fully validated and tested for further replication to other geographical regions. The canary phase106may be useful for full production-level testing, including testing the update's effects on third party services. Next, in a pilot phase108, the update may be deployed to even more clusters with more diversity of hardware, software, and configurations. For example, the update may be deployed to 70 clusters in the pilot phase108. After the pilot phase108, the update may be deployed to the remaining clusters in the cloud fleet in multiple phases. Typically, to minimize risk, the update may be deployed to clusters in geographical regions with light or medium load (i.e., low to medium utilization) before the update is deployed to clusters in geographical regions with heavy load (i.e., high utilization). Subsequently, the update may be deployed more broadly to geographical regions with a much larger number of clusters. Moreover, certain regions of clusters may be paired to serve as redundant backups for each other, especially for high-load regions that are more heavily used and/or more critical. In one example, after the pilot phase108, the update may be deployed to a medium loaded region110and then to a heavy loaded region112. 
Furthermore, a broad phase114may also be performed to broadly roll out the update incrementally to the remainder of clusters, for example, starting with first broad region pairs116and then second broad region pairs118, and so forth, until the update has been deployed to all target clusters of computers. The SDP framework100illustrated inFIG.1is just one example of a multi-phased deployment plan. The present concepts may be applicable in other deployment plans that vary in the number of phases, the names of the phases, the organizational and hierarchical groupings of computers, etc. The SDP framework100may help reduce deployment risks by detecting and correcting errors early, and preventing widespread problems associated with deploying packages to the cloud computing fleet. For instance, bake time (or delay time) may be incorporated in between phases to ensure that deployment to one phase does not exhibit any problems before moving forward to the next phase. These strategies of deploying the update to a small number of low-risk clusters first may reduce the risk of widespread downtimes and other problems experienced by many users. Validations at the early phases can build confidence in the quality of the update. Thereafter, the deployment can progress to a broader set of clusters in later phases. The multiple phases in the SDP framework100, illustrated inFIG.1, may be ordered from lowest risk (on the left-hand side ofFIG.1) to highest risk (on the right-hand side ofFIG.1) in terms of user exposure. In some implementations, planning the deployment may involve determining one or more deployment parameters, such as ordering of the clusters (i.e., which clusters should be deployed to first versus which clusters should be deployed to later), the number of clusters to deploy concurrently (i.e., batch size), completion percentage (i.e., what percentage of clusters in a group should be completed before advancing deployment to the next group), bake time (i.e., how long to wait before advancing deployment to the next group), etc. When determining optimal deployment plans, one or more objectives or goals may be considered. For example, one objective may be to minimize deployment time. The serial nature of deployment can prolong the overall deployment time significantly. Furthermore, the deployment time needed for each cluster can vary a lot from cluster to cluster. Additionally, one exploratory analysis of deployments observed that about 50% of the overall deployment time is often spent on the last 10% of the clusters, resulting in a long tail problem where some parallel resources are deploying to the last clusters while other parallel resources are inefficiently idle. Deployment reports have shown that fast deployment times can potentially be critical when deploying significant security patches. Another objective may be to reduce or minimize risk. Risk in the deployment context may include any degradation in user experience, such as errors, lags, or service interruptions. Thus, minimizing risk may be important for user satisfaction and/or business profits. In some circumstances, multiple objectives may include tradeoffs. For example, decreasing deployment time may result in increasing deployment risk, and decreasing deployment risk may result in increasing deployment time. If a package is very quickly deployed to many clusters concurrently, then deployment time would be low, but deployment risk would be high. 
Conversely, if a package is slowly deployed to few clusters at a time and there are long validations in between, then deployment risk would be low, but deployment time would be high. Thus, there may be a tradeoff between time and risk. FIG.2shows a cloud computing fleet200, in which a smart deployment plan, consistent with the present concepts, may be implemented. The cloud computing fleet200may include a multi-leveled structure. For example, multiple computers in the cloud computing fleet200may be grouped into an availability zone (AZ), multiple AZs may be grouped into a region, and multiple regions may be grouped into an SDP phase, such as canary, pilot, and broad. In one scenario, an AZ may represent a data center, and a region may represent a geographical region, such as a continent. Thus, the hierarchy inFIG.2shows the least granular level on the top row and the most granular level on the bottom row. The cloud computing fleet200illustrated inFIG.2may be a simplified depiction that does not show every computer, cluster, AZ, region, and/or SDP phase. That is, to avoid clutter and for the sake of readability,FIG.2only shows a limited number of AZs and regions. Specifically,FIG.2shows AZ1202and AZ2204paired as a first AZ pair206.FIG.2shows AZ3208and AZ4210paired as a second AZ pair212. The first AZ pair206and the second AZ pair212are part of Region1214.FIG.2shows Region1214and Region2216paired as a first region pair218.FIG.2shows Region3220and Region4222paired as a second region pair224. The first region pair218and the second region pair224are part of the canary phase. However, each SDP phase may contain more regions than shown inFIG.2, each region may contain more AZs than shown inFIG.2, and each AZ may contain more clusters than shown inFIG.2. Furthermore, as demonstrated by this example inFIG.2, some of the regions and AZs may be grouped in twos (pairs or duos) or threes (trios) or larger numbers, where they redundantly back each other up. For instance, two regions (e.g., US-East and US-West) may be configured to back each other up so that if one fails, then the other takes over. Therefore, deployment plans may be limited or constrained so that not all regions or AZs in a group can be updated concurrently. For example, a deployment plan may deploy an update to only one region in a region pair at a time rather than deploying to both regions in the region pair concurrently, to ensure that there is active backup support for each region during deployment. The numbered dotted arrows inFIG.2may represent a hypothetical sequence of deployment of the circled computers. Although some deployment plans may require 100% completion of an AZ or a region before proceeding to deploy to the next AZ or the next region, other deployment plans may allow advancement to the next AZ or the next region after less than 100% completion. For example, as illustrated inFIG.2, once 50% of the clusters in the first AZ pair206are completed (as indicated by the circled clusters in AZ2204), deployment may proceed to the second AZ pair212, as indicated by the dotted arrow labeled “1.” However, the example deployment plan illustrated inFIG.2may wait until 100% of the clusters in the first region pair218are completed (as indicated by the circled clusters in Region1214and in Region2216) before proceeding to the second region pair224, as indicated by the dotted arrow labeled “3.” A different minimum completion percentage constraint may be set for each AZ, region, and/or SDP phase.
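The grouping, pairing, and partial-completion rules just described can be made concrete with a small sketch. The following Python fragment is illustrative only; the class and helper names (GroupParameters, REGION_PAIRS, violates_pairing, may_advance) and the example values are hypothetical and not part of the cloud computing fleet200.

from dataclasses import dataclass
from typing import List, Set

@dataclass
class GroupParameters:
    """Hypothetical per-group deployment parameters (an AZ, region, or SDP phase)."""
    ordering: List[str]        # deployment order of the members of the group
    batch_size: int            # maximum members deployed to concurrently
    completion_pct: float      # fraction required before advancing to the next group
    bake_time_hours: float     # wait time before advancing to the next group

# Hypothetical redundant pairs; paired groups should not be deployed to concurrently.
REGION_PAIRS = {("Region1", "Region2"), ("Region3", "Region4")}

def violates_pairing(active: Set[str]) -> bool:
    """True if two groups that back each other up are being deployed to at once."""
    return any(a in active and b in active for a, b in REGION_PAIRS)

def may_advance(completed: Set[str], members: List[str], params: GroupParameters) -> bool:
    """True once enough members of the current group are done to move on."""
    done = sum(1 for m in members if m in completed)
    return done / len(members) >= params.completion_pct

az_pair_1 = ["AZ1:c1", "AZ1:c2", "AZ2:c1", "AZ2:c2"]
params = GroupParameters(ordering=az_pair_1, batch_size=2,
                         completion_pct=0.5, bake_time_hours=6)
print(may_advance({"AZ2:c1", "AZ2:c2"}, az_pair_1, params))  # True: 50% complete
print(violates_pairing({"Region1", "Region2"}))              # True: paired regions together

A deployment plan that passes checks like these at each level would respect the pairing and completion-percentage constraints illustrated inFIG.2.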
The clusters that are not circled inFIG.2would still be deployed to, but the progression of deployment may not have to wait for those clusters to complete before moving forward with other clusters. In addition, a concurrency limit (or a maximum batch size) may be another constraint that limits how many clusters, AZs, and/or regions may be concurrently updated together in batches. For example, a batch size limit of one for a particular AZ may constrain the deployment of a payload to one cluster at a time sequentially within that AZ. Similarly, any other deployment parameters may be limited per individual computer or groupings of clusters (e.g., individual AZ, region, and/or SDP phase). Deployment Planning Problem As demonstrated through examples above, there may be many variable deployment parameters and constraints (including constraints on such parameters) related to planning deployment. For example, decisions may be made regarding the ordering of clusters, AZs, regions, and/or SDP phases; the limit on the number of concurrent deployments for each cluster, AZ, and/or region; the minimum percentage completion required to proceed for each AZ, region, and/or SDP phase; whether concurrent deployment is permitted based on redundancies; and/or bake time that should elapse before proceeding with next group, etc. Such decisions should advance one or more objectives associated with deployment, such as reducing/minimizing time and/or reducing/minimizing risk. Historically, deployment parameters have been selected by human judgment and/or randomly, rather than being informed by data or artificial intelligence. That is, there is no systematic intelligence built into the process of planning deployment and no technological solutions for this task. For example, conventionally, human decisions may account for following the SDP framework100(proceeding through the staging phase104, the canary phase106, the pilot phase108, and then the broad phase114) and staying within the bounds of parameter constraints that account for redundancies and batch size limits. However, conventional deployment planning often involves choosing the deployment order of clusters within an AZ randomly, choosing the deployment order of AZs within a region randomly, and choosing the deployment order of regions within an SDP phase randomly. Although manual ordering of target clusters is theoretically possible, such a task is too cumbersome when planning a deployment to thousands of clusters. Moreover, conventionally, there is no way to automatically and intelligently achieve or even strive for optimal balancing of multiple objectives, such as speed and risk. For example, conventionally, there is no insight into the current state of the cloud fleet when developing a deployment plan. That is, the deployment plan does not take into consideration which clusters are relatively idle (which would have high deployment speed and low deployment risk) and which clusters are relatively busy (which would have low deployment speed and high deployment risk). Furthermore, multiple deployments that take too long can clog up the deployment queue, further delaying subsequent deployments, delaying proper validation of interoperability of multiple updates, delaying time-sensitive fixes of critical security vulnerabilities, and/or potentially endangering the security of the cloud fleet. 
Thus, there is a need for a technical solution that enables smart and automated ways to plan deployments such that objectives (such as speed and risk) can be automatically optimized using current status data. Also, it would be desirable if a deployment administrator's preferences for balancing multiple objectives could be automatically factored into orchestrating deployments. For instance, a security team may want to deploy a critical update to all computers in the entire fleet very quickly (e.g., within 5 days) even if doing so is risky. On the other hand, an operating system team may want to deploy a new feature to the entire fleet with minimal risk even if the deployment time takes a long time. Deployment Planning Solution Summary The present concepts provide a technical solution that enables a smart automated data-driven systematic approach to determining optimal deployment parameters that advance one or more objectives. A smart deployment system or method, consistent with the present concepts, may use one or more prediction models associated with the one or more objectives. Additionally, a smart deployment system or method, consistent with the present concepts, may use an optimization model to determine optimal parameters for orchestrating deployments. The term “optimal” as used herein does not necessarily mean only the best but can mean more efficient or better than manual determinations. In some implementations, the prediction models may be rule-based models and/or machine-learning models. For example, a time prediction model (also called a speed prediction model) may be trained to predict a deployment time (i.e., speed) associated with a payload and a cluster, and a risk prediction model may be trained to predict deployment risk associated with a payload and a cluster. These and additional prediction models may predict speed, risk, and additional objectives or factors that may be considered when planning deployment. The optimization model may use one or more optimization algorithms to find deployment parameters (e.g., the sequence of clusters, completion percentage, batch size, bake time) that optimize the one or more objectives (e.g., speed and risk). In some implementations, the optimization model may be a graph model that builds, for example, a multi-layer, multi-objective graph based on the predictions from the one or more prediction models. Thus, the optimization model may model the deployment planning problem as a graph problem, such as a shortest path graph problem. The optimization model may then use one or more optimization algorithms to find one or more optimal solutions to the shortest path graph problem. In some implementations, an optimization algorithm may find the truly optimal solution(s) to shortest path graph problems. In other implementations, an optimization algorithm may estimate or approximate the optimal solution(s) to shortest path graph problems. In certain scenarios where multiple competing objectives may be balanced, an optimization algorithm may estimate multiple non-dominated solutions along the Pareto front (explained below) in the objective space. By outputting multiple solutions, an administrator may be presented with multiple choices of deployment plans that correspond to the multiple solutions. Then, the administrator may choose one deployment plan that best fits her preferences for the desired objectives, such as the desired levels of speed and risk. 
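Because deployment time and deployment risk compete, the candidate plans surfaced to the administrator can first be reduced to the non-dominated set described above. The following is a minimal Python sketch of that Pareto filtering step, with made-up numbers; it is an illustration of the concept rather than the optimization model itself.

from typing import List, Tuple

def pareto_front(candidates: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the non-dominated (time, risk) pairs, with both objectives minimized.

    A candidate is dominated if another candidate is no worse on both
    objectives and strictly better on at least one.
    """
    front = []
    for i, (t_i, r_i) in enumerate(candidates):
        dominated = any(
            (t_j <= t_i and r_j <= r_i) and (t_j < t_i or r_j < r_i)
            for j, (t_j, r_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((t_i, r_i))
    return front

# Each tuple is (predicted deployment time, predicted risk) for one candidate plan.
plans = [(10.0, 0.9), (12.0, 0.4), (20.0, 0.1), (15.0, 0.5)]
print(pareto_front(plans))  # [(10.0, 0.9), (12.0, 0.4), (20.0, 0.1)]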
In turn, a deployment orchestrator may carry out the preferred deployment plan selected by the administrator to the extent possible. In some implementations, inputs to the smart deployment system may include a payload (or details about the payload), a set of target clusters (e.g., a list of target clusters and/or their groupings), limits on deployment parameters, and/or limits on objectives (e.g., the maximum deployment time). These inputs may be provided by an administrator and/or from a computer (e.g., the structure of the cloud fleet). In some implementations, outputs from the smart deployment system may include a cluster ordering (e.g., at multiple levels), other deployment parameters (e.g., optimal completion percentage, batch size, bake time), and predicted objective values (e.g., estimated overall deployment time and risk). The outputs from the smart deployment system may be presented to an administrator and/or provided to a computer. For instance, the administrator may select one of multiple deployment plans depending on her desired levels of speed and/or risk. The outputs from the smart deployment system and/or the administrator's selection may be provided to a deployment orchestrator to execute the deployment plan. Therefore, the smart deployment system may augment and/or work together with existing deployment orchestration mechanisms. Models The present concepts can provide deployment plan recommendations that are optimal (true optimal, approximately optimal, and/or otherwise preferred) with respect to one or more objectives. Therefore, using prediction models and/or optimization models, the present concepts may enable deployment time reduction and/or deployment risk reduction compared to conventional deployment planning methods. The present concepts may allow an administrator to customize the deployment plan, such as target cluster ordering, bake times, completion percentages at various stages, acceptable deployment time, acceptable deployment risk, concurrent deployments, etc. The present concepts further allow for early detection of problems while minimizing end user exposure to such problems. FIG.3shows a block diagram of a smart deployment system300, consistent with the present concepts. In some implementations, the smart deployment system300may include a feature engineering module302. The feature engineering module302may collect and/or receive input data from one or more data sources. For example, input data may include the type of machines in various clusters, geographical locations of clusters, the number of computers in clusters, the kind of payload (e.g., security patch, network update), the size of payload, networking capacity of clusters, current usage of clusters, capabilities of clusters (e.g., processor speed), etc. The feature engineering module302may receive the data, clean the data, curate the data, and format the data into a data format and structure that can be fed into prediction models304. In some implementations, the smart deployment system300may include one or more prediction models304that can predict one or more objectives associated with deployment. For example, in some implementations, the prediction models304may include two machine-learning models: a time prediction model306and a risk prediction model308for predicting speed and risk, respectively, based on historical data. 
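As one illustration of what the feature engineering module302might produce, the sketch below flattens raw payload and cluster attributes into a single numeric row for the prediction models304. The field names and values are hypothetical and stand in for whatever curated inputs a particular implementation collects.

from typing import Any, Dict

def build_feature_row(payload: Dict[str, Any], cluster: Dict[str, Any]) -> Dict[str, float]:
    """Hypothetical feature-engineering step: clean and flatten raw attributes
    about a payload and a target cluster into one numeric row for the models."""
    return {
        "payload_size_mb": float(payload.get("size_mb", 0.0)),
        "payload_is_security": 1.0 if payload.get("kind") == "security" else 0.0,
        "cluster_node_count": float(cluster.get("node_count", 0)),
        "cluster_utilization": float(cluster.get("utilization", 0.0)),  # current usage
        "cluster_network_gbps": float(cluster.get("network_gbps", 0.0)),
    }

row = build_feature_row(
    {"size_mb": 250, "kind": "security"},
    {"node_count": 800, "utilization": 0.35, "network_gbps": 40},
)
print(row)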
AlthoughFIG.3illustrates two prediction models, in alternative implementations, one prediction model may predict both time and risk by using both time prediction functions and risk prediction functions. In some implementations, the prediction models304may be multi-class hierarchical models (e.g., a combination of classification and regression). In one implementation, the prediction models304may use a gradient boosting algorithm, such as XGBoost. Alternatively, other predictive machine-learning models or techniques may be used. The time prediction model306may be trained using historical data from the feature engineering module302to predict estimated deployment time for a given payload and for a given cluster or for a group of clusters. For instance, the time prediction model306may learn that a payload of a certain size and of a certain type would take a certain estimated time to deploy on a certain cluster of a specific number of computers of a certain machine type having a certain usage status based on historical data of similar deployments in the past. In some implementations, deployment time may be defined as the time needed to complete deployment on all target clusters. Deployment time may be a function of at least deployment parameters and deployment constraints, and thus may be formulated as follows: deployment time=ƒ(parameters,constraints) The deployment parameters that the time prediction model306considers may include, but are not limited to, one or more of the following:1) A completion percentage for each SDP phase, region, and AZ before moving onto the next.2) A bake time for each SDP phase, region, and AZ before moving onto the next.3) A number of failures allowed before ceasing the deployment.4) A batch size within each AZ and region.5) A priority level of the deployment.6) A deployment health evaluation time. The deployment constraints that the time prediction model306considers may include any limitations on the deployment parameters, and may also include, but are not limited to, one or more of the following:1) The SDP framework that includes an ordering of phases.2) The deployment queue time, where the present deployment may contend with other deployments in the queue for the target clusters, which may be a popular target for deployments. The queue time may affect the overall deployment time.3) The deployment time for a cluster, which may be affected by the deployment life cycle (including deployment initiation, approval, queueing, and rollout), the number of nodes or computers in the cluster, the fault domain (i.e., the set of resources that may be affected by a fault caused during deployment), the virtual machines in the cluster, the hardware machines in the cluster, etc. Similarly, the risk prediction model308may be trained using historical data from the feature engineering module302to predict the estimated risk for a given payload and for a given cluster or a group of clusters. For instance, the risk prediction model308may learn that a payload of a certain size and of a certain type carries a certain level of risk to deploy on a certain cluster of a specific number of computers of a certain machine type having a certain usage status based on historical data of similar deployments in the past. In one implementation, deployment risk may be quantified in terms of annual interruption rate (AIR). In another implementation, risk may be measured by the number of virtual machines that rebooted due to a crash, failure, and/or error resulting from the deployment. 
Other measurements of risk, such as the duration of downtimes, as well as combinations of different measurements of risk, may alternatively or additionally be used. For instance, risk may be defined as an expected value of change in AIR. The estimation of change in AIR may be represented as a distribution, and the expected value may be used as the estimated risk value. In some implementations, risk may be a function of at least features and deployment parameters, and thus may be formulated as follows: risk=ƒ(features,deployment parameters) The risk prediction model308may consider one or more deployment parameters, such as those described above with respect to the time prediction model306. The features that the risk prediction model308considers may include, but are not limited to, one or more of the following:1) Characteristics of the payload.2) Characteristics of the target clusters, such as the number of computer nodes in the cluster; the number of virtual machines in the cluster; the type of machines; the type of operating system; and the availability zone, the region, and the SDP phase that the cluster belongs to.3) Saturation or coverage rate, which may include the percentage of hardware machines of a particular type (e.g., manufacturer, model, and configuration of machines) the rollout has already deployed to out of the total hardware machines of that particular type present in a target cluster. For example, if a rollout is targeting a particular cluster with three hardware machines of a certain type and the payload has been deployed to two of the three machines, then the saturation rate for this cluster would be ⅔ or 67%. Deployments can react differently to different types of hardware machines. The risk prediction model308may consider the historical results of past deployments to certain types of machines.4) Risk history of the team that developed and is deploying the payload, e.g., the historical AIR impact from previous rollouts created by the team.5) The risk performance of the current rollout of the payload, such as node failures caused by the current rollout. For example, if the current rollout has already deployed to 100 clusters so far, the AIR impact to those earlier 100 clusters can be used as features to predict the potential AIR impact to later clusters remaining as targets in the current rollout by analyzing the causes of faults and similarities between the earlier clusters and the later clusters. The above listed features as well as additional features previously discussed may be processed by the feature engineering module302. The features may be measured and/or estimated. Furthermore, one or more of these features may be manually inputted or manually adjusted, for example, to account for special conditions, such as clusters serving sensitive users. Furthermore, in some implementations, the risk prediction model308may be customizable to predict certain kinds of risk. For instance, the kinds of risk associated with networking updates may be different from the kinds of risk associated with security updates, and different from the kinds of risk associated with operating system updates. Therefore, the risk prediction model308may present a selection of different types of risk for an administrator to choose from, depending on the types of risk that the administrator wants to predict. 
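A hedged sketch of how the time prediction model306and the risk prediction model308might be fit follows, using a gradient-boosted regressor since the description mentions XGBoost as one option; any regressor with a fit/predict interface would work. The feature columns, the handful of training rows, and the hyperparameters are purely illustrative.

import numpy as np
from xgboost import XGBRegressor  # one option named above; other regressors also work

# Hypothetical historical rows: [payload_size_mb, node_count, utilization, saturation]
X_hist = np.array([
    [250, 800, 0.35, 0.10],
    [ 50, 120, 0.80, 0.50],
    [500, 300, 0.20, 0.90],
])
y_time_minutes = np.array([90.0, 45.0, 60.0])   # observed deployment times
y_risk_air     = np.array([0.02, 0.15, 0.01])   # observed change in AIR

time_model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_hist, y_time_minutes)
risk_model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_hist, y_risk_air)

# Predict speed and risk for a new (payload, cluster) combination.
x_new = np.array([[250, 600, 0.30, 0.00]])
print(time_model.predict(x_new), risk_model.predict(x_new))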
Additionally or alternatively, the risk prediction model308may include a customizer module that may enable an administrator to onboard risk scores, risk-related features, risk-related formulas, etc. In some implementations, the prediction models304may be retrained on a particular cadence (e.g., every 1 hour, 6 hours, 12 hours, 1 day, 3 days) to take account of the latest input data from the feature engineering module302, such that the prediction models304reflect the current state of the cloud computing fleet. For example, one cluster may be working on an intensive process and experiencing high utilization, such that time and risk associated with deploying to this cluster would be high. Thus, it may be advantageous to avoid deploying to this cluster for now, if possible, and instead deploy to a different cluster that is relatively idle and thus currently has high speed and low risk associated with it for deployment. Any suitable period may be chosen for retraining the prediction models304, such as every hour, every day, etc. Alternatively, retraining may be performed on demand and/or as needed as a certain amount of new input data becomes available to be processed by the feature engineering module302. As more deployments occur and thus more historical data are analyzed by the prediction models304, these machine-learning models can more accurately predict speed and risk associated with future deployments. That is, the prediction models304may learn which payloads and which clusters are high speed from past fast deployments, learn which payloads and which clusters are low speed from past slow deployments, learn which payloads and which clusters are low risk from past deployment successes, and learn which payloads and which clusters are high risk from past deployment failures. Such machine learning may enable the prediction models304to accurately estimate speed and risk associated with future deployment plans. In some implementations, accuracy metrics may be tracked for the prediction models304. For example, the actual deployment time may be measured and compared with the predicted deployment time to calculate a time prediction error. Similarly, the actual deployment risk may be measured and compared with the predicted deployment risk to calculate a risk prediction error. These time and risk prediction errors may be calculated for each cluster, AZ, region, and/or SDP phase. Furthermore, the errors may be fed into the prediction models304to adjust and improve the machine-learning models to make more accurate predictions in the future. In some implementations, the accuracy of the prediction models304may depend on the quality of the input data. Thus, the errors may help identify additional input data or better input data that should be gathered for the feature engineering module302and/or the prediction models304to improve prediction accuracy of the prediction models304. The smart deployment system300may include an optimization model310. The outputs from the prediction models304may be used as inputs to the optimization model310. The optimization model310may find one or more optimal solutions to deployment planning problems by tackling multiple objectives (e.g., speed and risk). In some implementations, the optimization model310may be a graph model that uses a graph-based algorithm to solve a graph problem derived from a deployment planning problem. The optimization model310may output one or more deployment plan recommendations.
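Returning to the accuracy metrics described above, a minimal sketch of tracking prediction error and deciding when to retrain might look like the following; the tolerance value and function names are hypothetical.

from typing import List

def mean_absolute_error(predicted: List[float], actual: List[float]) -> float:
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def should_retrain(predicted: List[float], actual: List[float], tolerance: float) -> bool:
    """Hypothetical trigger: retrain when the tracked prediction error for
    recent deployments exceeds a tolerance (in the same units as the objective)."""
    return mean_absolute_error(predicted, actual) > tolerance

# Predicted vs. measured deployment times (minutes) for recently completed clusters.
predicted_times = [90.0, 45.0, 60.0]
actual_times    = [120.0, 50.0, 65.0]
print(should_retrain(predicted_times, actual_times, tolerance=10.0))  # True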
A deployment plan recommendation may include, for example, an optimal sequence of clusters and/or optimal deployment parameters. The output sequence of clusters may include the recommended deployment order of clusters for each AZ, each region, and each SDP phase that are included in the overall target for the deployment. The output deployment parameters may include, for example, a recommended completion percentage, a recommended bake time, recommended batch size, etc., for each AZ, region, and/or SDP phase, where applicable. The output may further include expected overall deployment time (total completion time for deployment to all target clusters) and expected overall deployment risk (total level of impact on AIR). In some implementations, the output may include breakdown of deployment times and deployment risks for various clusters, AZs, regions, and SDP phases individually. The smart deployment system300may include or operate with a deployment orchestrator312. The deployment orchestrator312may apply and execute the deployment plan output by the optimization model310and/or selected by an administrator. Thus, the smart deployment system300can work with existing conventional deployment orchestrators. Existing deployment orchestration mechanisms may be augmented with artificial intelligence-based optimization, consistent with present concepts, to achieve faster and safer deployments to cloud computing fleets. Workflow FIG.4shows a diagram of a smart deployment workflow400, consistent with the present concepts. In box402, a new deployment request may be received by a deployment platform from a developer of a payload. The new deployment request may include deployment information, such as details about the payload, identification of the target clusters, and any deployment parameter constraints. In decision box404, a decision by a deployment administrator on whether to use a smart deployment system, consistent with the present concepts, may be received by the deployment platform. If the administrator opts not to use a smart deployment system, then in box406, the payload may be deployed to the target clusters using default orchestration rather than the smart deployment system. If the administrator opts to use a smart deployment system, then in box408, the deployment platform may check the current state of the target clusters and/or use the smart deployment system to generate at least one deployment plan recommendation. As described above, the smart deployment system may run machine-learning prediction models that can predict one or more objectives (such as speed and/or risk) associated with the payload and the target clusters. The smart deployment system may also run an optimization model to find one or more deployment plan recommendations. In box410, the deployment platform may output the one or more deployment plan recommendations. In one implementation, the administrator may review the deployment plan recommendation, and either accept or reject the deployment plan recommendation output by the deployment platform. Alternatively, the administrator may review multiple deployment plan recommendations, and select one of the multiple deployment plan recommendations, for example, depending on the preferred levels of speed and risk. For instance, if the administrator prefers to quickly deploy a critical update to the entire fleet as soon as possible, she may select a deployment plan recommendation that is high speed even if it is also high risk. 
Additionally or alternatively, an automated set of rules may be applied to accept, select, or reject one or more deployment plan recommendations. In box412, the accepted or selected deployment plan may be executed to roll out the payload to the intended target clusters. In box414, the deployment plan recommendation may be rejected either by the administrator or by automated rules. For example, the developer and/or the administrator may have requested a deployment time that cannot be achieved given the deployment constraints and the current state of the target clusters. In such a scenario, the deployment platform may inform the developer and/or the administrator to change her expectations. Alternatively, a default deployment plan may be implemented by the orchestrator without using the deployment plan recommendations generated by the smart deployment system. In some implementations, the entire deployment (i.e., to all target clusters) may not be planned at once but rather in stages. For example, the smart deployment system may generate deployment plan recommendations for the clusters in one availability zone, one region, or one SDP phase. After the administrator reviews the recommendations for the particular stage of the deployment plan and the selected plan is executed, the current state of the clusters may be checked (box408), and additional deployment plan recommendations may be generated for the next stage of deployment (box410). And this process loop may repeat (e.g., from AZ to AZ, region to region, and/or SDP phase to SDP phase) until the entire deployment is completed. Thus, the smart deployment workflow400may form loops that iterate through the multiple AZs, regions, and/or SDP phases to complete the entire deployment to all target clusters, allowing the administrator to validate the payload and select one or more stages of a deployment plan on an ongoing basis throughout the deployment. Cloud Computing Fleet FIG.5shows a cloud computing fleet500, in which the present concepts may be implemented. The cloud computing fleet500is just one example fleet that is described to illustrate the present concepts. Other fleets can be used as well. In some implementations, the target clusters for deployment may be hierarchically arranged in multiple levels540. The example illustrated inFIG.5shows that the clusters in the cloud computing fleet500are organized into a top level540(1), a middle level540(2), and a bottom level540(3), in order of least granular to the most granular. The top level540(1) in the cloud computing fleet500may include multiple SDP phases501, for example, a staging phase502, a canary phase504, a pilot phase506, and a broad phase508. One or more of these phases may be further broken down into multiple subphases for deployment. For example, the broad phase508may include a medium loaded subphase510, a heavy loaded subphase512, a first batch subphase514, and a second batch subphase516. In some implementations, a deployment may proceed by deploying a payload to each of these phases and subphases in sequential order (from left to right inFIG.5). Generally, a deployment may start with earlier phases that contain sets of fewer clusters and/or lower-risk clusters, and then progressively advance to later phases that contain sets of many clusters and/or higher-risk clusters. 
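Referring back to the workflow ofFIG.4, the stage-by-stage planning loop described above (checking the current state, generating recommendations, reviewing them, and executing the selected plan for one AZ, region, or SDP phase at a time) might be sketched as follows. Every callable here is a placeholder for the platform components discussed above, not part of the smart deployment workflow400itself.

def run_staged_deployment(stages, recommend, review, execute, check_state):
    """Hypothetical driver for the stage-by-stage loop.

    stages      : iterable of cluster groups (e.g., AZs, regions, or SDP phases)
    recommend   : fn(state, stage) -> candidate plans for that stage
    review      : fn(candidates) -> chosen plan, or None to stop / fall back
    execute     : fn(plan) -> None, rolls the payload out for the stage
    check_state : fn() -> current fleet state used to refresh predictions
    """
    for stage in stages:
        state = check_state()                 # box 408: check current cluster status
        candidates = recommend(state, stage)  # box 410: plan recommendations
        plan = review(candidates)             # administrator or automated rules
        if plan is None:
            break                             # box 414: rejected; handle separately
        execute(plan)                         # box 412: roll out this stage

run_staged_deployment(
    stages=["AZ1", "AZ2"],
    recommend=lambda state, stage: [f"plan-for-{stage}"],
    review=lambda candidates: candidates[0],
    execute=lambda plan: print("executing", plan),
    check_state=lambda: {},
)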
For example, the staging phase502may have five clusters with no production workload, and thus have low deployment risk, whereas the broad phase508may have thousands of clusters with end-user production workload, and thus have higher deployment risk. In some implementations, a bake time may be added between each phase and/or subphase of deployment. For instance, the bake time may allow the administrator to check the status of deployment to the current phase(s) and observe the impact of the payload before continuing with deployment to the subsequent phases. If a problem or error is detected during bake time, the deployment may be halted, and the problem or error can be addressed rather than spreading it to a broader base of clusters. In some implementations, to address the long tail deployment time issue mentioned above, rather than waiting until the current phase is 100% complete before moving on to the next phase, the deployment may progress to the next phase when a certain percentage of the current phase is complete. The remainder of the current phase may still continue (e.g., in the background) but would not hold up the deployment from advancing to the next phase. The SDP phase completion percentage may be computed as follows: (# of clusters completed in this SDP phase)/(total # of target clusters in this SDP phase). Alternatively, the SDP phase completion percentage may be computed based on the number of completed versus total regions or AZs rather than clusters. Therefore, the deployment parameters in this top level540(1) may include an SDP phase completion percentage and an SDP phase bake time. These deployment parameters may be constrained (e.g., maximum or minimum limits) by the administrator and/or optimized by the optimization model. The middle level540(2) in the cloud computing fleet500may include regions. In some implementations, one or more of the SDP phases (or subphases) may include one or more regions. To avoid clutter on the drawing sheet,FIG.5shows only Region1518, Region2520, Region3522, and Region4524inside the first batch subphase514. The deployment ordering of the regions may be optimized. Some of the regions may be deployed concurrently, i.e., in parallel, in batches. The total number of regions that may be deployed at the same time may be limited by a maximum batch size. A batch size may refer to how many concurrent deployments are allowed at the same time. Some of the regions may be paired into region pairs or grouped into trios or larger groups. Such grouped regions may be redundant backups for one another. For example, if the US-East region and US-West region back each other up, then those two regions may not be deployed concurrently so as to avoid the risk of both regions failing at the same time. For these grouped regions, deployment may proceed serially rather than in parallel, or at least not all of the regions in a group may be deployed at the same time. Similar to the SDP phase, deployment may pause for a certain bake time between each region, if so desired, to give time for evaluation of the status of the deployment before allowing it to progress further. In other implementations, deployment to regions within an SDP phase may proceed without any bake time. Similar to the SDP phases, a deployment may advance to the next region when the completion level for the current region reaches a certain percentage rather than waiting until the current region completes 100%.
The region completion percentage may be computed as follows: (# of clusters completed in this region)/(total # of target clusters in this region). Alternatively, the region completion percentage may be a ratio based on the number of completed versus total AZs. Therefore, the deployment parameters in this middle level540(2) may include a deployment ordering of the regions, a region completion percentage, a region batch size, and a region bake time. These deployment parameters may be constrained by the administrator and/or optimized by the optimization model. The bottom level540(3) in the cloud computing fleet500may include AZs. In some implementations, one or more of the regions may include one or more AZs. To reduce clutter on the drawing sheet,FIG.5only shows AZ7526inside Region4524, and only shows four clusters (C1528, C2530, C3532, and C4534) inside AZ7526. However, additional clusters and additional regions that are not illustrated inFIG.5may be included in the cloud computing fleet500. Some of the AZs in a region may be deployed concurrently, i.e., in parallel, in batches. Some regions may be small, e.g., with only one to five AZs, whereas other regions may be large, e.g., with hundreds of AZs. The small number of AZs in a small region may be deployed to concurrently, but deploying to all AZs in a very large region may be inadvisable due to higher risk. The total number of AZs that may be deployed to at the same time may be limited by a maximum AZ batch size. The deployment ordering of the AZs and/or the AZ batch size may be optimized. Some of the AZs may be grouped in twos or threes or larger numbers. Such grouped AZs may be redundant backups for one another. For these grouped AZs, deployment may proceed serially rather than in parallel, or at least not all of the AZs in a group may be deployed at the same time. Similar to the SDP phase, deployment may pause for a certain bake time between each AZ, if so desired, to give time for evaluation of the status of the deployment before allowing it to progress. In other implementations, deployment to AZs within a region may proceed without any bake time. Similar to the regions, a deployment may advance to the next AZ when the completion level for the current AZ reaches a certain percentage rather than waiting until the current availability zone completes 100%. The availability zone completion percentage may be computed as follows: (# of clusters completed in this AZ)/(total # of target clusters in this AZ). Therefore, the deployment parameters in this bottom level540(3) may include a deployment ordering of the AZs, an AZ completion percentage, an AZ batch size, and an AZ bake time. These deployment parameters may be constrained by the administrator and/or optimized by the optimization model. In some implementations, the optimization model may start by optimizing at the most granular level, such as the AZ level or the bottom level540(3) in this example, and then progressively move up to less granular levels, such as the region level and then the SDP phase level. The optimization model may take into account the redundancies, maximum batch sizes, minimum bake times, minimum completion percentages, and any other constraints imposed on the deployment plans. For example, an administrator may specify a maximum batch size of 100 clusters, such that the optimization model may find an optimal batch size (e.g., 40 clusters) for a particular AZ and find another optimal batch size (e.g., 30 clusters) for another AZ, each within the 100-cluster limit.
As another example, an administrator may specify a minimum percentage completion of 50% as another constraint, such that the optimization model may find an optimal completion percentage (e.g., 80%) before moving on to the next deployment. There may be one or more deployment parameter constraints at every cluster, every AZ, every region, every SDP phase, every level, overall fleet, etc. Thus, the optimization model may be a multi-layer, multi-objective, constrained optimization model. As described above, a group of nodes or machines may form a cluster. A group of clusters may form an availability zone. A group of availability zones may form a region. A group of regions may form an SDP phase. And a group of SDP phases may constitute the entire fleet (or at least the entire targeted clusters within the fleet). Other hierarchical structures (e.g., with different names and/or different numbers of levels) are contemplated as well. Furthermore, although the present concepts are described as orchestrating deployment at the cluster level, it is possible to orchestrate deployment at the computer node level or any other level (e.g., application level, service level, AZ level). The optimization model may find deployment parameters that optimize over the multiple objectives at multiple layers, from most granular layer to the least granular layer. The layers in the optimization model may correspond to the levels in the cluster structure. Thus, the nested, parent-child relationship of clusters in the deployment structure may add multiple layers to the optimization problem. Graph Problem Consistent with some implementations of the present concepts, the optimization model may represent the deployment planning problem as a multi-layer, multi-objective graph problem. The optimization model may generate different graphs at the different layers, from the most granular layer to the least granular layer. In some implementations, the deployment optimization problem may be modeled as a shortest path directed graph problem and the goal may be to find the shortest path in the graph. For example, the optimization model may start by creating different sequences of target clusters in an AZ to generate a graph. FIG.6shows a graph600, consistent with the present concepts. The graph600is a very simplified example that helps illustrate the present concepts using a miniature deployment planning problem for an AZ having only two clusters. The graph600may include a source vertex602as the starting point of the multiple paths that are possible in the graph600. The source vertex602may be a starting vertex that is empty, i.e., no clusters have been deployed to at this point. The graph600may include one or more state vertices604. Each state vertex604may represent a set of completed clusters (i.e., clusters to which the payload has been deployed up to that state). In this simplified example, the AZ may include two clusters C1 and C2. Accordingly, the graph600may include three state vertices: State_i vertex604(1) for deploying first to cluster C1 only, State_j vertex604(2) for deploying first to cluster C2 only, and State_k vertex604(3) for having deployed to both clusters C1 and C2 (either sequentially or concurrently, depending on the path). The graph600may include one or more sink vertices (Sink_x606(1) and Sink_y606(2)) as ending points of the multiple paths that are possible in graph600. Each sink vertex606may represent a termination vertex for the AZ.
Each sink vertex606may include a completion percentage for the AZ and/or a bake time. The graph may include one or more directed edges608connecting the vertices. In one implementation, each edge (608(1)-608(5)) leading to a state vertex604may include three types of information: (1) a deployment action, i.e., a set of clusters being deployed to, from the previous state to the next state, (2) a predicted time associated with the deployment action, and (3) a predicted risk associated with the deployment action. A deployment action may include one or more clusters that are being simultaneously deployed to. A predicted time value may be a duration of time that a time prediction model estimates it will take to deploy to the one or more clusters associated with the deployment action. Where multiple clusters are simultaneously deployed to, the predicted time value may be an aggregate (e.g., the sum, the maximum, the average, or any other formula) of the predicted times for deploying to the individual clusters. In one implementation, the predicted time value may be a positive value measured in any applicable time units, such as seconds, minutes, hours, days, etc. For example, the time prediction model may estimate s1 minutes to deploy to cluster C1, s2 minutes to deploy to cluster C2, and s3 minutes to concurrently deploy to both clusters C1 and C2. A predicted risk value may be, for example, a risk metric estimated by a risk prediction model for deploying to the one or more clusters associated with the deployment action. Where multiple clusters are simultaneously deployed to, the predicted risk value may be an aggregate (e.g., the sum, the maximum, the average, a weighted sum, or any other formula) of the predicted risk values for deploying to the individual clusters. In one implementation, the predicted risk value may be measured in any applicable risk metrics and units, such as AIR, percentage downtime, number of reboots, duration of downtime, uptime availability percentage, user satisfaction score, etc. In some implementations, the risk value may be calculated using a function based on one or more of these risk metrics. The predicted risk value may be negative where the payload improves AIR. In the example illustrated inFIG.6, the risk prediction model may estimate a risk value of r1 to deploy to cluster C1, a risk value of r2 to deploy to cluster C2, and a risk value of r3 to concurrently deploy to both clusters C1 and C2. The predicted time values and/or the predicted risk values assigned to the edges may be derived from the prediction models, discussed above, based on applicable features, deployment parameters, and/or the current status of the fleet. Thus, an edge that terminates at a state vertex may represent a transition from one state to another state by having executed the associated deployment action. Other edges may lead from a state vertex to a sink vertex. An edge that leads to a sink vertex may not include a deployment action. Such an edge may still include a predicted time value and/or a predicted risk value associated with different completion percentages and/or different bake times assigned to the sink vertex. In some implementations, a source vertex may be connected by an edge to any state vertex. Furthermore, an edge may connect any State_a vertex to any State_b vertex so long as the set of clusters in State_b is a superset of the set of clusters in State_a. 
Additionally, a state vertex may be connected by an edge to a sink vertex so long as the set of clusters included in the state vertex equals or exceeds the completion percentage in the sink vertex. In one implementation, each state vertex may be connected by an edge to at most one sink vertex having the highest completion percentage that the state vertex satisfies among one or more sink vertices in the graph. For example, inFIG.6, State_k vertex is connected only to Sink_y vertex of 100% but not to Sink_x vertex of 50%. Therefore, the graph600is a simple example that models an AZ with only two clusters C1 and C2. The possible orders of clusters for deployment are (1) C1-C2, (2) C2-C1, and (3) C1/C2, where the hyphen separates sequential deployments and the forward slash represents concurrent deployments. Here, the batch size may be one or two. Furthermore, the completion percentage may be 50% or 100% (represented by the two sink vertices in the graph600) before the deployment advances to the next AZ of clusters. Although not shown in the simplified example of the graph600, the graph600may also account for different bake times (e.g., none, 6 hours, 1 day, 3 days). For example, the different bake time parameters may be modeled by different sink vertices that include different bake time values (either separately or together with different completion percentage values). As explained above, one or more deployment parameters may have constraints, and the graph600may reflect those constraints. For example, the batch size may limit the maximum number of clusters that can be deployed together as modeled by the deployment actions associated with edges. For example, if the maximum batch size is five, then no edge in the graph may include more than five clusters. Additionally, if the minimum completion percentage is 70%, then no sink vertex in the graph may include a completion percentage value less than 70%. Similarly, if the minimum bake time is two days, then no sink vertex in the graph may include a bake time value that is less than two days. As illustrated by this example, the deployment planning problem may be modeled by a graph that includes different paths representing different orderings of clusters, different batch sizes, different completion percentages, and/or different bake times. Although not illustrated inFIG.6, other deployment parameters may be modeled by a graph. The graph may account for any limits or constraints placed on the acceptable ranges of these parameters that may have been specified by the administrator, policies, best practices, and/or existing physical limitations. The overall deployment time for a particular path may be an aggregate (e.g., sum) of the predicted time values associated with the edges along the path. Similarly, the overall deployment risk for a particular path may be an aggregate of the predicted risk values associated with the edges along the path. Thus, the shortest path in the graph that completes all of the target clusters, i.e., the optimal path that minimizes the overall deployment time and the overall risk, would represent the optimal deployment plan for this AZ. Furthermore, multiple graphs may be aggregated and simplified to generate graphs at multiple layers that encompass the entirety of the target clusters. In one implementation, a graph, similar to the example inFIG.6, may be generated for another AZ of clusters, and so on, until a graph has been generated for all target AZs in a region.
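For an AZ as small as the example ofFIG.6, the graph can be built and searched exhaustively. The sketch below encodes the source, state, and sink vertices with edges carrying predicted time and risk values, and then scores each complete path with a weighted sum of the two objectives. The weights and the s/r values are made up, and collapsing the two objectives into one weighted sum is a simplification of the multi-objective treatment described elsewhere in this description.

# Hypothetical per-edge predictions for the two-cluster example of FIG. 6:
# s-values are predicted times, r-values are predicted risks.
s1, s2, s3 = 30.0, 20.0, 35.0       # deploy C1, deploy C2, deploy C1/C2 together
r1, r2, r3 = 0.02, 0.01, 0.05

# Each edge: (from_vertex, to_vertex, predicted_time, predicted_risk)
edges = [
    ("source", "{C1}",    s1, r1),   # deploy to C1 first
    ("source", "{C2}",    s2, r2),   # deploy to C2 first
    ("source", "{C1,C2}", s3, r3),   # deploy to C1 and C2 concurrently
    ("{C1}",   "{C1,C2}", s2, r2),   # then deploy to C2
    ("{C2}",   "{C1,C2}", s1, r1),   # then deploy to C1
    ("{C1}",   "sink_50%",  0.0, 0.0),
    ("{C2}",   "sink_50%",  0.0, 0.0),
    ("{C1,C2}", "sink_100%", 0.0, 0.0),
]

def paths(frm, targets, prefix=None):
    """Enumerate every path from `frm` to any vertex in `targets` (small graphs only)."""
    prefix = prefix or []
    if frm in targets:
        yield prefix
        return
    for (a, b, t, r) in edges:
        if a == frm:
            yield from paths(b, targets, prefix + [(a, b, t, r)])

def cost(path, time_weight=1.0, risk_weight=100.0):
    """Scalarized cost: weighted sum of total predicted time and total predicted risk."""
    total_t = sum(t for (_, _, t, _) in path)
    total_r = sum(r for (_, _, _, r) in path)
    return time_weight * total_t + risk_weight * total_r

best = min(paths("source", {"sink_100%"}), key=cost)
print([(a, b) for (a, b, _, _) in best])  # the lowest-cost path to 100% completion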
Then, the multiple graphs of AZs may be connected together to model multiple deployment options across the multiple AZs in the region. Similarly, for the next less granular layer, multiple graphs may be generated to model multiple regions in an SDP phase, and then connected together to model multiple deployment options across the multiple regions in the SDP phase. Again, at the next less granular layer, multiple graphs that model multiple SDP phases may be connected together. FIG.7shows an aggregated graph700, consistent with the present concepts. InFIG.7, a graph702may model Availability Zone1(AZ1), and a graph704may model Availability Zone2(AZ2). The graph702and the graph704may be connected together to build the aggregate graph700that models a transition between Availability Zone1and Availability Zone2. First, every sink vertex708in the graph702of AZ1may be connected by an edge710to the source vertex712in the graph704of AZ2, as shown inFIG.7. These edges710allow for paths from AZ1to AZ2. Next, every sink vertex712in the graph704of AZ2may be connected by an edge to the source vertex714in the graph702of AZ1. (These edges from AZ2to AZ1are not shown inFIG.7.) These edges allow for paths from AZ2to AZ1. This simplified example inFIG.7includes only two AZs. However, where there is a greater number of AZs in a region, every sink vertex of every AZ may be connected to the source vertices in every other AZ, so long as such ordering of AZs is not constrained. These connections from sink vertices to source vertices may create cyclical loops in the aggregated graph700. To avoid cyclical paths, the optimization model may keep track of previously visited vertices (or just the source vertices) to avoid traversing the same vertices multiple times. Alternatively, a specific ordering of the availability zones may be enforced using a greedy approach, which will be explained in more detail below. In some implementations, graphs may be generated at each granularity level. That is, the graph702and the graph704may model AZs, and thus contain vertices that represent completed clusters. Similarly, additional graphs may be generated that model regions (where the vertices may represent completed AZs) and that model SDP phases (where the vertices may represent completed regions). For example, the graph702that models AZ1and the graph704that models AZ2may be aggregated to form the aggregated graph700, as shown inFIG.7. The aggregated graph700may be simplified to generate a simplified graph706at the region layer, as shown at the bottom ofFIG.7. In the simplified graph706, a vertex may represent one or more completed AZs up to that state; and an edge may represent a deployment action of a set of AZs to transition from one state to another state, an estimated deployment time value, and an estimated deployment risk value. This process would create another layer of graphs that model the region layer. The deployment time and risk values in the edges of the graphs at the region layer may be aggregates of the deployment time and risk values in the graphs at the AZ layer. Furthermore, this aggregating and simplifying process may be repeated to generate yet another layer of graphs that model the SDP phase layer. For example, at the SDP phase layer, each state vertex may include a set of completed regions, and the edges may include estimated aggregate deployment time values and estimated aggregate deployment risk values.
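A minimal sketch of the sink-to-source aggregation step described above follows, assuming a hypothetical edge-list representation of each AZ graph. A search over the combined graph would additionally need to track which AZs have already been visited in order to avoid the cyclical paths noted above.

def connect_zone_graphs(zone_graphs):
    """Hypothetical aggregation step: link every sink vertex of one AZ graph to
    the source vertex of every other AZ graph with a zero-cost transition edge.
    zone_graphs maps an AZ name to a dict with 'source', 'sinks', and 'edges'
    entries (an illustrative format, not the patent's data structure)."""
    combined = []
    for az, g in zone_graphs.items():
        combined.extend(g["edges"])
    for az_a, g_a in zone_graphs.items():
        for az_b, g_b in zone_graphs.items():
            if az_a == az_b:
                continue
            for sink in g_a["sinks"]:
                # Zero time/risk edge that allows a path to continue into the next AZ.
                combined.append((sink, g_b["source"], 0.0, 0.0))
    return combined

az_graphs = {
    "AZ1": {"source": "AZ1:source", "sinks": ["AZ1:sink_100%"],
            "edges": [("AZ1:source", "AZ1:{C1,C2}", 35.0, 0.05),
                      ("AZ1:{C1,C2}", "AZ1:sink_100%", 0.0, 0.0)]},
    "AZ2": {"source": "AZ2:source", "sinks": ["AZ2:sink_100%"],
            "edges": [("AZ2:source", "AZ2:{C3,C4}", 40.0, 0.03),
                      ("AZ2:{C3,C4}", "AZ2:sink_100%", 0.0, 0.0)]},
}
edges = connect_zone_graphs(az_graphs)
print(len(edges))  # 4 original edges + 2 cross-AZ transition edges = 6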
Similar to the graphs in the AZ layer, graphs in the region layer and in the SDP phase layer can also include source vertices, include sink vertices with varying completion percentage values and/or bake time values, and model batch size limitations, etc. FIG. 8 shows a transition graph 800, consistent with the present concepts. The transition graph 800 illustrates a simplified example of a transition from the canary phase to the pilot phase, where the canary phase includes two clusters C1 and C2, and the pilot phase includes two clusters C3 and C4. In this example, the maximum batch size may be two for both phases, such that up to two clusters can be deployed to simultaneously. Furthermore, the minimum completion percentage may be 50% for the canary phase, so that deployment can advance to the pilot phase when deployment to the canary phase is 50% or more complete. The transition graph 800 may include one or more transition vertices 802 (illustrated in the center of FIG. 8) to model the transition between the two phases. A transition vertex 802 may be a sink vertex for the previous SDP phase and a source vertex for the next SDP phase. Each transition vertex 802 may include a completion percentage value and/or a bake time value. In some implementations, an edge 804 leading to a transition vertex 802 may include an associated deployment time value and an associated deployment risk value to account for time and risk during the phase transition. Accordingly, multi-layer graphs may be generated for the multiple hierarchical levels of cluster groupings (e.g., AZs, regions, and SDP phases). The nested parent-child relationship among SDP phases, regions, AZs, and clusters in the deployment structure may form a multi-layer graph problem in the aggregated graphs. Optimization Consistent with some implementations of the present concepts, the optimization model may use one or more optimization algorithms to find the shortest (or optimal) path in the graph that results in minimum estimated deployment time and/or minimum estimated deployment risk. Any suitable optimization algorithm can be used to solve the shortest path graph problem or to estimate an optimal or efficient solution to it. During experimental testing of the present concepts, multiple optimization algorithms were evaluated and/or tested to determine their performance (e.g., in terms of computation times and solution optimality). Their pros and cons will be discussed below. In some implementations, the shortest path graph problem may be solved layer by layer, starting from the most granular layer and then working up to the least granular layer. In some implementations, the choice of optimization algorithm may be the same or different for each AZ, each region, and each SDP phase. The brute force method of solving the shortest path graph problem may involve checking every single possible path in the entire graph, one by one. However, the computational complexity of solving the shortest path graph problem may be NP complete. For instance, the size of the graph, and thus the number of possible paths in the graph, may grow exponentially with the number of clusters within an AZ and with the number of regions within an SDP phase, and/or grow factorially with the number of AZs within a region and with the number of regions within an SDP phase. The complexity of the graph may also grow with the number of parameters being optimized, such as the batch size, completion percentage, bake time, etc. 
To further complicate the graph problem, the optimization model may optimize for multiple objectives. Here, in this example, the multiple objectives (i.e., speed and risk) contradict each other. That is, the faster the deployment, the riskier the deployment. Accordingly, for real-life scenarios where the number of target clusters can be very large (e.g., multiple thousands), the graph may explode to an enormous size, containing a very large number of state vertices and an even larger number of possible paths. Exhaustively checking every possible solution using the brute force method may be impractical due to the long computational time required before deployment can commence. Moreover, even if the brute force method could be used, by the time the optimal solution is determined, the state of the cloud computing fleet may have changed, possibly rendering the solution less than optimal. For these reasons, the brute force method may be useful for a very small number of target clusters (i.e., a very small sized deployment), but the brute force method may not be a feasible choice for a larger number of target clusters. Nonetheless, the brute force method may be useful for setting a baseline performance against which to compare and evaluate other techniques (i.e., multiple optimization algorithms). Therefore, different techniques may be more apt for a large number of target clusters. Certain techniques may reduce the computational complexity of the shortest path graph problem. For example, the size of the graph may be reduced by applying bounds on the deployment parameters, and certain optimization algorithms may prune the cluster deployment orders. Moreover, some optimization algorithms may efficiently estimate or approximate the shortest path (rather than finding the actual shortest path) while generating only a limited graph (without generating the entire graph). In some implementations, the concept of a greedy approach, such as an extreme greedy method, may be used. For example, the multiple objectives (i.e., speed and risk) may be combined or scalarized using a function with weights into one objective. Then, the clusters may be ranked by their associated weighted objective, calculated using the prediction models. Furthermore, because generating the entire graph ahead of time and solving for an optimal solution to the entire graph is very computationally expensive, the optimization model may leverage the multi-layered nature of the graph problem by solving a smaller graph at the most granular layer first and applying the greedy approach. At every step, the top ranked (i.e., the lowest weighted) cluster or combination of clusters may be selected. For instance, the optimization model may first solve the smaller graphs for the clusters within the same AZ individually, and then the optimization model may search through the optimal solutions from only these graphs when solving for the aggregated levels at the less granular layers. Therefore, the greedy approach may quickly prune or eliminate non-optimal solutions (or paths that are less likely to be optimal) as early as possible at each of the layers. This greedy approach may significantly reduce the computational complexity of the shortest path graph problem, since fewer than all possible permutations of deployment orders and parameters need to be graphed and evaluated. The greedy approach may typically be very fast and may waste less computational resources on checking solutions that are unlikely to be optimal. 
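As a rough illustration of the scalarization step, the following sketch combines the two predicted objectives into a single weighted score and greedily ranks clusters by that score. The weight value and the example time and risk estimates are hypothetical; in practice the estimates would come from the prediction models.

    # Minimal sketch (assumption, not the actual implementation): scalarize the
    # two predicted objectives into one weighted score and greedily rank clusters.
    def weighted_score(time_estimate, risk_estimate, weight=0.5):
        # Lower is better for both objectives, so a lower score is preferred.
        return weight * time_estimate + (1.0 - weight) * risk_estimate

    def greedy_cluster_order(predictions, weight=0.5):
        # predictions: dict of cluster -> (predicted_time, predicted_risk),
        # values assumed to come from the trained prediction models.
        return sorted(predictions,
                      key=lambda c: weighted_score(predictions[c][0], predictions[c][1], weight))

    # Example with made-up estimates: C2 is ranked first (fast and low risk).
    predictions = {"C1": (30.0, 0.8), "C2": (10.0, 0.2), "C3": (20.0, 0.9)}
    print(greedy_cluster_order(predictions))  # ['C2', 'C3', 'C1']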
However, because the greedy approach selects only the local best option at the current step, the greedy approach may miss the impact on the overall solution, miss the big picture, and eliminate solutions that may not appear optimal in the current step but may end up being part of the global optimal as a whole. That is, the greedy approach may be too shortsighted by selecting only the local optimal at each step while evaluating only a piece of the entire problem. Thus, the greedy approach may lose the guarantee of finding the actual global optimal solution. Nonetheless, the solutions found by the greedy approach can closely approximate the actual optimal solutions in most scenarios and thus may be acceptable for most real-life situations. As explained above, the greedy approach may only work with scalarization, where multiple objectives have to be combined into one objective. Greedy-Brute Force Algorithm Alternatively, consistent with some implementations of the present concepts, a greedy-brute force method may be used to solve the shortest path graph problem. A greedy-brute force method may combine the concept of the greedy approach and the brute force method. In some implementations, the greedy-brute force method, applied to the present concepts, may solve for solutions at the most granular layer first and then work its way up to the least granular layer. At a high level, the optimization model using the greedy-brute force method may rank clusters by dominance (this is the greedy approach), then iterate through all possible permutations within the cluster ranking determined by dominance (this is the brute force method), and then keep the non-dominated solutions. The concept of dominance will be explained below. The optimization model, consistent with the present concepts, may consider multiple objectives, for example, speed and risk. Some administrators may want to deploy a small-change payload quickly to meet a deadline and are thus willing to take on the increased risk of fast deployment. Other administrators, on the other hand, may want to deploy a new but non-critical feature and thus not be in a hurry but rather prefer to minimize risk and make sure the deployment succeeds without problems. Different administrators for different payloads may have different preferences and tolerances for the multiple objectives. Thus, to address these preferences, the optimization model may output multiple optimal solutions, from which an administrator may choose, rather than outputting just one optimal solution. FIG.9shows a graph plot900of multiple solutions, consistent with the present concepts. The graph plot900is a simplified example that includes only five potential solutions. The axes in graph plot900may represent the multiple objectives. In this example, the y-axis represents deployment time and the x-axis represents deployment risk, which are being minimized. Five solutions numbered 1 through 5 have been plotted in the graph plot900according to their associated time and risk values. Ideally, the best or optimal solutions would be plotted towards the bottom-left corner near the origin of the graph plot900, as such solutions would have minimum time and minimum risk. However, many multi-objective optimization problems may not have one optimal solution that is better than all other solutions. Rather some solutions may be better than (or dominate) other solutions in terms of both time and risk. 
Other solutions may be better only with respect to one objective (e.g., time) but worse with respect to the other objective (e.g., risk). Therefore, in general, Solution_A dominates Solution_B if Solution_A is no worse than Solution_B with respect to all objectives. Furthermore, a solution that is not dominated by any other solution is a non-dominated solution or a Pareto-optimal solution. A boundary or a curve defined by the set of non-dominated solutions is called a Pareto-optimal front. For example, in the graph plot 900, Solution_1 dominates Solution_3, because Solution_1 has better time and better risk than Solution_3. Similarly, Solution_4 dominates Solution_5, because Solution_4 has better time and better risk than Solution_5. However, Solution_2 does not dominate Solution_1 or Solution_4, because even though Solution_2 has lower risk, it has higher time compared to Solution_1 or Solution_4. In this example, Solution_1, Solution_2, and Solution_4 are considered non-dominated solutions, because no other solution dominates any of them. As can be seen in FIG. 9, these non-dominated solutions (Solution_1, Solution_2, and Solution_4) trace a Pareto-optimal front 902. The computational complexity of finding the true Pareto-optimal front (i.e., the complete set of all non-dominated solutions) of a multi-objective shortest path graph problem may be NP complete. However, the optimization model, consistent with the present concepts, may use one or more optimization algorithms that can estimate the Pareto front much faster. The computational complexity of using the brute force method, which searches through the entire solution space as discussed above, to solve a graph that models multiple clusters within one AZ may be O(2^(N-1) × N!), where N is the number of clusters in the graph. By using the greedy-brute force method to quickly prune the search space, the computational complexity may be reduced to O(2^(N-1)). In some implementations, the greedy-brute force method may involve the following steps (a code sketch of these steps follows the list):
1) Compute the dominance rank of each cluster. The dominance rank of a cluster may be the number of other clusters that dominate that cluster. Thus, the dominance rank of each cluster can be quickly determined by the prediction models based on the associated time and risk. For instance, a non-dominated cluster may have a dominance rank of zero, whereas another cluster with a high deployment time estimate and a high deployment risk estimate may have a high dominance rank value.
2) Generate a graph in which the only edges allowed between two state vertices are those that satisfy two conditions: (1) a directed edge from a state vertex i to a state vertex j is allowed if the clusters in state i are a subset of the clusters in state j, and (2) the dominance rank of the clusters in state i is no higher than the dominance rank of the clusters in state j. Thus, the deployment ordering of the clusters may proceed from low-ranking clusters (i.e., low time and low risk) to high-ranking clusters (i.e., high time and high risk). Thus, the graph would not include other paths (i.e., other edges) that violate these two conditions. This greedy approach may fix the ordering of the clusters, which would prune away other paths that may have included disallowed edges, thus reducing the computational complexity.
3) Exhaustively search through the solution space using the brute force method on the limited allowed paths in the graph. 
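The following sketch is one possible rendering of these steps for a single small AZ. The dominance ranking and the enumeration of batch splits over the fixed order follow the steps above, while the way a batch's time and risk are aggregated (slowest member for time, size-weighted risk) is a purely illustrative assumption, as are the example estimates.

    # Rough sketch of the greedy-brute force idea: rank clusters by dominance,
    # enumerate all batch splits of that fixed order, and keep the non-dominated
    # plans. Aggregation of batch time/risk is an illustrative assumption only.
    def dominance_rank(predictions):
        # Number of other clusters that dominate each cluster (lower time AND risk).
        ranks = {}
        for c, (t, r) in predictions.items():
            ranks[c] = sum(1 for o, (ot, orisk) in predictions.items()
                           if o != c and ot <= t and orisk <= r and (ot, orisk) != (t, r))
        return ranks

    def batchings(order):
        # All ways to split the fixed cluster order into consecutive batches
        # (2^(N-1) possibilities for N clusters).
        n = len(order)
        for bits in range(2 ** (n - 1)):
            batches, start = [], 0
            for i in range(n - 1):
                if bits & (1 << i):
                    batches.append(order[start:i + 1])
                    start = i + 1
            batches.append(order[start:])
            yield batches

    def evaluate(batches, predictions):
        # Purely illustrative aggregation: sequential batches add time; a concurrent
        # batch takes as long as its slowest member and is assumed riskier when larger.
        total_time = sum(max(predictions[c][0] for c in batch) for batch in batches)
        total_risk = sum(max(predictions[c][1] for c in batch) * len(batch) for batch in batches)
        return total_time, total_risk

    def non_dominated(solutions):
        # solutions: list of (time, risk, plan); keep those dominated by no other.
        return [s for s in solutions
                if not any(o[0] <= s[0] and o[1] <= s[1] and o[:2] != s[:2] for o in solutions)]

    predictions = {"C1": (10.0, 0.2), "C2": (20.0, 0.5), "C3": (30.0, 0.9)}
    ranks = dominance_rank(predictions)
    order = sorted(predictions, key=ranks.get)
    candidates = [(*evaluate(b, predictions), b) for b in batchings(order)]
    # Each option trades deployment time against risk; all four are non-dominated here.
    print(non_dominated(candidates))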
Accordingly, the greedy-brute force method may first determine the deployment order of clusters based on a greedy ranking and then search through all options for different deployment batch sizes based on the fixed cluster order. For example, for an AZ containing only three target clusters, the greedy-brute force method may rank the three clusters by dominance in the order C1, C2, C3. This may be the greedy portion of the technique, ranking from the fastest and least risky cluster first to the slowest and most risky cluster last. Then, a brute force method may be applied to check all possible combinations of paths without changing the rank-based order of clusters:
C1-C2-C3
C1/C2-C3
C1-C2/C3
C1/C2/C3
And then, the non-dominated solutions may be kept as the best solutions. Accordingly, there may be no need to generate the entire graph but only a subset of the entire graph that conforms to the dominance ranking. The other portions of the graph and other possible solutions that do not conform to the dominance ranking may not be evaluated. The greedy-brute force method may work with a non-dominated approach and may be fast for small graphs. However, as mentioned above, the computational complexity may still grow exponentially with the number of clusters. Therefore, the greedy-brute force method may be better suited for smaller graphs than larger graphs. Evolutionary Algorithm Alternatively, consistent with some implementations of the present concepts, an evolutionary algorithm may be used to solve the shortest path graph problem. An evolutionary algorithm, such as the non-dominated sorting genetic algorithm version 2 (“NSGA-II”), may be a genetic algorithm that mimics natural evolution of species. Generally, an evolutionary algorithm may randomly pick an initial parent population, evolve the parent population to create a new generation of offspring population, select the best individuals from the combination of the parent population and the offspring population to survive while killing off the rest, and repeat the process for multiple iterations by treating the surviving population as the new parent population for further evolution. This evolution process may be repeated until there is a convergence in the population. Applying the evolutionary algorithm to the shortest path graph problem, a set of randomly selected paths may be generated as the initial parent population, these paths may be evolved to generate an offspring population of different paths, the best paths (based on their associated speed and risk) may be selected for survival, and these steps may be repeated using the surviving paths as the new parent population until the new generations of paths converge to approximate a Pareto-optimal front. In some implementations, the evolutionary algorithm may involve the following steps:
1) Initialize a parent population of randomly generated solutions of paths (s_0^0, s_1^0, s_2^0, . . . , s_n^0) ∈ S^0.
2) Evolve the parent population paths (e.g., through mutation and/or crossover/recombination, which will be explained below) to breed an offspring population of new paths.
3) Compute a dominance rank value and a distance value for each path in the parent population and in the offspring population. A dominance rank value may be the number of other solutions that dominate the path. A dominance rank value of zero may represent a non-dominated solution. A distance value may reflect the spacing between the path and other solutions, which may be inversely related to density or proximity. 
In one implementation, the distance value may be calculated as follows:

distance_i = \sum_{k=1}^{m} \left( \frac{O_{i-1}^{k} - O_{i+1}^{k}}{\max(O^{k}) - \min(O^{k})} \right)^{2}

where O_i^k is the kth objective value for the ith individual in the population. The distance value may represent whether a solution is in a sparse or a crowded area in the objective space and may be measured by the square sum of the normalized distances between the neighbors.
4) Sort all the paths in the parent population and the offspring population together in ascending order of the dominance rank value first, and then, where dominance rank values are tied, sort in descending order of the distance value.
5) Select the top n paths from the sorted paths for survival. The remaining bottom paths may be killed off or discarded.
6) Evolve the selected n paths by repeating Steps 2 through 5 for g generations until there is a convergence.
In some implementations, the initial population of parent paths may be a randomly generated chain of clusters connected together to represent a deployment sequence of clusters. For example, depending on the number of target clusters, 100 randomly generated paths may be used as the initial set of solutions. In a very simplified example, the initial population may include the sequences: C1-C2, C2-C1, and C1/C2. Several factors may be considered when determining the initial population size. A larger population may have a higher chance of finding good solutions. However, a larger population may require more computation resources and computation time. Generally, the population size may depend on the number of target clusters. The initial population of paths may affect how well the final solutions approximate the true Pareto-optimal solutions and how quickly (in terms of the number of generations) the algorithm converges. Accordingly, to increase the efficiency and/or accuracy of the algorithm, in some implementations, the greedy approach may be applied to the initialization step of generating the initial parent population of paths. Similar to the greedy approach described above, the target clusters may be ranked according to the number of other dominant clusters, and then the different batch size options may be sampled based on the dominance-ranked cluster order. However, unlike the exhaustive brute force evaluation of all possible combinations, as discussed above, here, only a limited number of initial feasible solutions may be generated to obtain the initial parent population. This method may effectively reduce the search space to eliminate paths that are unlikely to be optimal. Furthermore, an initial parent population generated using the greedy approach may be a better starting point for the evolutionary algorithm than an initial parent population generated randomly. Consistent with some implementations of the present concepts, because the initial parent population of paths may include fewer than all possible permutations of target clusters, there is no need to generate the entire graph ahead of time. Instead, a limited graph may be initially generated to model the initial parent paths. And then, moving forward, the graph may be updated on an ongoing basis as additional paths are bred through evolution and/or discarded through the selection process. Consistent with the present concepts, the evolutionary algorithm may start with the initial set of paths and then explore new paths that are bred through mutation and/or crossover techniques in an efficient manner at each new generation. 
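For illustration, the sketch below computes the crowding-style distance value defined above and performs the rank-then-distance sorting used for survivor selection. The example objective values are made up, and the handling of boundary solutions (assigning them an infinite distance so they always survive ties) is an added assumption rather than something required by the present concepts.

    # Minimal sketch of the distance formula and rank/distance sorting described
    # above. Objective values are made-up (time, risk) pairs; lower is better.
    def crowding_distances(population):
        # population: list of (time, risk) objective tuples.
        m = 2
        distances = [0.0] * len(population)
        for k in range(m):
            order = sorted(range(len(population)), key=lambda i: population[i][k])
            lo, hi = population[order[0]][k], population[order[-1]][k]
            span = (hi - lo) or 1.0
            # Boundary solutions get infinite distance (assumed convention).
            distances[order[0]] = distances[order[-1]] = float("inf")
            for pos in range(1, len(order) - 1):
                prev_v = population[order[pos - 1]][k]
                next_v = population[order[pos + 1]][k]
                distances[order[pos]] += ((next_v - prev_v) / span) ** 2
        return distances

    def dominance_ranks(population):
        # Number of other solutions that dominate each solution.
        return [sum(1 for o in population
                    if o != s and o[0] <= s[0] and o[1] <= s[1])
                for s in population]

    def select_survivors(population, n):
        ranks = dominance_ranks(population)
        dists = crowding_distances(population)
        # Primary sort: ascending dominance rank; tie-break: descending distance.
        order = sorted(range(len(population)), key=lambda i: (ranks[i], -dists[i]))
        return [population[i] for i in order[:n]]

    combined = [(30, 1.0), (45, 0.4), (60, 0.2), (35, 0.9), (80, 0.8)]
    print(select_survivors(combined, 3))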
The end goal may be to arrive at a generation of paths that is a very close approximation of the true Pareto-optimal front. In some implementations, mutation evolution may be used to randomly change a small portion of the existing path to explore a new path. For example, if a parent path is C1-C2-C3-C4, then this solution may be mutated by randomly changing the order of two or more clusters to generate a new offspring path, such as C1-C4-C3-C2. In this example, the order of clusters C2 and C4 were swapped (i.e., mutation). Additionally, the new offspring path may be checked to confirm that it is a feasible solution in view of any constraints or limits on deployment parameters. FIG.10shows an example mutation evolution, consistent with the present concepts. The top ofFIG.10shows three state vertices in the middle of a longer sequence of vertices of an example parent solution. The three state vertices may represent State_i containing clusters C1, C2, and C3; State_j containing additional clusters C4 and C5; and State_k containing additional clusters C6 and C7. In one example mutation, the optimization model may select a random state vertex in the parent solution. Here in this example, State_j may be randomly selected. Then, the optimization model may randomly sample one or more feasible paths from State_i to State_k. In other mutations, the optimization model may sample paths from one, two, three, or any other number of vertices before and/or after the randomly selected vertex State_j. FIG.10shows three example mutation options. In Mutation Option 1, four clusters C4/C5/C6/C7 are concurrently deployed to, such that the deployment path advances from State_i directly to State_k. Mutation Option 1 may be feasible if the maximum batch size is at least four. Mutation Option 1 may have lower deployment time but higher deployment risk by deploying to all four clusters at once. In Mutation Option 2, from State_i, only cluster C4 is deployed to, and then from State_j′, three clusters C5/C6/C7 are concurrently deployed to, thereby arriving at State_k. In Mutation Option 3, from State_i, three clusters C4/C5/C6 are concurrently deployed to, and then from State_j″, only cluster C7 is deployed to, thereby arriving at State_k. Mutation Option 2 and Mutation Option 3 may be feasible if the maximum batch size is at least three. FIG.10illustrates only three example mutation options for explanatory purposes, but the optimization model may sample many more possible mutations in the search space (i.e., other feasible paths from State_i to State_k). Next, the optimization model may randomly pick one of the three (or more) mutation options to generate a new offspring solution. The new offspring solution may keep the same state vertices before State_i and after State_k as the parent solution. In one implementation, each parent may be mutated to create one offspring. In another implementation, one parent solution may be mutated multiple times to generate multiple offspring solutions, and/or a subset of the parent population may be randomly selected for mutation evolution. In some implementations, crossover or recombination evolution may be used to breed two parent solutions to generate two new offspring solutions. This cross-breeding may also allow the optimization model to explore new paths. 
For example, suppose there are two parent solutions with the following deployment paths:C1/C2/C3-C4/C5-C6/C7C1/C2/C3/C4-C5-C6/C7/C8/C9 In some implementations, two parent solutions may be cross-bred if they share at least one state vertex that is not the first or the last state vertex. In this example, the two parent solutions share a common state vertex that contains clusters C1 through C5. As one example crossover evolution, these two parent solutions may be cross-bred by swapping the vertices before and after the shared vertex to generate two new offspring solutions with the following deployment paths:C1/C2/C3-C4/C5-C6/C7/C8/C9C1/C2/C3/C4-C5-C6/C7 The two offspring solutions may include different parts of their parent solutions. Specifically, in the example above, the first offspring solution may include the earlier segment of the first parent solution before the common state vertex and the later segment of the second parent solution after the common state vertex; and the second offspring solution may include the earlier segment of the second parent solution before the common state vertex and the later segment of the first parent solution after the common state vertex. These four solutions (two parents and two offsprings) may differ in deployment order, batch size, and/or completion percentage (i.e., advancing after deploying to seven clusters versus after nine clusters). In some implementations, where solution paths involve advancing when less than 100% of clusters have been deployed to, the final solutions may include all target clusters by tacking on another state vertex that includes all remaining clusters. Such an additional vertex may not impact the overall deployment time and/or the overall deployment risk, because the deployment process may have already advanced to the next stage and the remaining clusters may be deployed to in the background. The above explanation is just one example of crossover evolution. Other kinds of crossover breeding may be possible, for example, swapping multiple segments of vertices where the two parent solutions share multiple common vertices. A feasibility check may be performed on newly generated offspring solutions. FIG.11shows another example of a crossover evolution, consistent with the present concepts. In this example, Parent 1 is shown on the top-left corner ofFIG.11, and Parent 2 is shown on the bottom-left corner ofFIG.11.FIG.11shows only a short segment of three vertices but the full paths for these two solutions may be much longer. The two parent solutions may have State_i as the common state. In this example crossover evolution, two new offspring solutions may be generated by crossing over the two parent solutions' paths at the common State_i vertex. Accordingly, Offspring 1, shown in the top-right corner ofFIG.11, may include the path segment of Parent 2 before common State_i vertex and the path segment of Parent 1 after common State_i vertex; and Offspring 2, shown in the bottom-right corner ofFIG.11, may include the path segment of Parent 1 before common State_i vertex and the path segment of Parent 2 after common State_i vertex. Thus, this crossover evolution may leverage the genetic concept of DNA sequence swapping. In one implementation, random pairs of a parent population that share at least one common state vertex may be selected for crossover evolution to generate an offspring population. Consistent with the present concepts, offspring solutions may be generated using mutation or crossover, or a combination of both. 
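The sketch below illustrates one simplified way to implement the mutation and crossover operations described above, representing a solution as a list of concurrent batches. The swap-based mutation, the single crossover point at a shared intermediate state, and the feasibility check against a maximum batch size are simplifying assumptions for illustration.

    # Simplified sketch of the mutation and crossover operations described above.
    # A solution is a list of batches (each batch a list of clusters deployed
    # concurrently). The operators and feasibility check are illustrative only.
    import random

    def mutate(solution):
        # Swap two randomly chosen clusters between (or within) batches.
        flat = [(bi, ci) for bi, batch in enumerate(solution) for ci in range(len(batch))]
        (b1, c1), (b2, c2) = random.sample(flat, 2)
        child = [list(batch) for batch in solution]
        child[b1][c1], child[b2][c2] = child[b2][c2], child[b1][c1]
        return child

    def deployed_after(solution, i):
        # Set of clusters deployed once the first i batches are complete (a "state").
        return frozenset(c for batch in solution[:i] for c in batch)

    def crossover(parent1, parent2):
        # Cross at a state (set of completed clusters) shared by both parents.
        states1 = {deployed_after(parent1, i): i for i in range(1, len(parent1))}
        for j in range(1, len(parent2)):
            state = deployed_after(parent2, j)
            if state in states1:
                i = states1[state]
                return parent1[:i] + parent2[j:], parent2[:j] + parent1[i:]
        return None  # no common intermediate state; crossover not applicable

    def feasible(solution, max_batch_size):
        return all(len(batch) <= max_batch_size for batch in solution)

    # Reproduces the two offspring paths from the example above.
    p1 = [["C1", "C2", "C3"], ["C4", "C5"], ["C6", "C7"]]
    p2 = [["C1", "C2", "C3", "C4"], ["C5"], ["C6", "C7", "C8", "C9"]]
    print(crossover(p1, p2))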
During experimental testing of the present concepts, tests of the evolutionary algorithm were run using only mutation evolution and only crossover evolution. In these tests, mutation evolution found a larger number of estimated Pareto-optimal solutions compared to the crossover evolution. However, crossover evolution was able to find certain better solutions that mutation evolution could not find. In the tests, performing both mutation evolution and crossover evolution generally produced better results than using only one type of evolution. However, mutation evolution ran faster than crossover evolution, so mixing mutation and crossover took additional processing time compared to mutation only. Thus, the marginal improvement in solution optimality by mixing crossover evolution to mutation evolution may be weighed against the sacrifice of computational time and resources associated with crossover evolution. The choice of using only mutation evolution, only crossover evolution, or a mix of mutation and crossover and in what proportions may be tweaked and fine-tuned through experiments and trials. After generating a new population of offspring solutions (whether via mutation and/or crossover), the optimization model may select the elite ones among the combination of both the parent solutions and the offspring solutions for survival. The elite solutions may have lower deployment time and lower deployment risk compared to the remainder of solutions that will be discarded. The surviving elite solutions may be considered a new generation of solutions (i.e., the new parent solution) that may be evolved again, and this process may be iterated multiple times for multiple generations. In some implementations, to select the elite solutions, the evolutionary algorithm may use two metrics to sort the parent population and the offspring population jointly. The first metric may be a dominance rank equal to the number of dominant solutions. As explained above, one solution may be dominated by another solution that is better for all objectives (i.e., has a better deployment time value and a better deployment risk value). Therefore, at the very top of the sorting order may be a set of non-dominated solutions that have a dominance rank value of zero. Next in the sorting order may be a set of solutions that are dominated by one solution and thus have a dominance rank value of one. Next in the sorting order may be a set of solutions that are dominated by two solutions and thus have a dominance rank value of two. This ranking process may be repeated to sort the entire populations of parent and offspring solutions by ascending dominance rank values. The second metric may be a distance value that measures the distance of a solution from other neighboring solutions in the objectives space. This metric may measure the density of other solutions around each solution. The evolutionary algorithm may use the example distance formula given above or any other formula that measures the density or scarcity of other solutions in the neighboring objectives space area. Next, the evolutionary algorithm may select the elite individuals in the populations of parents and offsprings based on the two metrics. In one implementation, the entire populations of the parent solutions and the offspring solutions may be primarily sorted together by their dominance rank values. A lower dominance rank value is preferred for surviving the selection process. Then, a certain number of top sorted solutions may be selected for survival. 
In one implementation, the number of surviving solutions, and thus the size of each generation, may be kept the same as the size of the initial population (e.g., 100). In alternative implementations, the sizes of different generations may vary. To break a tie among a group of solutions that have the same dominance rank value, such solutions may be secondarily sorted by their distance values. Solutions in sparse areas (i.e., having high distance values) are preferred for survival over solutions in dense areas (i.e., having low distance values), because diverse populations are more effective in exploring new solutions. The remainder of the solutions that do not make the cut may be discarded as less optimal solutions. Thus, because the solution space (i.e., the number of solutions at each generation) is limited, the computational complexity of the evolutionary algorithm may be significantly lower, and the problem of exponentially growing graph complexity may be mitigated. The surviving solutions may be considered the strongest in this generation and are selected for the next iteration of evolution (mutation and/or crossover) to generate new offspring solutions and thereby explore additional new paths. The above-described process may be repeated multiple times to create new offspring solutions through evolution, retain the top solutions, and eliminate the bottom solutions. With each iteration, the new generation of solutions may be progressively better (i.e., improved optimality) with respect to the time and risk objectives. In some implementations, the new generations of solutions may converge, meaning subsequent generations are not substantially better than previous generations with respect to the objectives. FIGS. 12A and 12B show graph plots of generations of solutions, consistent with the present concepts. These graph plots show the objective space, where the y-axis is time and the x-axis is risk. In one implementation, the time values on the y-axis may be in minutes, and the risk values on the x-axis may be a unitless metric. The plots of solutions in these graph plots were obtained from experimental testing of the evolutionary algorithm, consistent with the present concepts. In FIG. 12A, the initial population of solutions is shown using black circles. The 10th generation of solutions is shown using white circles. Generally, the 10th generation of solutions is better with respect to the two objectives than the initial generation of solutions. In FIG. 12B, the 20th generation of solutions is shown using x's, and the 30th generation of solutions is shown using black squares. As can be seen in these graph plots, generations of solutions, created and retained through multiple iterations of evolution and selection, progressively improve with regard to deployment time and risk. Although not shown in these figures, during the experimental tests, the 40th and 50th generations of solutions were not significantly better than the 30th generation of solutions. Thus, the solutions converged, and the evolutionary process was stopped. At this point, the 50th generation of solutions may be considered the best approximations of the true Pareto-optimal solutions. In one implementation, a fixed number of evolution iterations (e.g., the 50th generation) may be set as the termination of the evolutionary algorithm. In an alternative implementation, the optimization model may be more efficient and/or more effective if the evolutionary algorithm terminates after a variable number of iterations. 
For example, the time and risk values associated with the current generation of solutions may be compared with previous generations (e.g., the past 5 or 10 generations) of solutions. If there is no significant improvement, then the evolutionary algorithm may be terminated. Conversely, if there is a substantial improvement, then the evolutionary algorithm may be continued. Accordingly, the evolutionary algorithm may stop early if convergence is achieved quickly, without wasting time and resources on further generations that would not improve solution optimality. On the other hand, if convergence is not yet achieved, then the evolutionary algorithm may be allowed to continue to further improve solution optimality. As shown through FIGS. 12A and 12B, the evolutionary algorithm may search for the optimal solutions through the evolutionary process. However, the evolutionary algorithm may not guarantee the true Pareto-optimal solutions. Nonetheless, the evolutionary algorithm has proven to be very effective in approximating the Pareto-optimal solutions. Moreover, the evolutionary algorithm may be efficient and run very quickly at finding the estimated Pareto-optimal front. Because the problem space may be reduced to the small set of solutions in a population, the complexity of the graph may be reduced to a manageable size. Thus, the evolutionary algorithm may not require constructing the entire graph upfront. Instead, the evolutionary algorithm may initialize a limited graph based on the initial population of solutions, and then modify the graph as new solutions are explored and selected through the evolutionary process. In some implementations, the optimization model may use different optimization algorithms for different AZs, regions, and/or SDP phases, depending on which optimization algorithm is better suited to handle the specific graph problems. In the experimental testing performed using the present concepts, the performance of multiple optimization algorithms was measured and compared for different sets of graph problems.

TABLE 1
Number of    Brute Force    Greedy-Brute Force    Evolutionary Algorithm
Clusters     Time (s)       Time (s)              Time (s)
 8           120            0.2                   1.6
12           —              0.5                   1.8
15           —              6.5                   2.33
16           —              16                    2.28
18           —              66                    2.6

Table 1 above shows the computational time taken by the greedy-brute force method and by the evolutionary algorithm compared to the baseline brute force method for solving graph problems having varying numbers of clusters. The brute force method may generate the entire graph and exhaustively search all possible solutions to find the global optimum. Therefore, the brute force method may be computationally very expensive even for a small number of clusters. As such, the brute force method was conducted only for the cluster size of 8. By comparison, the greedy-brute force method and the evolutionary algorithm took far less time than the brute force method. The solutions output by the greedy-brute force method were very close to the actual global optimum solutions output by the brute force method. Although the greedy-brute force method reduced the computational complexity compared to the brute force method, the computational time still grew exponentially as the number of clusters grew. This was because the greedy-brute force method still checked all combinations after fixing the ranking of the clusters. Thus, the greedy-brute force method may be more useful for small AZs with fewer clusters than large AZs with many clusters. 
The solutions output by the evolutionary algorithm (NSGA-ii specifically in the experimental tests) were very close to the actual global optimum solutions output by the brute force method. Significantly, the computational time taken by the evolutionary algorithm grew more linearly and did not grow as fast as the greedy-brute force method as the number of clusters increased. This was because the evolutionary algorithm fixed the size of the population and the number of iterations. This reduction in computational time may be a tremendous advantage that may be amplified even more for very large AZs that have many clusters. The computational time needed to find solutions may be very important, because if it takes 10 minutes to compute solutions for a large AZ with 200 clusters, then it may take several hours to find solutions for hundreds of AZs. Waiting hours to generate deployment plans may be undesirable for many reasons, especially because the state of the deployment and the state of the cloud computing fleet may have changed significantly in that time. Furthermore, holding up one deployment for several hours can create a backlog and clog the deployment queue. Comparing the greedy-brute force method and the evolutionary algorithm, the estimated Pareto-optimal solutions output by both the greedy-brute force method and the evolutionary algorithm were close to each other and closely approximated the true Pareto-optimal solutions output by the brute force method. However, the solution optimality was slightly better with the greedy-brute force method over the evolutionary algorithm. Furthermore, the greedy-brute force method was faster for smaller numbers of clusters, whereas the evolutionary algorithm was faster for larger numbers of clusters. Accordingly, in one implementation, the greedy-brute force method may be used for availability zones with fewer than 15 clusters and the evolutionary algorithm may be used for availability zones with 15 or greater clusters, based on the computational times from the experimental testing shown in Table 1. In an alternative implementation, the greedy-brute force method may be used for availability zones with 16 or fewer clusters and the evolutionary algorithm may be used for availability zones with greater than 16 clusters, as a compromise between computational time and solution optimality. To illustrate how the optimization algorithms may be applied to solving a multi-layer graph problem, suppose there is a target fleet containing multiple SDP phases, each SDP phase containing multiple regions, each region containing multiple AZs, and each AZ containing multiple clusters. This hierarchical grouping structure of target clusters may be similar to the structure illustrated inFIG.2. Starting at the AZ layer, the optimization model may generate graphs where the vertices represent clusters and find optimal solutions for cluster ordering within each AZ. Thus, each of the AZs in the target broad SDP phase may be modeled by an AZ-level graph. For each AZ individually, either the greedy-brute force method or the evolutionary algorithm may be selected (e.g., depending on the number of clusters in the AZ) to find the non-dominated solutions of cluster orderings for the AZ. Next, at the region layer, the optimization model may generate graphs where the vertices represent AZs and find optimal solutions for AZ ordering within each region. Thus, each of the regions in the target broad SDP phase may be modeled by a region-level graph. 
The predicted deployment time and risk associated with each AZ may be calculated by aggregating the predicted deployment time and risk associated with the clusters within that AZ. These aggregated deployment time and risk values associated with AZs may be used in the edges of the region-level graphs. For each region individually, either the greedy-brute force method or the evolutionary algorithm may be selected (e.g., depending on the number of AZs in the region) to find the non-dominated AZ orderings for the region. Thus, at this region layer, mutation and/or crossover evolution may explore new offspring sequences of AZs within the region. Next, at the SDP phase layer, the optimization model may generate graphs where the vertices represent regions and find optimal solutions for region ordering within each SDP phase. Thus, each of the SDP phases in the target fleet may be modeled by an SDP phase-level graph. The predicted deployment time and risk associated with each region may be calculated by aggregating the predicted deployment time and risk associated with the AZs within that region. These aggregated deployment time and risk values associated with regions may be used in the edges of the SDP phase-level graphs. For each SDP phase individually, either the greedy-brute force method or the evolutionary algorithm may be selected (e.g., depending on the number of regions in the SDP phase) to find the non-dominated region orderings for the SDP phase. Thus, at this SDP phase layer, mutation and/or crossover evolution will explore new offspring sequences of regions within the SDP phase. Deployment Plan Output The above-described methods may be applied from the most granular layer to the least granular layer to solve for the Pareto solutions for the entire rollout. In some implementations, deployment plans that correspond to the graph problem solutions may be generated and then outputted as deployment plan recommendations. The deployment plans may include the optimal deployment order, other optimal deployment parameters, the overall deployment time, and/or the overall deployment risk. A deployment administrator may choose a preferred deployment plan from the selection of deployment plan recommendations based on the desired level of speed and risk. In some implementations, the selected deployment plan may be sent to a deployment orchestrator to carry out the deployment plan. Deployment plans may be formatted according to a schema that can be consumed by a deployment orchestrator. Experimental tests were performed to compare the performance of deployments using the present concepts (i.e., deployment plans generated using the smart deployment method described above) versus conventional deployments. In 94% of the cases, the smart deployment methods reduced the deployment times and/or the deployment risks (measured in AIR). Furthermore, the present concepts were able to reduce the deployment times by 1/100 or better in many cases. As mentioned above, the deployment time values and deployment risk values associated with the selection of deployment plans output by the optimization model may be estimates calculated by the prediction models. In some implementations, the actual deployment time and the actual impact on deployment risk (e.g., AIR) may be measured and compared with the estimates. Any discrepancies between the estimates and the actuals may be used to improve the prediction models and/or the optimization model. 
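For illustration only, a deployment plan recommendation serialized for an orchestrator might resemble the following; every field name here is hypothetical and merely suggests the kind of information (ordering, batches, parameters, and estimates) such a plan could carry.

    # Purely hypothetical example of a serialized deployment plan recommendation.
    # None of these field names are mandated by the present concepts.
    import json

    example_plan = {
        "payload_id": "payload-123",                  # hypothetical identifier
        "estimated_total_time_minutes": 5400,
        "estimated_total_risk": 0.42,                 # e.g., AIR-based estimate
        "stages": [
            {"sdp_phase": "canary", "region": "region-a", "availability_zone": "az-1",
             "batches": [["C1", "C2"], ["C3"]],
             "completion_percentage": 50, "bake_time_hours": 6},
            {"sdp_phase": "pilot", "region": "region-a", "availability_zone": "az-2",
             "batches": [["C4", "C5", "C6"]],
             "completion_percentage": 100, "bake_time_hours": 24},
        ],
    }
    print(json.dumps(example_plan, indent=2))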
In some implementations, the deployment orchestrator may provide status reports and/or progress updates during and after the deployment. Such reports and updates may include errors as well as current deployment time and current impact on deployment risk. In some implementations, the administrator may be able to query the orchestrator regarding any details related to current and past deployments. Real-time insight into the current deployment may be beneficial, especially where deployment parameters and constraints could be modified on the fly. For example, if an administrator is uncertain about the quality of an update, she may initially set a low batch size limit so that not too many clusters are updated at the same time. However, if the early phases of deployment (e.g., to the canary and pilot phases) are very successful, then the administrator may increase the batch size limit in the middle of deployment to confidently roll out the proven update to many clusters concurrently for later phases (e.g., the broad phase) of the deployment. The orchestrator may also be configured with mechanisms to deal with problems, such as certain deployment parameters in the deployment plan that cannot be followed. For example, the orchestrator may be configured to fall back to a default deployment plan using default deployment parameters in response to errors or failures. Process FIG. 13 shows a flow diagram of a smart deployment method 1300, consistent with the present concepts. The smart deployment method 1300 is presented for illustration purposes and is not meant to be exhaustive or limiting. The acts in the smart deployment method 1300 may be performed in the order presented, in a different order, serially or concurrently, or may be omitted. Any or all of the acts in the smart deployment method 1300 may be performed by software, hardware, or a combination of both. In box 1302, one or more prediction models may be trained to predict one or more objectives for target clusters. Historical deployment data may be used as training data. Historical deployment data may include details about payloads and details about target clusters. Examples of objectives may include deployment time and deployment risk. In one implementation, the prediction models may be machine-learning models that use a gradient boosting algorithm. In box 1304, a deployment request may be received from a developer, an administrator, and/or a team. In some implementations, the deployment request may include information about the payload, an identification of target clusters, deployment parameters, and/or deployment constraints. The target clusters may be grouped and/or hierarchically arranged, for example, into AZs, regions, and/or SDP phases. The target clusters, AZs, and/or regions may be redundant backups for one another and therefore restricted from being deployed to concurrently. The deployment parameters and constraints may include, for example, cluster orders, batch sizes, bake times, completion percentages, etc. In box 1306, a shortest path graph problem that models the space of possible deployment plans may be generated. In one implementation, the graph may include state vertices that contain a list of clusters that have been deployed to up to that point. The state vertices may be connected by directed edges containing a deployment action (e.g., a list of clusters deployed to in order to transition from one state to another state), a predicted deployment time for the deployment action, and a predicted deployment risk for the deployment action. 
The predicted deployment time values and the predicted risk values may be calculated by the prediction models. The graph may also include sink vertices and/or transition vertices that include completion percentage values and/or bake time values. The entire graph, from the most granular AZ layer to the least granular SDP phase layer, that models every possible deployment plan may be generated ahead of time. However, such a graph may be too massive if the number of target clusters is high. Furthermore, the computational complexity of solving the shortest path problem for such an enormous graph may be NP complete, which may not be feasible in most real-life circumstances. Accordingly, consistent with some implementations of the present concepts, only a partial graph may be initially generated. For example, for a small AZ with only a few clusters, the greedy-brute force method may be selected. Applying the greedy-brute force method, each cluster may be assigned a dominance rank based on the number of other clusters that dominate it with respect to both time and risk. Then, a limited graph may be generated that contains paths that conform to an ascending ordering of the clusters based on the dominance ranking. Alternatively, for a large AZ with a greater number of clusters, the evolutionary algorithm may be selected. Applying the evolutionary algorithm, an initial population of parent solutions may be randomly generated. Then, a limited graph may be generated to model the initial population of parent solutions. The graph may then be modified as additional solutions are evaluated. In box1308, optimal solutions to the shortest path graph problem may be found. A brute force method may be used to evaluate every possible path in the entire graph to find the optimal solutions. However, because the computational complexity of the graph problem is NP complete, this exhaustive technique may not be feasible in real-life scenarios. Alternatively, an optimization algorithm, such as the greedy-brute force method and/or an evolutionary algorithm, may be used to estimate Pareto-optimal solutions to the graph problem. Because these optimization algorithms do not require generating the entire graph ahead of time, a limited graph may be generated in box1306. In one implementation, the greedy-brute force method may be used in smaller graph problems, and the evolutionary algorithm may be used in larger graph problems. Thus, a set of non-dominated solutions that differ in associated time and risk may be found. Furthermore, boxes1306and1308may be repeated from most to least granular layers. That is, boxes1306and1308may be performed by generating graphs that model AZs of clusters and finding optimal solutions of cluster sequences for each AZ. Then, boxes1306and1308may be performed again at the less granular layer by generating graphs that model regions of AZs and finding optimal solutions of AZ sequences for each region. Then, boxes1306and1308may be performed again at the less granular layer by generating graphs that model SDP phases of regions and finding optimal solutions of region sequences for each SDP phase. In box1310, a set of deployment plan recommendations may be output. The set of deployment plan recommendations may correspond with the set of non-dominated solutions found in box1308. 
Each deployment plan recommendation may include, for example, deployment order, batch size, completion percentage, bake time, estimated overall deployment time, estimated overall deployment risk, and/or any other details pertinent to the deployment request. In one implementation, the set of deployment plan recommendations may be output to an administrator for review and selection. In box1312, a selection of a preferred deployment plan may be received from the administrator. For example, the administrator may evaluate the deployment plan recommendations and choose the preferred one based on her preferred deployment time and/or deployment risk. In box1314, the selected deployment plan may be performed. In one implementation, the deployment plan chosen by the administrator may be executed by a deployment orchestrator by rolling out the payload to the target clusters according to the selected deployment plan. After the payload has been deployed, the target clusters may continue to operate (e.g., provide cloud computing services). For example, an application updated by a payload may be executed to run and provide services, an operating system patched up by a payload may be executed, a network device or a computer device whose networking configurations have been modified by a payload may communicate over networks. Environment FIG.14shows an environment1400, consistent with some implementations of the present concepts. For purposes of explanation, the environment1400may include client devices1402. Examples of the client devices1402may include personal computers, desktop computers, servers, notebook computers, cellular phones, smartphones, personal digital assistants, tablets or pad type computers, mobile computers, cameras, appliances, virtual reality headsets, video game consoles, controllers, smart devices, IoT devices, vehicles, watches, wearables, set-top boxes, game systems, automobile entertainment or navigation consoles, etc., and/or any of a myriad of ever-evolving or yet-to-be-developed types of electronic devices. In the example shown inFIG.14, the client devices1402may include a laptop1402(1), a tablet1402(2), and a smartphone1402(3). The environment1400may include a cloud computing fleet1404. In one implementation, the cloud computing fleet1404may include a fleet of server computers that provide one or more cloud services, such as email, photos, videos, streaming, documents, social media, apps, virtual machines, websites, etc. The term “device,” “computer,” or “computing device” as used herein can mean any type of electronics that has some amount of processing capability and/or storage capability. Processing capability can be provided by one or more hardware processors that can execute data in the form of computer-readable instructions to provide a functionality. Data, such as computer-readable instructions and/or user-related data, can be stored on storage, such as storage that can be internal or external to the device. The storage can include any one or more of volatile or non-volatile memory, hard drives, flash storage devices, optical storage devices (e.g., CDs, DVDs etc.), and/or remote storage (e.g., cloud-based storage), among others. As used herein, the term “computer-readable media” can include transitory propagating signals. In contrast, the term “computer-readable storage media” excludes transitory propagating signals. Computer-readable storage media may include computer-readable storage devices. 
Examples of computer-readable storage devices may include volatile storage media, such as RAM, and non-volatile storage media, such as hard drives, optical discs, and flash memory, among others. Consistent with some implementation of the present concepts, the environment1400may include a smart deployment system1406. The smart deployment system1406may include software, hardware, or a combination that implements the present concepts described above. In some implementations, the smart deployment system1406may be hosted on one or more devices (e.g., a single computer or multiple computers, such as a cluster of servers). The smart deployment system1406may include machine-learning prediction model, such as a time prediction model and/or a risk prediction model. The smart deployment system1406may also include an optimization model capable of using one or more optimization algorithms to solve graph problems that model deployment planning problems. In some implementations, the smart deployment system1406may perform all or parts of the smart deployment method1300, as well as other acts described herein, to deploy updates to the cloud computing fleet1404. The client devices1402, the cloud computing fleet1404, and the smart deployment system1406may communicate with one another via one or more networks1408. The networks1408may include the Internet or be used to access the Internet. FIG.14shows two example device configurations1410(1) and1410(2) that can be employed by one or more devices in the smart deployment system1406. The devices in the smart deployment system1406can employ either of the configurations1410(1) or1410(2), or an alternate configuration. One instance of each configuration1410is illustrated inFIG.14. The configuration1410(1) may represent an operating system (OS) centric configuration. The configuration1410(2) may represent a system-on-chip (SoC) configuration. The configuration1410(1) can be organized into one or more applications1412, an operating system1414, and hardware1416. The configuration1410(2) may be organized into shared resources1418, dedicated resources1420, and an interface1422therebetween. In some implementations, the applications1412may include software for performing the smart deployment method1300, in whole or in part. In either configuration1410, the device can include a storage1424and a processor1426. The device can also include a smart deployment system1428, consistent with the present concepts. In one implementation, the storage1424may store software for implementing the smart deployment method1300, and/or store one or more graphs used in the present concepts. As mentioned above, the configuration1410(2) can be thought of as an SoC type design. In such a case, functionality provided by the devices in the smart deployment system1406can be integrated on a single SoC or multiple coupled SoCs. The one or more processors1426can be configured to coordinate with the shared resources1418, such as the storage1424, etc., and/or the one or more dedicated resources1420, such as hardware blocks configured to perform certain specific functionality. Thus, the term “processor” as used herein can also refer to central processing units (CPUs), graphical processing units (GPUs), controllers, microcontrollers, processor cores, or other types of processing devices. Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed-logic circuitry), or a combination of these implementations. 
The term “component” or “module” as used herein generally represents software, firmware, hardware, whole devices or networks, or a combination thereof. In the case of a software implementation, for instance, these may represent program code that performs specified tasks when executed on a processor (e.g., a CPU or CPUs). The program code can be stored in one or more computer-readable memory devices, such as computer-readable storage media. The features and techniques of the component or module may be platform-independent, meaning that they may be implemented on a variety of commercial computing platforms having a variety of processing configurations. The environment1400illustrated inFIG.14is merely one example. The environment1400need not include all the example elements described in connection withFIG.14, and the environment1400may also include additional elements not explicitly described in connection withFIG.14. Advantages and Applications The technical effects of the present concepts described above may include efficiently solving for optimal deployment plans using, for example, machine-learning prediction models and fast optimization algorithms. Conventional deployment planning primarily relied on random ordering of target clusters or manual effort in ordering the target clusters without relying on data. The technical solutions explained herein may leverage historical deployment data to accurately predict multiple objectives for future deployments. Furthermore, the technical solutions may use artificial intelligence to accurately and efficiently provide metrics to enable finding optimal deployment solutions with minimal human effort. Moreover, the technical effective of the present concepts may further include quickly finding optimal solutions to massively large graph problems by incorporating efficient optimization algorithms. By reducing the computational complexity of graph problems, finding optimal deployment solutions can be performed very quickly. For example, by leveraging the greedy-brute force method and/or the evolutionary algorithm, the optimization model can efficiently estimate the Pareto-optimal front of solutions that optimize both speed and risk without generating an enormous graph problem. Therefore, deployments can be performed faster, the deployment queue can be reduced, and security risks or bugs can be fixed quickly. Although the present concepts have been explained above in the context of deploying updates to the cloud, the present concepts may have wide applications in different contexts. For example, a retailer who wants to sell a product to target customers in multiple countries (e.g., United States, China, India, Canada, and Japan) may use the present concepts to determine deployment plans that optimize the retailer's objectives. For example, the target customers in the target countries may be hierarchically grouped into various geographical regions, such as cities, counties, states, provinces, and/or territories, etc. Consistent with the present concepts, the retailer may train machine-learning prediction models to predict the retailer's objectives, such as revenue, profits, customer satisfaction, brand recognition, manufacturing costs, manufacturing time, shipping costs, employee costs, deployment time, etc. The prediction models may be trained based on past historical data derived from how these objectives were affected by rolling out similar products to the target countries or subregions. 
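Before turning to feature engineering, the idea of objective prediction can be made concrete with a small sketch. The following Python example is purely illustrative: the features, historical records, and helper names are assumptions, and ordinary least squares merely stands in for whichever machine-learning prediction model is actually trained. It fits simple time and risk predictors from past rollout records and scores a new candidate, in the way that predicted values could weight edges of the deployment graph.

import numpy as np

# Hypothetical historical rollout records: one row per past deployment or rollout,
# with simple numeric features (e.g., payload size in MB, number of target
# clusters or regions, prior failure rate) and the observed objective values.
features = np.array([
    [120.0, 8, 0.02],
    [300.0, 20, 0.05],
    [80.0, 5, 0.01],
    [450.0, 35, 0.07],
])
observed_time = np.array([6.0, 14.0, 4.0, 22.0])    # hours
observed_risk = np.array([0.3, 1.1, 0.1, 1.9])      # e.g., expected failures

def fit_linear_model(X, y):
    """Fit y ~ X @ w + b by ordinary least squares (a stand-in for whatever
    machine-learning prediction model is actually used)."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])   # add a bias column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

time_model = fit_linear_model(features, observed_time)
risk_model = fit_linear_model(features, observed_risk)

def predict(model, x):
    return float(np.append(x, 1.0) @ model)

# Predicted objectives for a new candidate deployment, usable as edge weights
# in the shortest path graph that models the deployment planning problem.
candidate = np.array([200.0, 12, 0.03])
print(predict(time_model, candidate), predict(risk_model, candidate))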
Feature engineering relating these objectives may involve developing quantifiable metrics for some of the objectives that are typically more quality-based, such as customer satisfaction and/or brand recognition. The product rollout problem may be modeled by a shortest path graph problem. An optimization model may use one or more optimization algorithms to find optimal solutions that include an optimal ordering of the countries (and an ordering of more granular geographical regions) and estimated objective values (e.g., estimated deployment time, revenue, costs, profits, etc.). Many other parameters may be optimized, such as an optimal price, optimal bargain discounts (e.g., sales and/or marketing campaigns), an optimal batch size (i.e., introduce the product in multiple countries concurrently), an optimal bake time (i.e., time gaps between countries), optimal inventory routing, etc. It should be apparent from these examples that the present concepts may be used for any optimization problem and may be especially advantageous where the complexity of the problem would conventionally grow exponentially. By predicting objectives and modeling the problem with a graph problem, the complexity of the modeled optimization problem may be reduced and solved more efficiently. Various examples are described above. Additional examples are described below. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims and other features and acts that would be recognized by one skilled in the art are intended to be within the scope of the claims. One example includes a system comprising a processor, a storage having instructions which, when executed by the processor, cause the processor to receive a request to deploy an update to target clusters of computers, the target clusters being grouped in multiple levels, generate multiple layers of graphs having multiple paths that model multiple deployment plans, the graphs including vertices having completed clusters and edges having predicted deployment time values and predicted deployment risk values, finding solutions to shortest path problems presented by the graphs using at least one optimization algorithm, outputting deployment plan recommendations corresponding to the solutions, receiving a selection of a preferred deployment plan from among the deployment plan recommendations, and deploying the update to the target clusters based on the preferred deployment plan. Another example can include any of the above and/or below examples where the instructions further cause the processor to train a time prediction model and a risk prediction model based on past deployment data, calculate the predicted deployment time values using the time prediction model, and calculate the predicted deployment risk values using the risk prediction model. Another example can include any of the above and/or below examples where the solutions are approximations of Pareto-optimal solutions that optimize deployment time and deployment risk. 
Another example can include any of the above and/or below examples where the preferred deployment plan includes one or more of: a deployment order of the target clusters, a batch size, a completion percentage, a predicted overall deployment time value, and a predicted overall deployment risk value. Another example can include a computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to receive a deployment planning problem to optimize deployment time and deployment risk while deploying a package to target computers, generate a shortest path graph problem that models the deployment planning problem, the shortest path graph problem including a graph having paths, find solutions that approximate Pareto-optimal solutions to the shortest path graph problem using an optimization algorithm, and output deployment plans that correspond to the solutions, the deployment plans including sequences of the target computers. Another example can include a method comprising receiving a request to deploy a package to targets, generating a graph that models a deployment planning problem presented by the request, finding a solution to a shortest path problem based on the graph, generating a deployment plan based on the solution, and deploying the package to the targets based on the deployment plan. Another example can include any of the above and/or below examples where the package is a software update and the targets are clusters of computers. Another example can include any of the above and/or below examples where the targets are grouped into multiple levels, the method further comprising generating multiple layers of graphs that model the deployment planning problem of deploying the package to the targets in the multiple levels. Another example can include any of the above and/or below examples where the graph includes state vertices that contain lists of completed targets. Another example can include any of the above and/or below examples where the graph includes directed edges between the state vertices, the edges being weighted with time values and risk values. Another example can include any of the above and/or below examples where the method further comprises predicting the time values and the risk values using machine-learning prediction models. Another example can include any of the above and/or below examples where the method further comprises training the machine-learning prediction models based on features from past deployments of similar packages to similar targets. Another example can include any of the above and/or below examples where the method further comprises finding multiple solutions to the shortest path problem, the multiple solutions being approximations of Pareto-optimal solutions, generating multiple deployment plans based on the multiple solutions, and outputting the multiple deployment plans. Another example can include any of the above and/or below examples where the multiple solutions are non-dominated solutions with respect to multiple objectives. Another example can include any of the above and/or below examples where the multiple objectives include time and risk. Another example can include any of the above and/or below examples where the method further comprises using at least one of a greedy-brute force algorithm or an evolutionary algorithm to find the multiple solutions.
Another example can include any of the above and/or below examples where using the greedy-brute force algorithm comprises sorting the targets based on a dominance rank with respect to multiple objectives. Another example can include any of the above and/or below examples where using the evolutionary algorithm comprises using at least one of mutation evolution or crossover evolution. Another example can include any of the above and/or below examples where the method further comprises receiving a selection of a preferred deployment plan from among the multiple deployment plans and deploying the package to the targets based on the preferred deployment plan. Another example can include any of the above and/or below examples where the method further comprises executing an application modified by the package on the targets.
129,263
11861353
DESCRIPTION OF EMBODIMENTS Before describing example embodiments of the present invention, the state element model and the like will first be described.FIG.1is a schematic diagram showing an example of a model that models a system to which an automatic planning problem is applied. The model that models the system to which the automatic planning problem is applied in the form illustrated inFIG.1is called the “state element model. InFIG.1, an example of a state element model corresponding to an update of a web service deployed on a virtual machine is illustrated. The state element model described inFIG.1comprises two parts, Model600and Constr601. Model600comprises a plurality of rectangles that conceptually represent parts of the system or updates thereof. Each of these multiple rectangles is referred to as a “state element”. Each state element has a unique identification (ID) in the state element model. Each state element contains a system of so-called state transition systems. That is, each state element contains a system comprising “states” indicated by rounded squares and “transitions” indicated by arrows connecting “states” and “states”. These states and transitions have unique IDs in the state elements. There may be duplication in the IDs of the “states” and the “transitions” among different state elements. The string shown in the rounded square is the ID of the “state”. InFIG.1, the ID of “transition” is omitted. Note that a “state” is not necessarily indicated by a rounded rectangle, and in the example described below, a “state” may be indicated by a circle or an oval. Each rectangle (in other words, each state element) contains one “initial state” corresponding to the state before the update, and one “target state” corresponding to the state after the update. In the following description, the “initial state” is illustrated by a double-lined rounded rectangle or the like, and the “target state” is illustrated by a black-filled rounded rectangle or the like. However, these two types of states are not necessarily different states, and the initial state and the target state may be identical. In addition, each state element has a “current state” that indicates what state the part that the state element represents is currently in. In the following description, the state in which an arrow with a black circle is added to the end opposite to the arrowhead is illustrated as the “current state”. The arrow representing the current state (the arrow with the black circle at the end opposite to the arrowhead) is different from the arrow representing a transition. The current state is always the same as the initial state at the beginning, and it is moved by state transitions during the automatic planning procedure. A concrete example of a state element is shown below. InFIG.1, the sign e602indicates a state element (ID: AppConf) representing a configuration file of an application and its update. The state element AppConf takes three states, “does not exist (none),” “a configuration file before update (old) exists (old),” and “a configuration file of the latest version exists (new),” and can go back and forth between them. In other words, AppConf is a state element that expresses the update of the configuration file of the application. As mentioned earlier, the current state of AppConf is the same as the initial state “old”, but the current state can be moved to “new” by using the transition extending to “new”. 
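Although the state element model is described here purely at the conceptual level, a minimal data-structure sketch may help fix the terminology. The following hypothetical Python classes (all names, and the exact transition set of AppConf, are illustrative assumptions rather than any mandated implementation) capture a state element with its states, transitions, initial state, target state, and current state, using the AppConf element ofFIG.1as an example.

from dataclasses import dataclass, field


@dataclass
class Transition:
    src: str                      # ID of the source state
    dst: str                      # ID of the destination state
    # dependency: maps a state element ID to the set of allowed current states,
    # e.g., {"AppConf": {"new"}, "AppPkg": {"new"}}
    dependency: dict = field(default_factory=dict)


@dataclass
class StateElement:
    element_id: str
    states: set
    transitions: list
    initial: str
    target: str
    current: str = None           # the "current state"; starts at the initial state

    def __post_init__(self):
        if self.current is None:
            self.current = self.initial

    def fire(self, transition):
        """Move the current state along a transition whose source matches it."""
        if transition.src != self.current:
            raise ValueError("transition not enabled from the current state")
        self.current = transition.dst


# The AppConf state element of FIG.1: none/old/new, initially "old", target "new".
app_conf = StateElement(
    element_id="AppConf",
    states={"none", "old", "new"},
    transitions=[Transition("old", "new"), Transition("new", "old"),
                 Transition("old", "none"), Transition("none", "old")],
    initial="old",
    target="new",
)
app_conf.fire(Transition("old", "new"))
print(app_conf.current)  # new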
Model600also includes labels attached to the transitions contained in the state elements, as shown by the sign d603inFIG.1. The content described in the label is referred to as a “dependency” on the transition to which the label is attached. The label contains one or more records of the form “e: {s[1], s[2], . . . , s[k]}”. Here, e is the ID of the state element, and s[1], s[2], . . . , s[k] are all IDs of the states included in the state element e. In the following, “e: {s[1], s[2], . . . , s[k]}” contained in the label is referred to as a “dependency record” for state element e. The “dependency” is a set of dependency records of the form “e: {s[1], s[2], . . . , s[k]}”. If a dependency is attached to a transition, the state cannot be moved by the transition unless all the dependency records contained in the label are satisfied. Here, the dependency record “e: {s[1], s[2], . . . , s[k]}” is satisfied means that the current state of state element e is consistent with one of the states s[1], s[2], . . . , s[k]. The dependency is said to be satisfied when all the dependency records listed in the label are satisfied. The dependency indicates a sufficient condition for the state transition to be executed normally. In other words, under the condition that the dependency is satisfied, it is theoretically expected that the execution of the transition will be completed normally. A specific example of dependency is shown. The label d603shown inFIG.1is attached to a transition extending from “stop” to “new” contained in a state element (ID: Service) representing the startup state of the service, and contains two dependency records “AppConf: {new}” and “AppPkg: {new}”. This dependency prevents the service from transitioning its state from the initial state “old” through “stop” to “new”. The state transition from “stop” to “new” can only be performed when both the states of AppConf and AppPkg are set to “new”. Next, Constr601is described. Constr601is a conditional expression for specifying the states which may be passed during a system update, and is referred to as a “transient requirement”. The constraint expression uses a variable of the form “e.s” using e, which is the ID of the state element, and s, which is the ID of the state, and this variable changes its value according to the current state of the system, as shown in (1) and (2) below. (1) If the current state of a state element e is s, then e.s.=1. (2) If the current state of a state element e is not s, then e.s.=0. Due to the behavior of this variable, the transient requirement acts as a constraint expression on the current state of the system. Specific examples of transient requirements are shown. Constr601illustrated inFIG.1is a constraint expression “Service.stop+Service2.stop≤1” that expresses a transient requirement for the state elements Service and Service2, which represent the startup states of two services. This constraint expression is satisfied if either Service or Service2 is not a stop. Conversely, the constraint expression is violated when both are a stop (in other words, when both services have stopped). In other words, Constr601expresses the transient requirement that “during an update, one of the services should remain running”. The above is an overview of the state element model. Next, it is described about the automatic planning on this state element model and the system update procedure planning by it. 
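The dependency records and the transient requirement Constr601lend themselves to a simple executable check. The sketch below is a hypothetical illustration, assuming plain dictionaries for the current global state; it tests whether the dependency of label d603is satisfied and whether the current state violates the transient requirement “Service.stop+Service2.stop≤1”.

def dependency_satisfied(dependency, current_states):
    """A dependency is a dict {element_id: set_of_allowed_states}; it is satisfied
    when, for every record, the element's current state is one of the allowed states."""
    return all(current_states[e] in allowed for e, allowed in dependency.items())


def transient_requirement_ok(current_states):
    """Constr601: Service.stop + Service2.stop <= 1, i.e., at most one service stopped.
    The variable e.s is 1 when element e is currently in state s, and 0 otherwise."""
    service_stop = 1 if current_states["Service"] == "stop" else 0
    service2_stop = 1 if current_states["Service2"] == "stop" else 0
    return service_stop + service2_stop <= 1


# One possible snapshot of the global state during the update of FIG.1 (illustrative).
current = {"AppConf": "new", "AppPkg": "new", "Service": "stop", "Service2": "old"}

# The label d603 on the Service transition stop -> new.
d603 = {"AppConf": {"new"}, "AppPkg": {"new"}}

print(dependency_satisfied(d603, current))   # True: both AppConf and AppPkg are "new"
print(transient_requirement_ok(current))     # True: only Service is stopped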
Automatic planning in the state element model is the problem of planning the route to be reached by transitions from an initial state to a target state on the state element model. In the state element model, the sufficient conditions for a transition to be feasible are modeled as the dependencies, and the states that can be passed through during the update are modeled as transient requirements. Therefore, if the automatic planning problem is solved for a route that satisfies all the dependencies correctly and does not violate the transient requirements, the obtained route can be used as an “update procedure,” which is the solution of the system update procedure planning. In this way, the automatic generation of correct and safe system update procedures can be achieved by the automatic planning problem. When solving an automatic planning problem on a certain model, a route search is generally performed in the state space comprising the states of the entire system (global state). The information that shows the state of each state element (each rectangle shown inFIG.1) in the state element model that represents the system to be updated is called the global state.FIG.2shows the transitions of the global state of the system represented by the state element model illustrated inFIG.1. When solving an automatic planning problem on the state element model, the state space comprising the global states is considered by expressing the state of the entire system as a combination of each state element, and the route of the transition of the global state is searched. A system is defined as a model consisting of parts. Then, the system defined as a model is expanded and converted into a transition system concerning the state of the whole system when it is solved as an automatic planning problem. In general, the size of this expanded state transition system increases in the exponential order of the number of parts. The method described in PTL 1 limits the input to the automatic planning engine to the “differences” in the system that need to be updated. Since the amount of changes required to the system in a single update is often not so large in reality, focusing only on the differences made it possible to automatically generate procedures for large systems in a realistic time. However, there are practical cases in which large-scale updates are performed in a single system update. One such case is “an operation to add (hereinafter called “scale-out”) or update (hereinafter called “bulk update”) a large number of identical subsystems”. For example, a “bulk update” is to update the operating system (OS) of all terminals connected to a network, and a “scale-out” is to expand the scale of a system that is responsible for service operation. Since such updates are simple, it is natural to treat them as a “single update”, but the automatic planning problem that arises from this is a large-scale one. Homogeneous subsystems that are added or updated by scale-out or bulk update are hereinafter referred to as “units”. FIG.3shows an example of a state element model of a system similar toFIG.1. In the example shown inFIG.1, the number of services is 2, whereas in the example shown inFIG.3, the number of services is 100. Also, in the example shown inFIG.1, the number of configuration files associated with one service is one, whereas in the example shown inFIG.3, the number of configuration files associated with one service is two. 
When a single service is viewed as a unit, the example shown inFIG.1has two units, whereas the example shown inFIG.3has 100 units. If this is simply expanded in the same way as inFIG.2, the state transition system after the expansion becomes an extremely large-scale problem with a number of states as large as 20 to the 100th power (=approximately 10 to the 130th power). In order to reduce such an explosive increase in the amount of computation, each unit can be simplified in some way, and the “number of states per unit” can be reduced. For example, a simple method would be to exclude all parts from each unit except those that are referenced from outside.FIG.4shows how the automatic planning problem was solved based on this idea. However, the resulting procedure is inappropriate as an update procedure. This is due to the fact that if the relationship between the parts referenced from outside the unit and the parts not referenced from outside the unit is completely severed, there will be no consistency between the “update procedure outside the unit (and the update procedure inside the unit for the referenced parts)” and the “update procedure inside the unit. Therefore, in order to simplify the model with the aforementioned policy of “reducing the number of states in one unit”, it is necessary to convert the model so that the “update procedure on the simplified model” is consistent with the “update procedure inside the unit”. In the present invention, the state element model is simplified and a route in the simplified state element model is derived. Then, the route is converted to the route in state element model before simplification to prevent an explosive increase in computational complexity. Unit is described in more detail. As mentioned above, homogeneous subsystems that are added or updated by scale-out or bulk update are called “units”. In more detail, each unit satisfies the following conditions. The combination of state elements (rectangles illustrated inFIG.1and the like) contained in individual units is common to each unit, and the labels contained in each unit are similar labels. Furthermore, the “initial states” of the corresponding state elements contained in the individual units are common, and similarly, the “target states” of the corresponding state elements contained in the individual units are also common. The “state” within a unit that is referenced from outside the unit is common for each unit. However, the reference source of the state may be different for each unit. Similarly, the “state” within a unit that is referred to from a constraint expression is common to each unit. For example, in the examples shown inFIG.1andFIG.3, the “state” in the unit referenced by the constraint expression is common to the “stop” in the state element “Service”. If a label contained in a unit refers to the “state” of an element outside the unit, the “transition” to which such a label is attached is common to each unit. These conditions are met by each unit. The information that indicates the state of each state element contained in a unit is referred to as the “unit global state”. An example embodiments of the present invention are described below with reference to the drawings. Example Embodiment 1 [Description of the Configuration] FIG.5is a block diagram showing an example of a system update procedure planning apparatus of a first example embodiment of the present invention. 
The system update procedure planning apparatus100of the first example embodiment includes a state element model simplification section101, an automatic planning section102, and an internal transition completion section103. The system update procedure planning apparatus100shown inFIG.5receives as input a state element model and unit affiliation information indicating whether each state element included in the state element model “corresponds to a part E[i, j] included in a unit C[i]” or “is not included in any unit”, and outputs the update procedure that is the solution of the state element model. The unit affiliation information can be said to be information that indicates to which of a plurality of units each state element in the state element model belongs. By using this unit affiliation information, the following operations (1) and (2) are possible. (1) The state element model simplification section101obtains information from the state element ID that “corresponds to the part E[i, j] included in the unit C[i]” or “is not included in any unit”. (2) The state element model simplification section101specifies C[i] and extracts all the included state elements. The state element model simplification section101receives a state element model and unit affiliation information as input, and outputs a simplified model in which each unit included in the state element model has been simplified, and an internal route retention table. The internal route retention table is a table that can refer to a route from Sa to Sb using only the internal state as a key for two different states Sa and Sb belonging to the same unit. An example of an internal route retention table will be described later. In addition, as described below, the internal route retention table is input to the internal transition completion section103that performs the operation of restoring the solution of the original state element model from the solution of the simplified model. The automatic planning section102receives the simplified model, which is an output of the state element model simplification section101, as an input, and outputs a simplified update route that is a solution of the simplified model by a route searching algorithm. The operation of the automatic planning section102to derive a route (simplified update route) that is a solution of the simplified model may be realized by a known method. For example, the automatic planning section102may derive the route that is the solution of the simplified model by the method described in PTL 2. The internal transition completion section103receives as input the state element model, the simplified update route, which is an output of the automatic planning section102, and the internal route retention table, which is an output of the state element model simplification section101, and recovers the update route on the state element model before being simplified from the simplified update route. With the above configuration, the system update procedure planning apparatus100efficiently generates an update route on the original state element model by obtaining and processing the results of the route search on the simplified model. The state element model simplification section101, the automatic planning section102, and the internal transition completion section103are realized, for example, by a CPU (Central Processing Unit) of a computer that operates according to a system update procedure planning program. 
For example, the CPU may read the system update procedure planning program from a program recording medium such as a program storage device of the computer, and operate as the state element model simplification section101, the automatic planning section102, and the internal transition completion section103according to the program. [Operation Description] The following describes, with reference to the drawings, a processing process in which the system update procedure planning apparatus100of the present example embodiment outputs an update route. First, the process of the state element model simplification process by the state element model simplification section101is described.FIG.6is a flowchart showing an example of the process of simplifying the state element model by the state element model simplification section101. In the following example, the case where a state element model containing N units shown inFIG.7is input is used as an example. It is also assumed that the constraint expression included in the state element model is Equation (1) shown below. Server<1>.Startup+ . . . +Server<N>.Startup≥N/2 (1) The i shown inFIG.7is a value for distinguishing individual units included in the state element model input to the system update procedure planning apparatus100. The unit identified by i (e.g., the i-th unit) will be denoted as C[i]. In the drawings below, the value (i) for distinguishing the units may be omitted. First, the state element model simplification section101arbitrarily selects one unit based on the unit affiliation information φ. Since all units are the same, any unit may be selected. Here, it is assumed that C[i] is selected (step S200). Next, the state element model simplification section101extracts, from the input state element model (denoted by the sign M), all the state elements belonging to the unit C[i], and generates a partial state element model of the unit including all the dependencies among these state elements (step S201). This partial state element model is denoted by the sign μ. The state element model μ of unit C[i] is represented as shown inFIG.7. Next, the state element model simplification section101determines whether all the transitions included in the state element model μ are observable from the outside, and marks all the observable transitions with an “observable mark” (step S202). An observable transition of a state element (denoted as e) in the state element model meets one of the three criteria [OBS-1], [OBS-2], or [OBS-3] described below. [OBS-1] It has at least one dependency record regarding a state element that is not included in C[i]. [OBS-2] At least one state s, which is depended on by a state element not included in C[i], is a transition source or a transition destination. [OBS-3] At least one state s to which the transient requirement refers, is a transition source or a transition destination. Here, the fact that a transient requirement refers to a state s of e simply means that “the constraint expression contains e.s”. Due to the data structure of the state element model, it is easy to determine whether or not each of the three criteria [OBS-1], [OBS-2], and [OBS-3] are met by simply looking at each transition. In this example, in the constraint expression (Equation (1) above), reference is made to the state “Server <i>.Startup”.
Accordingly, the state element model simplification section101attaches observable marks to three transitions: “Stop→Startup”, which is a transition toward the “Startup” state of the state element “Server <i>”, and “Startup→Stop” and “Startup→Restart Required”, which are transitions away from the “Start” state in the opposite direction. In the following drawings, the transitions marked with the observable mark are represented by dashed arrows. An example of the state element model μ after step S202is shown inFIG.8. Next, the state element model simplification section101expands the state element model μ (seeFIG.8) into a state transition graph (this graph is denoted by the sign G) showing the transitions of the unit global state of the unit C[i] (step S203). As described above, the unit global state is information that indicates the state of each state element contained in a single unit. Then, the state transition graph G represents the transition of the unit global state. An example of the state transition graph G obtained in step S203is shown inFIG.9. When the state element model simplification section101develops the state transition graph G (seeFIG.9) from the state element model μ (seeFIG.8), the observable marks are inherited by the state transition graph G. Dependency records that refer to states inside the unit do not remain in the state transition graph G. Dependency records that refer to states outside the unit remain in the state transition graph G. The state element model simplification section101determines for each dependency record included in each label whether or not to leave the dependency record in the state transition graph G. If it is determined not to leave the dependency record for each dependency record included in one label, the label can be deleted. In the present example (seeFIG.8), since there is no dependency record referring to a state external to the unit, no label is left in the state transition graph G. Next to step S203, the state element model simplification section101generates a state transition graph (hereinafter, this graph is denoted by the sign G′) in which observable transitions (in other words, transitions marked as observable) are deleted from the state transition graph G. Then, the state element model simplification section101performs the procedure of decomposition to strongly connected components on the state transition graph G′ to obtain the strongly connected components SCC[1], . . . , SCC[m]. At this time, the state element model simplification section101maintains information indicating to which SCC[i] (i=1, . . . , m) each unit global state belongs. In addition, the state element model simplification section101sequentially registers in the internal route retention table T the routes (by internal transitions) between unit global states included in the same strongly connected component, which were discovered during the procedure of the decomposition to strongly connected components (step S204). The state transition graph G′ with the observable transitions removed is represented as shown inFIG.10. The state transition graph G′ shown inFIG.10contains 12 unit global states. In addition, in this example, six strongly connected components are obtained. The strongly connected components have the unit global state as an element. 
In the example shown inFIG.10, the first strongly connected component contains element C1; the second strongly connected component contains element C2; the third strongly connected component contains element C3; the fourth strongly connected component contains element C4; in C1to C4, the state of the state element “Server” is “Startup” in all cases. The fifth strongly connected component includes elements A1to A4, wherein the state of the state element “Server” is “Restart Required”; the sixth strongly connected component includes elements B1to B4, wherein the state of the state element “Server” is “Stop”. As described above, the state element model simplification section101registers the routes between unit global states included in the same strongly connected component in the internal route retention table T.FIG.11is a schematic diagram showing an example of the internal route retention table. The internal route retention table is obtained for each strongly connected component that contains a plurality of elements. The internal route retention table illustrated inFIG.11is a table obtained on the basis of strongly connected components containing elements (unit global states) A1to A4. For a strongly connected component that contains only one element, the internal route retention table need not be generated. As shown inFIG.11, the internal route retention table is a set of combinations of information corresponding to a “key” and information corresponding to a “value”. The information corresponding to a “key” is a combination of a starting point and an end point, and the information corresponding to a “value” is information indicating a specific route from the starting point to the end point. If there are multiple routes leading from the starting point to the end point, any one route from among the multiple routes can be selected as the “value”. For the strongly connected components including elements B1to B4(seeFIG.10), an internal route retention table is similarly generated. As described above, for the strongly connected component containing only one element, the internal route retention table need not be generated. Next to step S204, the state element model simplification section101constructs the simplified unit C[i]′ for the unit C[i] based on the strongly connected component SCC[i] (i=1, . . . , m) obtained in step S204. At this time, the state element model simplification section101summarizes the individual strongly connected components SCC[i] (i=1, . . . , m), respectively, into a single element. In this example, six strongly connected components are obtained, and the “states” that summarize the individual strongly connected components are denoted as S1 to S6. When summarizing the strongly connected components having only one unit global state, that one unit global state can be used as one “state”. In this example, the aforementioned elements C1, C2, C3, C4(seeFIG.10) are used as they are, S1, S2, S3, S4. In addition, the “state” that summarizes the strongly connected components (seeFIG.10) including the aforementioned elements A1to A4is designated as S6, and the “state” that summarizes the strongly connected components (seeFIG.10) including the aforementioned elements B1to B4is designated as S5. In the following description, any state among the states S1 to S6 is denoted by S[i], S[j]. 
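Steps S203and S204can also be illustrated with a small executable sketch. The following code is a hypothetical illustration (it assumes the networkx library and uses toy data in the spirit ofFIG.9andFIG.10, not the actual graphs of this example): it removes the observable transitions, decomposes the remaining graph into strongly connected components, and registers a route between every ordered pair of unit global states in the same component, keyed by (start, end) as in the internal route retention table ofFIG.11.

import itertools
import networkx as nx

# Unit global states as nodes; edges are transitions of the unit's state
# transition graph G, each flagged as observable or not (toy data).
edges = [
    ("A1", "A2", False), ("A2", "A3", False), ("A3", "A4", False), ("A4", "A1", False),
    ("B1", "B2", False), ("B2", "B1", False),
    ("A1", "C1", True),   # observable transition (e.g., enters an externally referenced state)
    ("C1", "B1", True),   # observable transition (e.g., leaves an externally referenced state)
]

G = nx.DiGraph()
G.add_edges_from((u, v) for u, v, _obs in edges)

# G' = G with the observable transitions removed (step S204).
G_prime = nx.DiGraph()
G_prime.add_nodes_from(G.nodes)
G_prime.add_edges_from((u, v) for u, v, obs in edges if not obs)

# Decompose G' into strongly connected components.
sccs = list(nx.strongly_connected_components(G_prime))

# Internal route retention table: for two different states of the same SCC,
# key (start, end) -> one concrete route using only internal transitions.
internal_route_table = {}
for scc in sccs:
    if len(scc) < 2:
        continue  # no table needed for single-element components
    for src, dst in itertools.permutations(scc, 2):
        internal_route_table[(src, dst)] = nx.shortest_path(G_prime, src, dst)

print(len(sccs))                              # 3 components in this toy graph
print(internal_route_table[("A1", "A3")])     # ['A1', 'A2', 'A3']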
The state element model simplification section101puts a transition t between two states S[i] and S[j] only when “a state transition can be made from any unit global state contained in the strongly connected component SCC[i] that is the basis of S[i] to any unit global state contained in the strongly connected component SCC[j] that is the basis of S[j]”. In addition, for the internal transition interpolation process described below, the transitions from S[i] to S[j] are explicitly tagged with the transition t to indicate from which unit global state in SCC[i] they can be executed. In other words, the unit global state included in the strongly connected component is assigned as a tag to the start and end points of the transition between states S1 to S6. Then, the state transition system S1 to S6 is treated as a single state element with an ID of “C[i]′”, which is the simplified unit C[i]′ for unit C[i] (Step S205). FIG.12is a schematic diagram of the simplified unit C[i]′ obtained in the above manner. This state element C[i]′ contains only six states S1 to S6. This is half the number of states (unit global states) contained in the state transition graph G shown inFIG.9. The simplified unit C[i]′ consists of one state element. Next to step S205, the state element model simplification section101duplicates the original state element model M, and constructs the simplified model R by replacing the part corresponding to the unit C[i] in the duplicated state element model with the simplified unit C[i]′ (step S206). This simplified model is denoted by the sign R. When making this replacement, if there is a reference to C[i] from outside C[i] or from a transient requirement, it is necessary to reconcile those references by reassigning them appropriately, which can be done as follows. If an external transition has a dependency on the state s of the state element e inside C[i], then the state element model simplification section101reads this as “the dependency on the global states S1, . . . , Sk such that the state of the state element e is s”, and further as “the dependency on the strongly connected components SCC[1], . . . , SCC[m] that contain at least one of S1, . . . , Sk in the simplified unit”, and rewrites the dependency record accordingly. A dependency on multiple states in C[i] can be resolved in the same way, by reading it as a dependency on global states, so that all dependencies on states inside C[i] are eliminated. On the other hand, if there is a reference e.s to the state s of the state element e inside C[i] from a transient requirement, the state element model simplification section101similarly reads this as “a reference to the global states S1, . . . , Sk such that the state of the state element e is s”, and further as “a reference to the strongly connected components SCC[1], . . . , SCC[m] that contain at least one of S1, . . . , Sk in the simplified unit”, and replaces the term corresponding to e.s with (C[i]′.SCC[1]+ . . . +C[i]′.SCC[m]). Since this replaced term is “1 when the state element e is in state s, 0 otherwise”, it is possible to impose the same constraint on the simplified unit as the condition indicated by the original transient requirement. In the example shown inFIG.12, a reference to a state “Server.Startup” in the transient requirement is converted to “C[i]′.S1+C[i]′.S2+C[i]′.S3+C[i]′.S4”.
If the state element “Server” inFIG.7is in the “Startup” state, it is in one of the states S1, S2, S3, or S4 (seeFIG.12), so this conversion allows the conditions to remain exactly the same as the original transient requirements after simplification. Finally, the state element model simplification section101replaces other units in the simplified model R with the simplified unit by the same conversion as the conversion from unit C[i] to the simplified unit C[i]′. The only difference between C[i]′ and the simplified unit for other units is the ID of the corresponding state element. By referring to the unit affiliation information φ and replacing it as appropriate, the conversion from the unit to the simplified unit can be performed without repeating steps S201to S205performed for unit C[i]. In addition, the internal route information replaced with the IDs of other units is added to the internal route retention table T. When the replacement of all units has been completed, the simplified model R and the internal route retention table T are output (steps S207to S209). In the above steps S202to S205, the state element model simplification section101replaces the unit with another structure such that the behaviors viewed from outside the unit are equivalent. More specifically, the state element model simplification section101converts the unit into a state transition system comprising global states viewed from the entire unit, and then summarizes the states that can come and go by transitions that are not observed from outside. In the above step S206and steps S207to S209, the state element model simplification section101simplifies the state element model M before simplification by replacing each unit with a different structure that is more simplified. Next, the automatic planning section102will be described. The automatic planning section102is capable of generating a route on the state element model that satisfies the transient requirements. The method by which the automatic planning section102generates the route may be a known method, for example, the method described in PTL 2. Since the simplified model generated by the state element model simplification section101has a form as a state element model in itself, the automatic planning section102can output a route on the simplified model. Next, the process of the internal transition completion process by the internal transition completion section103is described. The route output by the automatic planning section102is a route on the simplified model, and the internal transition completion section103converts this route into a route on the original state element model. It is assumed to denote each transition in the route on the simplified model by t[i] (i=1, . . . , n). In addition, as described below, the global state on the simplified model that is the “destination” of each transition t[i] is denoted by R[i].FIG.13is a schematic diagram showing an example of a route on the simplified model, represented using t[i] and R[i].FIG.14is a schematic diagram showing a specific example of a part of the internal route retention table. FIG.15is a flowchart showing details of the operation of the internal transition completion section103. 
The internal transition completion process of the internal transition completion section103operates by receiving the original state element model M, the simplified update route which is an output of the automatic planning section102, and the internal route retention table which is an output of the state element model simplification section101. Hereinafter, for the purpose of explanation, the simplified update route is referred to by “transitions t[1], t[2], . . . , t[n] used in the state transition”. In other words, the simplified update route is a route that reaches the target state of the simplified model by sequentially executing and transitioning transitions t[i] from the initial state. For the sake of explanation, the global state on the simplified model that is the “destination” of each transition t[i] is denoted by R[i]. First, the internal transition completion section103sets σ[1] as the initial state of the state element model M (step S300). Here, the initial state of the state element model M means a global state in which the current state points to the initial state in all state elements included in the state element model, respectively. Similarly, the target state of the state element model M means the global state in which the current state points to the target state, respectively, in all state elements included in the state element model. Next, the internal transition completion section103repeats steps S302to S304while sequentially extracting the transitions included in the simplified update route from t[1] to t[n]. The update route is sequentially assembled in this iterative process. The update route to be obtained is denoted by the sign π. In each iteration that operates by taking out t[i], σ[i] has already been identified by the previous iteration (Step S301). For each t[i] and σ[i], the internal transition completion section103determines whether or not t[i] is executable in σ[i], and if it is executable, determines whether or not the destination σ[i]$t[i]is included in R[i] (Step S301a). As described above, the destination when the transition t[i] is executed at σ[i] is denoted by the sign σ[i]$t[i]. Here, the global state of the simplified model represented by R[i] corresponds to the set of one or more global states of the original state element model. In other words, “σ[i]$t[i]is included in R[i]” shall be said to refer to the fact that σ[i]$t[i]is included in the set of global states of the original model corresponding to R[i]. If the transition t[i] is executable in the state σ[i] and σ[i]$t[i]is included in R[i] (True in step S301a), then the internal transition completion section103sets σ[i+1] to σ[i]$t[i], adds t[i] to the update route π, and then returns to the iterative procedure at the back to the beginning (step S302). On the other hand, if the transition t[i] is inexecutable in the state σ[i], or if σ[i]$t[i]is not included in R[i] even if it is executable (false in step S301a), the internal transition completion section103moves to a procedure of transitioning to state in which t[i] is executable and the state after execution is included in R, by supplementing the internal transition. In the following, the condition that “t[i] is executable and the state after execution is included in R [i]” for the global state σ will be described as “t[i] is regularly executable in σ”. Since the simplified update route is the correct route on the simplified model, the transition t[i] is executable on the simplified model. 
Furthermore, the transition t[i] is tagged with a tag that indicates in which unit global state it is executable. Let the unit global state indicated by this tag be σp_tgt. If the current unit global state of the unit that contained the transition t[i] (denoted by C[j]) is consistent with the unit global state σp_tgt above, it can be said that t[i] is executable and its state after execution is contained in R[i]. Therefore, in order to transition from σ[i] to a state in which t[i] is regularly executable, it is only necessary to transition the part of σ[i] corresponding to C[j] to the state represented by σp_tgt. Accordingly, the internal transition completion section103first extracts from σ[i] the part σp_src corresponding to C[j], retrieves “the route by the internal transitions from σp_src to σp_tgt” from the internal route retention table, and adds all the transitions of the retrieved route to π (step S303). Thereafter, the internal transition completion section103sets σ[i+1] to the state reached by executing t[i] from the state σ_tgt, where σ_tgt is the state obtained by replacing the part of σ[i] corresponding to C[j] with the state represented by σp_tgt. The internal transition completion section103adds t[i] to the update route π and then returns to the beginning of the iterative procedure (step S304). After completing the iterative procedure (step S305), the internal transition completion section103checks whether the part corresponding to each unit C[i] (i=1, . . . , n) of the global state σ[n+1] is consistent with the target state. If there is a mismatch, the routes that fill the difference between σ[n+1] and the target state are taken from the internal route retention table in the same manner as in step S303, and all of them are added to π (step S306). After all the processing is done, the update route π is the correct update procedure on the state element model M, so it is output. A specific example of the operation of the internal transition completion section103will be described using the drawings.FIG.16is an example of a problem in which two of the units shown inFIG.7exist and in which the transient requirement requires one of the two servers to be in a startup state. To distinguish between the two units, the respective state elements are distinguished as Server <1>, Server <2>, and states as S1 <1>, S1 <2>, and the like. The unit in this problem is similar to the unit shown inFIG.7and can be simplified to the state elements (simplified units) shown inFIG.12. The simplified units obtained from the two units shown inFIG.16are distinguished as C[1]′ and C[2]′. The aforementioned details shown inFIG.13are consistent with the specific examples shown below. When the problem is solved on this simplified model, the route that results in the solution is t[1], t[2], t[3], and t[4] shown inFIG.13. t[1] corresponds to “Server <1>: Startup→Stop”. t[2] corresponds to “Server <1>: Stop→Startup”. t[3] corresponds to “Server <2>: Startup→Stop”. t[4] corresponds to “Server <2>: Stop→Startup”. Also, let R[i] be the destination state of t[i]. As shown inFIG.13, R[1] becomes “C[1]′: S5, C[2]′: S2”; R[2] becomes “C[1]′: S3, C[2]′: S2”; R[3] becomes “C[1]′: S3, C[2]′: S5”; R[4] becomes “C[1]′: S3, C[2]′: S3”.
First, σ[1] is set to the initial state of M “Server <1>: Startup, configuration file 1 <1>: old, configuration file 2 <1>: old, Server <2>: Startup, configuration file 1 <2>: old, configuration file 2 <2>: old”, and the sought update route π is set to an empty list. The σ[1]$t[1]obtained by applying t[1] is “Server <1>: Stop, configuration file 1 <1>: old, configuration file 2 <1>: old, Server <2>: Startup, configuration file 1 <2>: old, configuration file 2 <2>: old”. Since this is included in R[1], σ[2] is set to σ[1]$t[1]and t[1] is added to π (seeFIG.17). Next, t[2] can be executed in σ[2]. However, the destination σ[2]$t[2]is “Server <1>: Startup, configuration file 1 <1>: old, configuration file 2 <1>: old, Server <2>: Startup, configuration file 1 <2>: old, configuration file 2 <2>: old”, which is not included in R[2]. Therefore, the internal transition completion section103refers to the tag of t[2]. Here, the unit global state of C′[1] that can be assigned as the tag of t[2] is “Server <1>: Stop, configuration file 1 <1>: new, configuration file 2 <1>: new”. Therefore, in the internal route retention table for C′[1], when the route from “Server <1>: Stop, configuration file 1 <1>: old, configuration file 2 <1>: old” to “Server <1>: Stop, configuration file 1 <1>: new, configuration file 2 <1>: new” is queried, the transitions “configuration file 1 <1>: old to new” and “configuration file 2 <1>: old to new” are obtained, and the internal transition completion section103adds these to π. Then, the internal transition completion section103sets “Server <1>: Stop, configuration file 1 <1>: new, configuration file 2 <1>: new, Server <2>: Startup, configuration file 1 <2>: old, configuration file 2 <2>: old” to σ[3] and adds t[2] to π. In the following, σ[4] and σ[5] can be obtained in the same way. σ[5] is consistent with the target state of the original model. Therefore, the above procedure can finally obtain the route π on the state element model M as shown inFIGS.18and19. In the above steps S301to S305, the internal transition completion section103completes, for the route on the simplified state element model, the internal transitions necessary to execute the route on the state element model before the simplification, thereby converting the route into a route that can be executed on the state element model before the simplification. More specifically, the internal transition completion section103checks whether all the transitions included in the route on the simplified state element model can be executed on the state element model before the simplification, and for the transitions that cannot be executed, it completes the route to the executable state corresponding to the said transition by referring to the internal route retention table. [Description of Effect] The system update procedure planning apparatus100of this example embodiment applies simplification based on unit configuration information to a state element model representing a system update, and solves an automatic planning problem on the simplified state element model. Then, the system update procedure planning apparatus100converts the solution to the solution of the original problem with little effort and outputs it. This realizes efficient automatic generation of update procedures for a large-scale problem that involves updating a large number of similar configurations. Therefore, an explosive increase in the amount of computation can be prevented when solving the automatic planning problem in system update.
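As a concrete supplement to the completion procedure of steps S300to S306, the following hypothetical Python sketch walks a simplified update route and, whenever a transition is not regularly executable from the current global state, splices in the internal route retrieved from the internal route retention table. The data structures and names are illustrative assumptions, and the executability test is reduced to comparing the unit global state with the transition's tag, which is the criterion described above.

def unit_state(sigma, unit_elements):
    """Project the global state sigma onto one unit's elements (the unit global state)."""
    return tuple(sorted((e, sigma[e]) for e in unit_elements))


def complete_route(sigma, simplified_route, units, route_table):
    """Convert a route on the simplified model into a route on the original model.

    simplified_route: list of transitions, each a dict with
      'unit'  : ID of the unit containing the transition,
      'tag'   : unit global state (dict element -> state) from which it is executable,
      'effect': dict element -> new state applied by the transition.
    units: dict unit ID -> list of element IDs belonging to that unit.
    route_table: dict (unit ID, src unit state, dst unit state) -> list of internal
      transitions, i.e., a simplified stand-in for the internal route retention table.
    """
    sigma = dict(sigma)
    pi = []
    for t in simplified_route:
        elems = units[t["unit"]]
        tgt = tuple(sorted(t["tag"].items()))
        src = unit_state(sigma, elems)
        if src != tgt:
            # Not regularly executable here: splice in the internal route that moves
            # the unit from its current unit global state to the tagged state (step S303).
            for internal in route_table[(t["unit"], src, tgt)]:
                sigma.update(internal["effect"])
                pi.append(internal)
        sigma.update(t["effect"])   # execute t[i] itself (steps S302/S304)
        pi.append(t)
    return pi, sigma


# Toy usage: one unit whose configuration files must be updated before the server restarts.
units = {"C1": ["Server <1>", "conf1 <1>", "conf2 <1>"]}
sigma0 = {"Server <1>": "Stop", "conf1 <1>": "old", "conf2 <1>": "old"}
t2 = {"unit": "C1",
      "tag": {"Server <1>": "Stop", "conf1 <1>": "new", "conf2 <1>": "new"},
      "effect": {"Server <1>": "Startup"}}
route_table = {
    ("C1",
     tuple(sorted(sigma0.items())),
     tuple(sorted(t2["tag"].items()))): [
        {"effect": {"conf1 <1>": "new"}},
        {"effect": {"conf2 <1>": "new"}},
    ],
}
pi, final = complete_route(sigma0, [t2], units, route_table)
print(len(pi), final["Server <1>"])  # 3 transitions in total; Server <1> ends up "Startup"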
The state element model simplification section101realizes a simplification process of a state element model representing a system update based on the external observability of transitions in the model. Also, by using the strong connectivity due to unobservable transitions, it is guaranteed that the solution of the model after simplification can be restored to the solution of the original model by appropriately completing the routes omitted by the simplification. The internal transition completion section103converts the solution on the simplified model to the solution of the original problem in a simple way, without procedure generation or other search processes, by using the information obtained during the model simplification process of the state element model simplification section101. Example Embodiment 2 [Description of the Configuration] A second example embodiment of the present invention will be described below with reference to the drawings. The second example embodiment is one in which some changes are made to the configuration of the first example embodiment so as to add a unit configuration detection function that can detect, from a state element model, the unit affiliation information that was externally input to the system update procedure planning apparatus100in the first example embodiment. FIG.20is a block diagram showing an example of a system update procedure planning apparatus of a second example embodiment of the present invention. Elements similar to those provided by the system update procedure planning apparatus100in the first example embodiment are marked with the same signs as inFIG.5, and explanations of these elements are omitted. The system update procedure planning apparatus400of the second example embodiment includes a state element model simplification section101, an automatic planning section102, an internal transition completion section103, and a unit configuration detection section401. The state element model simplification section101, the automatic planning section102, and the internal transition completion section103are the same as the state element model simplification section101, the automatic planning section102, and the internal transition completion section103in the first example embodiment. However, whereas in the first example embodiment the input to the state element model simplification section101is a state element model and unit affiliation information given externally, in the second example embodiment the input to the state element model simplification section101is a state element model given externally and unit affiliation information output by the unit configuration detection section401. In this example embodiment, the only input to the system update procedure planning apparatus400is the state element model. The unit configuration detection section401takes a state element model as an input, calculates unit affiliation information on the given state element model, and outputs the information. As already described, the unit affiliation information can be said to be information indicating to which of a plurality of units each state element in the state element model belongs. The unit configuration detection section401, the state element model simplification section101, the automatic planning section102, and the internal transition completion section103are realized, for example, by a CPU of a computer that operates according to a system update procedure planning program.
For example, the CPU may read the system update procedure planning program from a program recording medium such as a program storage device of the computer, and operate as the unit configuration detection section401, the state element model simplification section101, the automatic planning section102, and the internal transition completion section103according to the program. [Operation Description] Since the operations of the state element model simplification section101, the automatic planning section102, and the internal transition completion section103are the same as those in the first example embodiment, they will not be described here. In the following, the processing process of the unit configuration detection section401is described.FIG.21is a flowchart showing an example of the processing process of the unit configuration detection section401. The unit configuration detection section401operates by receiving a state element model given from outside as an input. First, the unit configuration detection section401considers the state element model to be a graph such that “the vertices are state elements and the edges are dependency records”. Further, the unit configuration detection section401colors each vertex by “the form of the internal state transition system and the reference from the transient requirement” and colors the edges by “the transition to which the dependency record is attached and the value of the dependency record itself' to construct a coloring graph G (step S500). Here, the coloring graph obtained in step S500is denoted by the sign G. The unit configuration detection section401performs a stable subdivision (stable refinement, or naive vertex refinement) procedure for colors of a coloring graph G. Specifically, the unit configuration detection section401repeats a procedure called a subdivision round (steps S501to S503) until no further subdivision of colors (nodes that were originally the same color become different colors after the subdivision round) occurs. The above “subdivision round” is defined as “a subdivision round of generating a label that summarizes the color c of a vertex and the adjacent vertex colors c[1], c[2], . . . , c[n], and reassigning a new color to each vertex based on the label”. The unit configuration detection section401selects one color in the coloring graph G that two or more state elements have in common (step S504). The unit configuration detection section401defines the state elements corresponding to the selected colors as initial unit candidates C[1], . . . , C[n], respectively. In subsequent procedures, the unit configuration detection section401aims to expand these units as much as possible by adding another state element to them. The unit configuration detection section401examines, for the current unit candidates C[1], . . . , C[n], elements to which a state element contained therein refers or are referred to by a dependency record (hereinafter simply referred to as an “related element”). When all of them have been examined, if there is an element that is related to a unit C[i] but not to any other unit, the unit configuration detection section401adds the element to the unit C[i] (steps S505to S507). At the end of step S506, if even one element has been added to the unit candidate, the unit configuration detection section401repeats step S506once more. Otherwise, the unit configuration information is generated from the current unit candidates and output. There are multiple colors that can be selected in step S504. 
However, no matter which color is selected from the multiple candidates, the subsequent procedures (including later procedures such as the graph simplification process and automatic planning) will operate correctly. Therefore, there is no particular restriction on how to select a color here. Alternatively, the system update procedure planning apparatus 400 may try each of the plurality of colors, confirm the effect of the simplification for each, and have the unit configuration detection section 401 finally select the color that yields the greatest simplification. In the above steps S500 to S507, the unit configuration detection section 401 detects two or more structurally identical subconfigurations in the state element model, interprets the two or more structurally identical subconfigurations as units, and then outputs, as the unit affiliation information, which unit each state element belongs to. More specifically, the unit configuration detection section 401 considers the state element model as a coloring graph in step S500, and detects the state elements which can be regarded as structurally identical by obtaining a stable subdivision in steps S501 to S503. Then, in steps S505 to S507, the unit configuration detection section 401 determines the unit to which each state element can be regarded as belonging based on the detection results. [Description of Effect] The present example embodiment is similar to the first example embodiment except that the unit configuration detection section 401 generates the unit affiliation information. Accordingly, the same effect as that of the first example embodiment is obtained. The unit configuration detection section 401 detects a plurality of "unit" structures having a similar configuration from the state element model and outputs a correspondence between the structures and the state elements. This allows the user to obtain the information necessary to simplify the state element model without explicitly specifying the information of the units. FIG. 22 is a schematic block diagram showing an example configuration of a computer for a system update procedure planning apparatus of each example embodiment of the present invention. The computer 1000 has a CPU 1001, a main memory device 1002, an auxiliary memory device 1003, and an interface 1004. The system update procedure planning apparatus of each example embodiment of the present invention is realized by the computer 1000. The operation of the system update procedure planning apparatus is stored in the auxiliary memory device 1003 in the form of a system update procedure planning program. The CPU 1001 reads the system update procedure planning program from the auxiliary memory device 1003, expands it to the main memory device 1002, and executes the processes described in the above example embodiments according to the system update procedure planning program. The auxiliary memory device 1003 is an example of a non-transitory tangible medium. Other examples of a non-transitory tangible medium include a magnetic disk, a magneto-optical disk, a CD-ROM (Compact Disk Read Only Memory), a DVD-ROM (Digital Versatile Disk Read Only Memory), semiconductor memory, and the like. When the program is delivered to the computer 1000 over a communication line, the computer 1000 receiving the delivery may expand the program into the main memory device 1002 and execute the processing described in each of the above example embodiments according to the program. 
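As an illustration of the stable subdivision (color refinement) of steps S500 to S503 and the selection of a shared color in step S504, the following is a minimal sketch; the graph representation (a vertex-color map plus an adjacency list with edge colors) and the helper names are assumptions made for this sketch, not part of the description:

from collections import defaultdict

def stable_subdivision(vertex_color, adjacency):
    """Refine vertex colors until no subdivision round splits any color class.
    vertex_color: dict mapping vertex -> initial color (step S500)
    adjacency:    dict mapping vertex -> list of (neighbor, edge_color) pairs
    Returns the stable coloring (steps S501-S503 analog)."""
    colors = dict(vertex_color)
    while True:
        # One subdivision round: summarize each vertex's own color together
        # with the sorted list of (edge color, neighbor color) pairs around it.
        labels = {}
        for v in colors:
            neighborhood = sorted((ec, colors[u]) for u, ec in adjacency.get(v, []))
            labels[v] = (colors[v], tuple(neighborhood))
        # Reassign a new color per distinct label.
        new_color_of = {label: i for i, label in enumerate(sorted(set(labels.values()), key=repr))}
        new_colors = {v: new_color_of[labels[v]] for v in colors}
        if len(set(new_colors.values())) == len(set(colors.values())):
            return new_colors      # no further subdivision occurred
        colors = new_colors

def colors_shared_by_two_or_more(stable_colors):
    """Colors held by two or more state elements (candidates for step S504).
    Each state element with the selected color becomes an initial unit candidate."""
    groups = defaultdict(list)
    for v, c in stable_colors.items():
        groups[c].append(v)
    return {c: members for c, members in groups.items() if len(members) >= 2}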
Some or all of each of the components may be realized by general purpose or dedicated circuitry, processors, or combinations thereof. These may comprise a single chip or a plurality of chips connected via a bus. Some or all of each component may be realized by a combination of the above-described circuitry, etc. and a program. When some or all of each component is realized by a plurality of information processing apparatuses, circuits, or the like, the plurality of information processing apparatuses, circuits, or the like may be centrally located or distributed. For example, the information processing apparatuses, circuits, and the like may be implemented as a client-and-server system, a cloud computing system, and the like, each of which is connected via a communication network. Next, an overview of the present invention will be described.FIG.23is a block diagram showing an overview of a system update procedure planning apparatus of an example embodiment of the present invention. The system update procedure planning apparatus of an example embodiment of the present invention outputs an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements, unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs. The system update procedure planning apparatus includes a state element model simplification section101, an automatic planning section102, and an internal transition completion section103. The state element model simplification section101outputs a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model. The automatic planning section102outputs an executable route, for the simplified state element model, from an initial state thereof to a target state. The internal transition completion section103converts the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. According to such a configuration, an explosive increase in the amount of computation can be prevented when solving an automatic planning problem in system update. FIG.24is a block diagram showing another example of an overview of a system update procedure planning apparatus of an example embodiment of the present invention. The system update procedure planning apparatus of this example outputs an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements. The system update procedure planning apparatus in this example includes a unit configuration detection section401, a state element model simplification section101, an automatic planning section102, and an internal transition completion section103. The unit configuration detection section401takes the state element model as input and outputs unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs. 
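A minimal sketch of how the components shown in FIG. 23 and FIG. 24 could be chained together is given below; the callables stand in for the processing of the respective sections, and their signatures are assumptions made only for illustration:

def plan_system_update(state_element_model,
                       detect_units,        # unit configuration detection section 401
                       simplify_model,      # state element model simplification section 101
                       solve_planning,      # automatic planning section 102
                       complete_route,      # internal transition completion section 103
                       unit_affiliation=None):
    """Illustrative end-to-end flow of FIG. 23 / FIG. 24 under assumed signatures."""
    # Second example embodiment (FIG. 24): detect unit affiliation from the model.
    if unit_affiliation is None:
        unit_affiliation = detect_units(state_element_model)

    # Simplify the model and keep the internal route retention table.
    simplified_model, internal_route_table = simplify_model(state_element_model,
                                                            unit_affiliation)

    # Solve the automatic planning problem on the simplified model.
    simplified_route = solve_planning(simplified_model)

    # Convert the route back into one executable on the original model.
    return complete_route(simplified_route, state_element_model, internal_route_table)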
The state element model simplification section101, the automatic planning section102and the internal transition completion section103are the same as those elements shown inFIG.23. The example embodiments of the invention described above may also be described as in the following supplementary notes, but are not limited to (Supplementary note 1) A system update procedure planning apparatus for outputting an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements, unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs,the system update procedure planning apparatus comprising:a state element model simplification section that outputs a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model;an automatic planning section that outputs an executable route, for the simplified state element model, from an initial state thereof to a target state; andan internal transition completion section that converts the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. (Supplementary note 2) A system update procedure planning apparatus for outputting an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements,the system update procedure planning apparatus comprising:a unit configuration detection section that takes the state element model as input and outputs unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs,a state element model simplification section that outputs a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model;an automatic planning section that outputs an executable route, for the simplified state element model, from an initial state thereof to a target state; andan internal transition completion section that converts the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. (Supplementary note 3) The system update procedure planning apparatus according to supplementary note 1 or 2,wherein the state element model simplification sectionsimplifies the state element model before simplification by replacing each unit with a different, more simplified structure based on the unit affiliation information. (Supplementary note 4) The system update procedure planning apparatus according to supplementary note 3,wherein the state element model simplification sectionsimplifies the state element model before simplification by replacing each unit with a different structure such that a behavior viewed from outside the unit is equivalent. 
(Supplementary note 5) The system update procedure planning apparatus according to supplementary note 4,wherein the state element model simplification sectionsimplifies the state element model by converting each unit into a state transition system comprising global states viewed from entire unit, and then summarizing states that can come and go by transitions that are not observed from outside. (Supplementary note 6) The system update procedure planning apparatus according to any one of supplementary notes 1 to 5,wherein the internal transition completion sectioncompletes, for the route on the simplified state element model, internal transitions necessary to execute the route on the state element model before simplification, thereby converts the route into the executable route on the state element model before simplification. (Supplementary note 7) The system update procedure planning apparatus according to supplementary note 6,wherein the state element model simplification sectiongenerates internal route retention table that holds information on some of the simplified state transitions in addition to the simplified state element model, andthe internal transition completion sectionchecks whether all the transitions included in the route on the simplified state element model can be executed on the state element model before simplification, and completes, for the transitions that cannot be executed, route to the executable state corresponding to the transition by referring the internal route retention table, thereby converts the route into the executable route on the state element model before simplification. (Supplementary note 8) The system update procedure planning apparatus according to supplementary note 2,wherein the unit configuration detection sectiondetects two or more structurally identical subconfigurations in the state element model, interprets the two or more structurally identical subconfigurations as units, and outputs as the unit affiliation information which unit each state element belongs to. (Supplementary note 9) The system update procedure planning apparatus according to supplementary note 8,wherein the unit configuration detection sectionconsiders the state element model as a coloring graph, detects the state elements which can be regarded as structurally identical by obtaining stable subdivision, and determines a unit to which each state element can be regarded as belonging based on detection results. (Supplementary note 10) A system update procedure planning method for outputting an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements, unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs,the system update procedure planning method comprising:outputting a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model;outputting an executable route, for the simplified state element model, from an initial state thereof to a target state; andconverting the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. 
(Supplementary note 11) A system update procedure planning method for outputting an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements,the system update procedure planning method comprising:taking the state element model as input and outputting unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs;outputting a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model;outputting an executable route, for the simplified state element model, from an initial state thereof to a target state; andconverting the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. (Supplementary note 12) A computer-readable recording medium in which a system update procedure planning program is recorded, the system update procedure planning program being executed on a computer for outputting an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements, unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs,the system update procedure planning program causing the computer to perform:a state element model simplification process of outputting a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model;an automatic planning process of outputting an executable route, for the simplified state element model, from an initial state thereof to a target state; andan internal transition completion process of converting the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. 
(Supplementary note 13) A computer-readable recording medium in which a system update procedure planning program is recorded, the system update procedure planning program being executed on a computer for outputting an executable route on a state element model as a system update procedure, based on the state element model describing a system update by a plurality of state elements and a constraint expression specified with the plurality of state elements,the system update procedure planning program causing the computer to perform:a unit configuration detection process of taking the state element model as input and outputting unit affiliation information indicating to which of a plurality of units each state element in the state element model belongs;a state element model simplification process of outputting a simplified state element model by applying a simplification transformation based on the unit affiliation information to the state element model;an automatic planning process of outputting an executable route, for the simplified state element model, from an initial state thereof to a target state; andan internal transition completion process of converting the executable route from the initial state to the target state on the simplified state element model into the executable route from an initial state to a target state on the state element model before simplification. While the present invention has been described with reference to the example embodiments, the present invention is not limited to the aforementioned example embodiments. Various changes understandable to those skilled in the art within the scope of the present invention can be made to the structures and details of the present invention. This application claims priority based on Japanese patent application 2019-102388 filed on May 31, 2019, the entire disclosure of which is hereby incorporated herein. INDUSTRIAL APPLICABILITY The present invention is suitably applied to automatic planning problems in system update. REFERENCE SIGNS LIST 100,400System update procedure planning apparatus101State element model simplification section102Automatic planning section103Internal transition completion section401Unit configuration detection section
11861354
DETAILED DESCRIPTION OF EMBODIMENTS An embodiment of a technique of the present disclosure will be described in detail with reference to the drawings. Overall Configuration of Update Control System The configuration of an update control system in the present disclosure will be described.FIG.1is a block diagram showing the overall configuration of the update control system. The update control system includes a center1, a maintenance shop2, and a vehicle3. The center1is a server that manages a software update on an electronic control unit in the vehicle3(to be more exact, the center1is a center system including such a server, but will be described as a server for convenience of description). The center1can communicate wirelessly with the vehicle3. The center1can communicate at least by wire (Internet, dedicated line, etc.) with the maintenance shop2(to be more exact, a predetermined server, information processing terminal, etc. of the maintenance shop2). The maintenance shop2is a facility for servicing the vehicle3. The maintenance shop2has an in-house network to which an information processing terminal for maintenance work (hereinafter referred to as “maintenance work terminal”), not shown, is connected. The maintenance work terminal can communicate with the center1over the in-house network. The vehicle3brought to the maintenance shop2can be connected by wire to the in-house network or the maintenance work terminal to perform an update process of updating software on the electronic control unit of the vehicle3, based on a predetermined operation by a service technician. Hereinafter, the update process that is performed in the wired environment at the maintenance shop2will be referred to as “wired update.” In the present embodiment, the update process includes at least a sequence of “inquiring whether there is an update,” a sequence of “downloading a distribution package,” and a sequence of “writing update data.” The vehicle3can communicate wirelessly with the center1. The vehicle3can perform the update process by using the wireless communication. Hereinafter, the update process using wireless communication between the vehicle3and the center1will be referred to as “wireless update” (a service that provides such a wireless update will be referred to as “over-the-air (OTA) service”). It is also possible to perform the wired update when the vehicle3is brought to the maintenance shop2. In the following description, the state in which the vehicle3is not connected by wire and cannot perform the wired update but can perform the wireless update will be referred to as “first state.” The state in which the vehicle3is connected by wire at the maintenance shop2and can perform the wired update will be referred to as “second state.” Configuration of Center1 FIG.2is a block diagram showing a schematic configuration of the center1. As shown inFIG.2, the center1includes a processor11, a random access memory (RAM)12, a storage device13, and a communication device14. The storage device13includes a readable and writable storage medium such as a hard disk drive (HDD) or a solid state drive (SSD). The storage device13stores various kinds of programs and data necessary for processes according to the present embodiment. In the center1, the processor11performs a predetermined control process by executing programs read from the storage device13using the RAM12as a work area. The communication device14communicates with the maintenance shop2and the vehicle3over a network. 
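The two connection states and the three update sequences introduced above can be captured compactly; the following enums and the helper function are an illustrative assumption rather than part of the disclosure (the helper anticipates the state determination in step S2 described later):

from enum import Enum, auto

class VehicleState(Enum):
    # "First state": no wired connection; only the wireless (OTA) update is possible.
    FIRST_STATE_WIRELESS = auto()
    # "Second state": connected by wire at the maintenance shop; the wired update is possible.
    SECOND_STATE_WIRED = auto()

class UpdateSequence(Enum):
    # The update process includes at least these three sequences.
    INQUIRE_WHETHER_UPDATE_EXISTS = auto()
    DOWNLOAD_DISTRIBUTION_PACKAGE = auto()
    WRITE_UPDATE_DATA = auto()

def determine_vehicle_state(cable_connected: bool, diagnostic_command_received: bool) -> VehicleState:
    """Illustrative state determination: a wired connection or a diagnostic
    communication command implies the second state; otherwise the first state."""
    if cable_connected or diagnostic_command_received:
        return VehicleState.SECOND_STATE_WIRED
    return VehicleState.FIRST_STATE_WIRELESS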
Configuration of Vehicle3 FIG.3is a block diagram showing a schematic configuration of the vehicle3. As shown inFIG.3, the vehicle3includes at least an in-vehicle control device31, a communication module32, a plurality of electronic control units (ECUs)33ato33d, and a diagnostic connector36. The in-vehicle control device31is connected to the communication module32, the electronic control units33ato33d, and the diagnostic connector36via a bus35. The in-vehicle control device31can communicate wirelessly with the center1via the communication module32. The in-vehicle control device31sends and receives predetermined data to and from the center1to perform control for a software update process on each electronic control unit33ato33d. That is, the in-vehicle control device31has a software update function using the wireless update. The communication module32is communication equipment that can be connected wirelessly to a predetermined network (telephone network, Internet network, etc.). The in-vehicle control device31can also perform the wired update via the diagnostic connector36when the vehicle3is brought to the maintenance shop2. That is, the in-vehicle control device31has a software update function using the wired update. The in-vehicle control device31includes a microcomputer45and a communication device46. The microcomputer45includes a processor41, a RAM42, a read-only memory (ROM)43, and a storage device44. In the in-vehicle control device31, the processor41of the microcomputer45performs a predetermined process by executing programs read from the ROM43by using the RAM42as a work area. Specifically, the processor41performs a process related to the software update function using the wireless update or a process related to the software update function using the wired update. The communication device46communicates with the communication module32, the electronic control units33ato33d, and the diagnostic connector36(the maintenance work terminal connected by wire to the diagnostic connector36) via the bus35. The electronic control units33ato33dcontrol the operation of various parts of the vehicle3. The number of electronic control units shown inFIG.3is by way of example. Functional Block Diagram of Center1 FIG.4is a functional block diagram of the center1. The center1includes a storage unit16, a communication unit17, and a control unit18. The communication unit17and the control unit18are implemented by the processor11shown inFIG.2executing programs stored in the storage device13by using the RAM12. The storage unit16is implemented by the storage device13shown inFIG.2. The storage unit16stores programs and data to be used in the processes according to the present embodiment. The control unit18sends and receives predetermined data to and from the in-vehicle control device31via the communication unit17to perform a process for the wireless update. The control unit18sends and receives predetermined data to and from the maintenance shop2via the communication unit17to perform a process for the wired update. Functional Block Diagram of In-Vehicle Control Device31 FIG.5is a functional block diagram of the in-vehicle control device31shown inFIG.3. The in-vehicle control device31includes a storage unit47, a communication unit48, and a control unit49. The storage unit47is implemented by the storage device44shown inFIG.3. The communication unit48and the control unit49are implemented by the processor41shown inFIG.3executing programs stored in the ROM43by using the RAM42. 
The storage unit47stores various kinds of programs and data for execution of a software update process. The control unit49includes a wired update control unit491and an OTA master492. The wired update control unit491performs control for the wired update. The OTA master492performs control for the wireless update. In the second state, the wired update control unit491performs the control for the wired update based on instructions (diagnostic communication command etc.) from the maintenance work terminal. The forms of wired connection of the vehicle3at the maintenance shop2will be described. In the present embodiment, any form of connection may be used as long as the vehicle3(in-vehicle control device31) is connected by wire at the maintenance shop2and can communicate with the center1. For example, the following forms of connection are possible. The in-vehicle control device31may be connected to the in-house network of the maintenance shop2via the diagnostic connector36. In this case, the control unit49receives instructions from the maintenance work terminal connected to the in-house network. Based on the received instructions, the control unit49communicates with the center1via the in-house network and performs a process for the wired update. For example, based on the instructions from the maintenance work terminal, the control unit49sends data for inquiring of the center1whether there is an update (hereinafter referred to as “update inquiry”). When there is an update, the control unit49downloads a distribution package (collection of update data, will be described in detail later) from the center1. Based on the instructions from the maintenance work terminal, the control unit49also performs control to update software on a predetermined electronic control unit (one or more electronic control units out of the electronic control units33ato33d) using the downloaded distribution package. Alternatively, the maintenance work terminal may be directly connected to the in-vehicle control device31via the diagnostic connector36. In this case, for example, mainly the maintenance work terminal proceeds with the update process. For example, the following processing is possible. Based on instructions from the maintenance work terminal, the control unit49sends information on the vehicle3such as configuration information of the vehicle3to the maintenance work terminal. The maintenance work terminal (instead of the control unit49) makes an update inquiry together with the information to the center1. When there is an update, the maintenance work terminal downloads a distribution package and sends the distribution package to the storage unit47. Based on the instructions from the maintenance work terminal, the control unit49performs a process of updating software on a predetermined electronic control unit. Alternatively, when the maintenance work terminal is directly connected to the in-vehicle control device31via the diagnostic connector36, the maintenance work terminal may serve as a repeater. That is, the maintenance work terminal connected by wire may relay communication between the control unit49and the center1. The form of wired connection of the vehicle3at the maintenance shop2is not limited to the above, and the vehicle3may be connected by wire in any form at the maintenance shop2as long as the control unit49can perform the wired update in the second state. 
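The three forms of wired connection described above can be summarized in a small sketch; the enum and the role summary below are illustrative assumptions only:

from enum import Enum, auto

class WiredConnectionForm(Enum):
    IN_HOUSE_NETWORK = auto()      # in-vehicle control device connected to the shop's in-house network
    TERMINAL_DIRECT = auto()       # maintenance work terminal drives the update process itself
    TERMINAL_AS_RELAY = auto()     # terminal only relays traffic between the control unit and the center

def wired_update_roles(form: WiredConnectionForm) -> dict:
    """Who makes the inquiry and download, and where the package ends up, in each form.
    This role split is a simplified reading of the description, for illustration only."""
    if form is WiredConnectionForm.IN_HOUSE_NETWORK:
        return {"inquiry_and_download": "in-vehicle control unit (via the in-house network)",
                "package_storage": "storage unit 47"}
    if form is WiredConnectionForm.TERMINAL_DIRECT:
        return {"inquiry_and_download": "maintenance work terminal",
                "package_storage": "storage unit 47 (package transferred by the terminal)"}
    return {"inquiry_and_download": "in-vehicle control unit (terminal relays the traffic)",
            "package_storage": "storage unit 47"}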
In the first state, the OTA master492communicates wirelessly with the center1via the communication unit48and performs various controls for the wireless update. Specifically, the control unit49makes an update inquiry to the center1. When there is an update, the control unit49downloads a distribution package for the update from the center1by using wireless communication. The OTA master492updates software on an electronic control unit based on the downloaded distribution package. In the present embodiment, the following control process is performed using the above configuration. When the state of the vehicle3is switched between the first state and the second state after the start of an update process for applying a predetermined distribution package and before completion of the update process, the vehicle3carries over the progress of the update process before the switching of the state and continues the update process in the switched state. The processes according to the present embodiment will be described in detail. Data for Use in Center1 First, data to be used in the processes of the center1will be described.FIG.6is a memory map showing an example of data stored in the storage unit16of the center1. The storage unit16stores an update control program101, a vehicle database102, and a distribution package103. Although not shown in the figure, the storage unit16also stores various kinds of data required for an update control process as appropriate. The update control program101is a program for controlling a process for software update according to the present embodiment in the center1. The vehicle database102is a database of vehicles3whose update control is to be managed by the center1.FIG.7shows an example of a data configuration of the vehicle database102. The vehicle database102is a database including at least the following items: a vehicle identification number (VIN)111, an update history112, and an update progress113. The vehicle identification number111is a unique number identifying each vehicle3. The update history112is a record of a history of update processes that have been completed for a predetermined vehicle3. The update history112is used to determine whether there is an update to be applied to the predetermined vehicle3. The update progress113is data showing how much of an update process for applying a predetermined distribution package103to the predetermined vehicle3(that is, a single update process) has been completed. That is, the update progress113is data showing the progress of an update process of applying a certain distribution package103to the predetermined vehicle3after the start and before completion of the update process. The distribution package103is a collection of update data for updating software on an electronic control unit. The distribution package103may include a plurality of pieces of update data for updating software on one or more electronic control units. In other words, it can be said that the distribution package103is a package of one or more pieces of update data. For example, when there are three electronic control units whose software is to be updated in a single update process (hereinafter referred to as “target ECUs”), update data for each of the target ECUs is collectively distributed as a single distribution package103. The one or more target ECUs may function as the OTA master492. The distribution package103further includes three types of data: OTA data104, wired update data105, and common data106. 
The OTA data104is data that is used only for the wireless update. The wired update data105is data that is used only for the wired update. For example, the OTA data104is predetermined update data processed into a format and content suitable for the wireless update. The wired update data105is predetermined update data processed into a format and content suitable for the wired update. The common data106is data that is used for both the wireless update and the wired update. Therefore, the content of the distribution package103that is sent to the vehicle3may vary depending on whether the vehicle3is in the first state or the second state. In other words, when the vehicle3is in the second state, the wired update data105and the common data106are sent to the vehicle3, and a process for the wired update is performed using the wired update data105and the common data106. When the vehicle3is in the first state, the OTA data104and the common data106are sent to the vehicle3, and a process for the wireless update is performed using the OTA data104and the common data106. Although only one distribution package103is shown inFIG.6for convenience of description, the storage unit16of the center1may store a plurality of the distribution packages103. Data for Use in In-Vehicle Control Device31 Next, data to be used in the in-vehicle control device31will be described.FIG.8is a memory map showing an example of data stored in the storage unit47of the in-vehicle control device31. The storage unit47of the in-vehicle control device31stores at least an update control program121, a wired update program122, an OTA program123, an update environment flag124, an update in-progress flag125, progress data126, and update work data127. The update control program121is a program to control the entire update control process according to the present embodiment in the in-vehicle control device31. Specifically, the update control program121is a program to perform control to switch between the wired update program122and the OTA program123that will be described below, etc. The wired update program122is a program to perform an update process for the wired update. The OTA program123is a program to perform an update process for the wireless update. The update environment flag124is a flag to determine whether the vehicle3is in the first state or the second state. The update in-progress flag125is a flag indicating whether an update process is currently in progress. In the present embodiment, the update in-progress flag125is set on when an update process of applying a predetermined distribution package103to a predetermined vehicle3is started, and is on until the update process is completed. The update in-progress flag125is set off (cleared) when the update process is completed. For example, the update in-progress flag125is set on when the distribution package103is being downloaded. Assuming that there are three target ECUs for the predetermined distribution package103, the update in-progress flag125is set on when an update on two of the three target ECUs has been completed but an update on the remaining one has not been completed yet. The update in-progress flag125is set off when the update on all of the three target ECUs is completed. The progress data126more specifically shows how much of an update process has been completed when the update process is in progress (the update in-progress flag125is on). 
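The data items held on the center side (FIG. 6 and FIG. 7) and on the in-vehicle side (FIG. 8) can be sketched as simple records; the field names and types below are illustrative assumptions:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DistributionPackage:          # distribution package 103
    ota_data: bytes                 # OTA data 104: used only for the wireless update
    wired_update_data: bytes        # wired update data 105: used only for the wired update
    common_data: bytes              # common data 106: used for both update types

@dataclass
class VehicleRecord:                # one entry of the vehicle database 102 (FIG. 7)
    vin: str                        # vehicle identification number 111
    update_history: list = field(default_factory=list)    # update history 112
    update_progress: Optional[dict] = None                 # update progress 113

@dataclass
class InVehicleUpdateData:          # data held in the storage unit 47 (FIG. 8)
    update_environment_flag: bool = False    # update environment flag 124 (assumed: True = second state)
    update_in_progress_flag: bool = False    # update in-progress flag 125
    progress_data: Optional[dict] = None      # progress data 126
    update_work_data: Optional[bytes] = None  # update work data 127 (downloaded package)

def data_to_send(package: DistributionPackage, second_state: bool) -> tuple:
    """The content sent to the vehicle depends on its state: wired update data plus
    common data in the second state, OTA data plus common data in the first state."""
    if second_state:
        return package.wired_update_data, package.common_data
    return package.ota_data, package.common_data

The selection in data_to_send mirrors the description above of how the content of the distribution package 103 varies with the vehicle state.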
For example, the progress data 126 includes data indicating which sequence of the update process is currently being performed, how much of the distribution package 103 has been downloaded when the download of the distribution package 103 is not yet complete, and up to which memory block the update data has been written. The update work data 127 is data that is temporarily stored for use in the update process. Specifically, the update work data 127 is data including the distribution package 103 downloaded from the center 1. Details of Process that is Performed by Control Unit 49 of Vehicle 3 Next, a process that is performed by the control unit 49 of the vehicle 3 will be described. FIG. 9 is a flowchart showing details of a control process that is performed by the control unit 49 of the vehicle 3. In FIG. 9, in step S1, the control unit 49 refers to the update in-progress flag 125 and determines whether an update process is currently in progress. When the control unit 49 determines that the update process is not in progress (NO in step S1), the routine proceeds to step S2. In step S2, the control unit 49 determines whether the vehicle 3 is in the second state, that is, whether the vehicle 3 is in the state where the vehicle 3 can perform a wired update. For example, the control unit 49 makes this determination by detecting whether a predetermined cable is connected to the diagnostic connector 36. Alternatively, the control unit 49 may make this determination based on whether a diagnostic communication command is received (from the maintenance work terminal). When the control unit 49 determines in step S2 that the vehicle 3 is not in the second state (NO in step S2), the routine proceeds to step S3. In step S3, the control unit 49 performs an OTA inquiry process. On the other hand, when the control unit 49 determines that the vehicle 3 is in the second state (YES in step S2), the routine proceeds to step S4. In step S4, the control unit 49 performs a wired inquiry process. Steps S3 and S4 will now be described in detail. FIG. 10 is a flowchart showing details of the OTA inquiry process according to step S3. In this process, the control unit 49 performs a process of making an update inquiry to the center 1 (the sequence of "inquiring whether there is an update") by using wireless communication. The control unit 49 executes the OTA program 123 for this process. The control unit 49 thus functions as the OTA master 492 in this process. In FIG. 10, the OTA master 492 first determines in step S11 whether the timing has come to make an update inquiry. This timing may be any timing. In the present embodiment, an update inquiry is made every 10 days. Therefore, the OTA master 492 determines whether the timing has come to make an update inquiry by determining whether 10 or more days have passed since the previous update inquiry. When the OTA master 492 determines that the timing has not come to make an update inquiry (NO in step S11), the OTA master 492 ends the OTA inquiry process. On the other hand, when the OTA master 492 determines that the timing has come to make an update inquiry (YES in step S11), the routine proceeds to step S12. In step S12, the OTA master 492 sends an update inquiry to the center 1 by using wireless communication. Next, in step S13, the OTA master 492 determines whether a response to the update inquiry received from the center 1 indicates that there is an update. When the OTA master 492 determines that there is no update (NO in step S13), the OTA master 492 ends the OTA inquiry process. 
On the other hand, when the OTA master492determines that there is an update (YES in step S13), the routine proceeds to step S14. In step S14, the OTA master492sets the update in-progress flag125on. Thereafter, in step S15, the OTA master492starts a process for the wireless update. Specifically, the OTA master492starts the sequence of “downloading a distribution package.” The OTA inquiry process is performed in the manner described above. FIG.11is a flowchart showing details of the wired inquiry process according to step S4. In this process, the control unit49performs a process of making an update inquiry to the center1by using wired communication. The control unit49executes the wired update program122for this process. The control unit49thus functions as the wired update control unit491in this process. InFIG.11, the wired update control unit491first determines in step S21whether it has received an instruction to make an update inquiry from the maintenance work terminal. When the wired update control unit491determines that it has not received an instruction to make an update inquiry (NO in step S21), the routine proceeds to step S22. Next, the wired update control unit491determines in step S22whether it has received an instruction to end the work by a service technician from the maintenance work terminal. When the wired update control unit491determines that it has received an instruction to end the work (YES in step S22), the wired update control unit491ends the wired inquiry process. On the other hand, when the wired update control unit491determines that it has not received an instruction to end the work (NO in step S22), the routine returns to step S21and the wired update control unit491repeats the process. On the other hand, when the wired update control unit491determines in step S21that it has received an instruction to make an update inquiry from the maintenance work terminal (YES in step S21), the routine proceeds to step S23. In step S23, the wired update control unit491sends an update inquiry to the center1by using the wired communication. Thereafter, the wired update control unit491determines in step S24whether a response to the update inquiry that is received from the center1indicates that there is an update. When the wired update control unit491determines that there is no update (NO in step S24), the wired update control unit491ends the wired inquiry process. On the other hand, when the wired update control unit491determines that there is an update (YES in step S24), the routine proceeds to step S25. In step S25, the wired update control unit491sets the update in-progress flag125on. Thereafter, in step S26, the wired update control unit491starts a process for the wired update. Specifically, the wired update control unit491starts the sequence of “downloading a distribution package.” The wired inquiry process is performed in the manner described above. Referring back toFIG.9, a process that is performed by the control unit49when the control unit49determines that the update process is in progress in step S1(YES in step S1) will be described. In this case, the control unit49performs an update control process in step S5. In this process, the control unit49performs a process of continuing either the process for the wired update or the process for the wireless update according to the state of the vehicle3. FIG.12is a flowchart showing details of the update control process in step S5. 
InFIG.12, in step S31, the control unit49refers to the update environment flag124and determines whether the vehicle3is currently in the first state. When the control unit49determines that the vehicle3is currently in the first state (YES in step S31), it can be said that the wireless update is in progress. In this case, the control unit49determines in step S32whether the state of the vehicle3has been switched from the first state to the second state. For example, the control unit49determines that the state of the vehicle3has been switched from the first state to the second state when the control unit49detects connection of the predetermined cable to the diagnostic connector36or when the control unit49detects a reception of a diagnostic communication command sent from the maintenance work terminal. When the control unit49determines in step S32that the state of the vehicle3has been switched from the first state to the second state (YES in step S32), the routine proceeds to step S34. In step S34, the control unit49performs a process of switching the control for the update process from the control for the wireless update to the control for the wired update. For example, the following process is performed as this switching process. First, the control unit49generates data indicating the progress of the update process at that time, and stores the data in the storage unit47as the progress data126. For example, this data is data indicating which sequence of the update process is currently being performed, or how much data of the distribution package103has been downloaded when download of the distribution package103is in progress. Next, the control unit49sends a switch notification indicating that the state of the vehicle3has been switched from the first state to the second state to the center1(in response to this notification, the center1performs control to change the content of the distribution package103to be sent from the center1, etc.). The control unit49then switches the control program to be executed. Specifically, the control unit49ends the process based on the OTA program123currently being executed. Thereafter, the control unit49sets the update environment flag124to a value indicating that the vehicle3is in the second state. The control unit49also starts the wired update program122. The control unit49then carries over the progress of the process for the wireless update and performs a process for the wired update, based on the progress data126. That is, the control unit49resumes the update process in the wired update environment, based on the progress data126. After then, the control unit49then ends the update control process. This switching control need not necessarily be performed by the above method, and may be performed by any method as long as the control for the update process can be switched from the control for the wireless update to the control for the wired update so as to carry over the progress of the process for the wireless update. When the control unit49determines that the state of the vehicle3has not been switched from the first state to the second state (NO in step S32), the control unit49continues the update process in the current environment in step S33. In this case, since the vehicle3is still in the first state, the control unit49continues the process for the wireless update. For example, when the control unit49has not finished the sequence of “downloading a distribution package,” the control unit49continues the download process. 
When the control unit 49 finishes the sequence of "downloading a distribution package," the control unit 49 successively performs the sequence of "writing update data." In the process for the wireless update, the control unit 49 functions as the OTA master 492. Next, a process performed when the control unit 49 determines in step S31 that the vehicle 3 is currently not in the first state (NO in step S31) will be described. In this case, it can be said that the process for the wired update is being performed. The control unit 49 determines in step S35 whether the state of the vehicle 3 has been switched from the second state to the first state. For example, the control unit 49 determines that the state of the vehicle 3 has been switched from the second state to the first state when the control unit 49 detects disconnection of the cable from the diagnostic connector 36 or when the control unit 49 detects a work end command sent from the maintenance work terminal. When the control unit 49 determines in step S35 that the state of the vehicle 3 has been switched from the second state to the first state (YES in step S35), the routine proceeds to step S36. In step S36, the control unit 49 performs a process of switching the control for the update process from the control for the wired update to the control for the wireless update. For example, the following process is performed as this switching process. First, the control unit 49 generates data indicating the progress of the update process at that time, and stores the data in the storage unit 47 as the progress data 126. Next, the control unit 49 sends a switch notification indicating that the state of the vehicle 3 has been switched from the second state to the first state to the center 1. The control unit 49 then ends the process based on the wired update program 122 currently being executed. Thereafter, the control unit 49 sets the update environment flag 124 to a value indicating that the vehicle 3 is in the first state. The control unit 49 also starts the OTA program 123. The control unit 49 then carries over the progress of the process for the wired update and performs a process for the wireless update, based on the progress data 126. That is, the control unit 49 resumes the update process in the wireless update environment, based on the progress data 126. This switching control need not necessarily be performed by the above method, and may be performed by any method as long as the control for the update process can be switched from the control for the wired update to the control for the wireless update so as to carry over the progress of the process for the wired update. On the other hand, when the control unit 49 determines that the state of the vehicle 3 has not been switched from the second state to the first state (NO in step S35), the control unit 49 continues the update process in the current environment in step S33. In this case, since the vehicle 3 is still in the second state, the control unit 49 continues the process for the wired update. In the process for the wired update, the control unit 49 functions as the wired update control unit 491. The update control process is performed in the manner described above. Referring back to FIG. 9, the control unit 49 then determines in step S6 whether the update process for the wired update or the wireless update has been completed. When the update process has not been completed (NO in step S6), the routine returns to step S1 and the control unit 49 repeats the process. 
On the other hand, when the update process has been completed (YES in step S6), the control unit49sets the update in-progress flag125off in step S7. In the subsequent step S8, the control unit49sends an update complete notification indicating that the update process has been completed to the center1. The routine then returns to step S1, and the control unit49repeats the process. Process in Center1 Next, a control process that is performed by the center1will be described.FIG.13is a flowchart showing details of a center-side control process that is performed by the center1. InFIG.13, the control unit18of the center1determines in step S51whether the control unit18has received such an update inquiry as described above from a predetermined in-vehicle control device31. When the control unit18determines that it has received an update inquiry (YES in step S51), the routine proceeds to step S52. In step S52, the control unit18determines whether there is an update to be applied (to be more exact, a distribution package103to be applied) to a vehicle3that sent the update inquiry. For example, the control unit18refers to the update history112of the vehicle database102and determines whether there is an update to be applied to this vehicle3. When the control unit18determines that there is no update to be applied to this vehicle3(NO in step S52), the control unit18sends a signal indicating that there is no update to this vehicle3, and then the routine then proceeds to step S54that will be described later. On the other hand, when the control unit18determines that there is an update to be applied to this vehicle3(YES in step S52), the routine proceeds to step S53. In step S53, the control unit18sends a signal indicating that there is an update to this vehicle3that sent the update inquiry, and starts an update process according to the state of the vehicle3(first state or second state). The state of the vehicle3can be determined by the following method. For example, when the control unit49of the in-vehicle control device31makes an update inquiry, the control unit49may send data including information indicating the state of the vehicle3(own state) to the center1. In this case, the control unit18of the center1determines the state of the vehicle3that sent the update inquiry, based on this information. When the control unit18determines that the vehicle3is in the first state, the control unit18determines that the wireless update is to be performed, and starts sending the OTA data104and the common data106. On the other hand, when the control unit18determines that the vehicle3is in the second state, the control unit18determines that a wired update is to be performed, and starts sending the wired update data105and the common data106. Thereafter, the control unit18determines in step S54whether it has received such a switch notification as described above from the predetermined in-vehicle control device31. When the control unit18determines that it has received a switch notification (YES in step S54), the routine proceeds to step S55. In step S55, the control unit18performs a process of switching the update control for the vehicle3that sent the switch notification between the control for the wired update and the control for the wireless update. Specifically, the control unit18first stores in the update progress113the progress of the update process for the vehicle3that sent the switch notification. 
Next, the control unit18changes the content of the distribution package103to be sent according to the content of the switch notification. That is, when the switch notification indicates that the state of the vehicle3has been switched from the first state to the second state, the control unit18sends the wired update data105and the common data106, or switches control so that a process for the wired update is performed. On the other hand, when the switch notification indicates that the state of the vehicle3has been switched from the second state to the first state, the control unit18sends the OTA data104and the common data106, or switches control so that a process for the wireless update is performed. In addition, the control unit18performs a process of switching the control for the update process between the control for the wired update and the control for the wireless update according to the content of the switch notification. The control unit18then performs a process of checking the progress before the switching based on the update progress113, carrying over this progress, and resuming the update process. The update process for the vehicle3that sent the switch notification is thus continued in the switched state. On the other hand, when the control unit18determines in step S54that it has not received a switch notification (NO in step S54), the routine proceeds to step S56. In step S56, the control unit18determines whether it has received an update complete notification from the predetermined vehicle3. When the control unit18determines that it has received an update complete notification from the predetermined vehicle3(YES in step S56), the routine proceeds to step S57. In step S57, the control unit18performs setting indicating completion of the update process on the vehicle3that sent the update complete notification. Specifically, the control unit18sets information indicating “update complete” in the update progress113. The control unit18also sets information on the update process completed this time in the update history112. In addition, the control unit18performs a process of ending the update control for the vehicle3as appropriate. The routine then returns to step S51, and the control unit18repeats the process. On the other hand, when the control unit18determines in step S56that it has not received an update complete notification from the predetermined vehicle3(NO in step S56), step S57is skipped. The routine then returns to step S51, and the control unit18repeats the process. The center-side control process is performed in the manner described above. As described above, in the present embodiment, even when the state of the vehicle3is switched between the first state in which the wireless update is possible and the second state in which the wired update is possible after the start and before completion of an update process for a predetermined distribution package103, the progress of the update process before the switching can be carried over and the update process for the switched state can be continued. As long as the wireless update is possible, the distribution package103etc. can be downloaded at any place and while the vehicle3is being used. In the case of a wired update, the distribution package103can be downloaded etc. in a stable communication environment. Therefore, the time required to complete an update process can be reduced while ensuring user convenience. 
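The core behavior of the embodiment, carrying over the progress of an update when the state of the vehicle 3 switches mid-update, can be sketched as follows on the vehicle side (cf. steps S32/S34 and S35/S36); the object interface used here is an assumption made only for illustration:

def handle_state_switch(control_unit, new_second_state: bool, center):
    """Illustrative vehicle-side switch-over: save the progress, notify the center,
    swap control programs, and resume from the saved progress data."""
    # Save the current progress as progress data 126.
    control_unit.progress_data = control_unit.snapshot_progress()
    # Notify the center so it can change the content of the distribution package it sends.
    center.receive_switch_notification(vehicle_id=control_unit.vehicle_id,
                                       now_second_state=new_second_state)
    # Stop the currently running update program and record the new environment.
    control_unit.stop_current_update_program()
    control_unit.update_environment_flag = new_second_state
    # Start the other update program and resume from the saved progress.
    if new_second_state:
        control_unit.start_wired_update(resume_from=control_unit.progress_data)
    else:
        control_unit.start_ota_update(resume_from=control_unit.progress_data)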
Modifications The above disclosure illustrates an example in which both the sequence of “downloading a distribution package” and the sequence of “writing update data” are performed in the second state in which a wired update is possible. In another embodiment, only the sequence of “downloading a distribution package” may be performed when the vehicle3is in the second state. That is, when the vehicle3is in the second state, only a process of downloading the distribution package103from the center1and storing the downloaded data in the storage unit47of the in-vehicle control device31may be performed at the maintenance shop2. The sequence of “writing update data” may be performed when the vehicle3is in the first state after the vehicle3leaves the maintenance shop2. Regarding the sequence of “downloading a distribution package,” for example, the distribution package103may be downloaded in advance to the maintenance work terminal, and the distribution package103thus downloaded in advance may be transferred from the maintenance work terminal to the storage unit47as background processing while a process for the maintenance work other than the update process is being performed. For the electronic control units that are target ECUs in the predetermined distribution package103, the user may be able to designate for each electronic control unit whether to update the electronic control unit by the wireless update or the wired update. For example, it is assumed that target ECUs are three electronic control units A, B, and C. In such a case, when the user desires to complete an update process more reliably for the electronic control unit B, the user may be able to designate a wired update for the electronic control unit B and a wireless update for the other electronic control units by user setting. This configuration makes it possible for the user to have a service technician perform an update for the electronic control unit B at the maintenance shop2and to perform a wireless update on the other electronic control units, and thus can more flexibly meet user's needs. Although one embodiment of the technique of the present disclosure is described above, the present disclosure can be interpreted not only as an update control system but also as an update control method that is performed by a computer of the update control system, a control program for the update control method, a computer-readable non-transitory storage medium storing the control program, an in-vehicle control device, etc. The update control system may include one or more computers. The in-vehicle control device may include one or more computers. The center may include one or more computers. The in-vehicle control device may include one or more processors. The technique of the present disclosure can be used for a vehicle including an in-vehicle control device, an information processing terminal capable of communicating by wire with the vehicle, and an update control system including a center capable of communicating with the in-vehicle control device.
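Returning to the per-ECU designation described in the modifications above, the idea that a user setting can select a wired update for one target ECU and a wireless update for the others can be illustrated with a small, hypothetical sketch; the setting format and ECU names below are assumptions.

```python
# A hypothetical user setting that designates the update path per target ECU,
# as in the example above where ECU B is updated by wire at the maintenance
# shop and ECUs A and C are updated wirelessly.
user_setting = {"ecu_a": "wireless", "ecu_b": "wired", "ecu_c": "wireless"}

def split_targets(setting):
    """Partition target ECUs into wireless-update and wired-update groups."""
    wireless = sorted(e for e, mode in setting.items() if mode == "wireless")
    wired = sorted(e for e, mode in setting.items() if mode == "wired")
    return wireless, wired

print(split_targets(user_setting))   # (['ecu_a', 'ecu_c'], ['ecu_b'])
```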
11861356
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood, that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. DETAILED DESCRIPTION The systems and methods described herein may be employed in various combinations and in various embodiments for deploying feature processing units (FPUs) to implement consistent data processing features at a provider network and edge devices (e.g., continuous deployment and orchestration of FPUs in a managed cloud and a provider network for consistent data processing), according to some embodiments. A feature processing unit may include a model and/or compute logic to implement a data processing feature. In embodiments, a feature deployment service may deploy one or more feature processing units to an FPU engine at the provider network and to FPU engines at any number of remote IoT devices of a client (also referred to as edge devices). In embodiments, an FPU engine may execute a feature-independent portion of the model and/or compute logic using a data processing abstraction application programming interface (API) (also referred to herein as an “abstraction API”) of the FPU engine and may also execute a feature-specific portion of the model and/or compute logic using the abstraction API. In some embodiments, the feature-specific portion uses compute logic that is bundled along with the FPU as well as other common logic (e.g., data points extraction from a compressed data set and feeding the data points into feature-dependent compute logic.). The bundled compute logic may use the abstraction API to perform computations (data stream joins, etc.). In an embodiment, the abstraction API of FPU engines of different edge devices and the abstraction API of the FPU engine of the provider network (e.g., at the feature deployment service) conform to a common API specification. In embodiments, the data processing abstraction API may include different API components (model schema API, topology schema API, and/or data processing API). In some embodiments, an orchestration component(s) may coordinate various aspects of deployment and execution of FPUs at the cloud and the edge devices (e.g., deployment of FPUs and FPU updates, coordinating data flow between FPUs, etc.). In embodiments, a client may implement various data processing features on an edge device in order to obtain data insights for a system or process (e.g., overall equipment effectiveness (OEE)) of a production process or anomaly detection in the operation of equipment). 
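By way of a non-limiting illustration of the FPU concept introduced above, the following sketch models an FPU as a bundle of a model, compute logic, and metadata, and an FPU engine that can run it. The class names are assumptions for illustration only, not the service's actual interfaces.

```python
# A minimal sketch, under assumed names, of an FPU bundle (model, compute logic,
# metadata) and an FPU engine that executes it. The same engine abstraction is
# assumed to exist at the provider network and on each edge device.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class FeatureProcessingUnit:
    name: str
    model: Optional[Callable] = None           # model implementing the data processing feature
    compute_logic: Optional[Callable] = None   # compute logic (one or more functions)
    metadata: dict = field(default_factory=dict)

class FpuEngine:
    def __init__(self):
        self.fpus = {}

    def deploy(self, fpu: FeatureProcessingUnit):
        self.fpus[fpu.name] = fpu              # deploying again with the same name updates the FPU

    def run(self, name, data):
        fpu = self.fpus[name]
        prepared = fpu.compute_logic(data) if fpu.compute_logic else data
        return fpu.model(prepared) if fpu.model else prepared

# Example: a simple "average temperature" feature deployed to an engine.
engine = FpuEngine()
engine.deploy(FeatureProcessingUnit(
    name="avg_temp",
    compute_logic=lambda readings: [r for r in readings if r is not None],
    model=lambda readings: sum(readings) / len(readings),
    metadata={"inputs": ["temperature_sensor"], "outputs": ["cloud_engine"]},
))
print(engine.run("avg_temp", [20.5, None, 21.5]))   # 21.0
```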
In various embodiments, the deployment of individual FPUs or groups of FPUs may allow a user/client to add new data processing features or update individual data processing features at edge devices in a more rapid and flexible manner and with fewer errors/inconsistencies, compared to traditional techniques for upgrading edge software. Using traditional techniques for upgrading edge software, a new data processing feature released by a service provider (e.g., new computational logic/data insight features) may be available for use on the service provider's equipment/servers, but use of that new data processing feature at a client's edge devices may require a major software upgrade at the edge devices. In many cases, the edge devices may require a long qualification cycle for the software upgrade, the software upgrade must meet local regulations, compliances, and/or certifications that may take several months to complete, and the client may lack maintenance windows for the software upgrade. As a result, the data processing features at the edge devices may lag behind the updated data processing features at the service provider. This may lead to inconsistent results between the edge devices and the service provider. Using traditional techniques, a client may need to upgrade all of their edge software (e.g., execution engines, data processing/streaming framework, etc.), even though the client may not need or desire to use many of the new/upgraded data processing features of the upgraded version of the software. In various embodiments, the components illustrated in the figures may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components of the figures may be implemented by a system that includes one or more computing nodes, in one embodiment, each of which may be similar to the computer system embodiment illustrated inFIG.8and described below. This specification begins with a description of a system for deploying feature processing units to implement data processing features at a provider network and edge devices. A number of different methods and techniques for deploying feature processing units to implement data processing features at a provider network and edge devices are discussed, some of which are illustrated in accompanying flowcharts. Finally, a description of an example computing system upon which the various components, modules, systems, and/or techniques described herein may be implemented is provided. Various examples are provided throughout the specification. FIG.1is a logical block diagram illustrating a system for continuous deployment and orchestration of FPUs in a managed cloud and a provider network for consistent data processing, according to some embodiments. As shown, a provider network102provides a data processing service103that includes a feature deployment service104that implements deployment of feature processing units to implement data processing features at a provider network and edge devices. In the example embodiment, the provider network102also includes any number of provider resources106, such as compute services and/or storage services. In embodiments, any of the compute and/or storage functionality described for the feature deployment service104may be provided, at least in part, by one or more of the other services106. 
For example, another IoT service may maintain a database of all the registered edge devices for each client. In the depicted embodiment, the feature deployment service104may store any number of FPUs108. Each FPU may include a model110to implement a data processing feature and/or compute logic112(e.g., one or more functions/code) to implement the data processing feature. Any number of the FPUs may be provided/uploaded by the client and/or provided by the provider network or third-party. As described below, in various embodiments an FPU may also include metadata that describes various aspects of the FPU (e.g., configuration of the FPU, the name/identifier of model(s) and/or function(s) that implement compute logic, security credentials and roles with associated policies, types of input data sources for the model and/or compute logic to process/generate result(s), and/or types of targets to receive/process output data/result(s) generated by the model and/or compute logic). For example, a type of input data source or type of target for output data may be a type of FPU (e.g., an FPU that uses a particular type of model and/or compute logic) or a type of physical asset (e.g., a type of sensor or other physical asset). In embodiments, the metadata may indicate a type of input data to be processed (e.g., data of a particular format, data that describes an environmental condition such as temperature, etc.). In embodiments, the metadata of any given deployed FPUs may indicate the flow of data between the model and/or compute logic of the FPU and one or more other models and/or or compute logic of other FPUs (e.g., any number of sources of input data for the FPU and/or any number of targets of output data from the FPU). The metadata for a particular deployment of an FPU may indicate an identifier of a specific instance of another FPU or physical asset as an input data source or as a target for output data. In such cases, the orchestrator may perform runtime binding in order to enable data flow based on the instance-specific metadata. A source of input data may be from a sensor embodied in a physical asset at the client site/network or output data of another FPU implemented by the FPU engine or edge FPU engine. A target of output data may be a controller of a physical asset at the client site/network or another FPU implemented by the FPU engine or edge FPU engine. In embodiments, the feature deployment service104may store any number of different FPUs for any number of different clients of the service. As shown, a deployer114may deploy any number of FPUs (e.g., a particular FPU or a group of FPUs) to an engine at the provider network (e.g., FPU engine116of the data processing service103) as well as to edge FPU engines118of any number of edge devices120of a client network122. In some embodiments, the data processing service103and the feature deployment service104may be considered part of the same service or provided as part of the same service. In various embodiments, any components of the data processing service may instead be considered part of the feature deployment service or any components of the feature deployment service may instead be considered part of the data processing service. Therefore, in embodiments, reference to the feature deployment service may be replaced by the data processing service, and vice versa. In various embodiments, any of the components discussed herein may be implemented across one or more services/components of the provider network and/or edge devices. 
For example, an orchestrator component138,142may be implemented across any number of services and/or computing devices in order to coordinate deployment and execution of FPUs at the provider network and the edge devices, described in more detail below. In the depicted example, the FPUs124a-124nof the FPU engine116aare also deployed as FPUs126a-126nof the edge FPU engine118a. For example, the deployer114may deploy the same FPU to the FPU engine116a(shown as FPU124a) and to the edge FPU engine118a(shown as FPU126a). Therefore, the model110amay be the same as the model128aand compute logic112amay be the same as compute logic130a. In the depicted embodiment, a given client network (e.g., client network122a) may include any number of edge devices120, any number of physical assets132, and/or any number of management devices134. As shown, a given edge device may run/execute an edge FPU engine, which may implement FPUs126, an abstraction API136, and an orchestrator138. The abstraction API140and the orchestrator142of the FPU engine116at the provider network102may perform the same (or similar) functionality as the abstraction API136and the orchestrator138. In embodiments, any number of clients of the feature deployment service104may use the feature deployment service104by communicating with the provider network102from a remote network122of the corresponding client (e.g., via a wide area network144, such as the internet). As shown, the orchestrator138,142may receive FPUs/FPU updates that are deployed to an FPU engine116or edge FPU engine118, deploy the FPUs/FPU updates into the FPU engine116,118, and execute/enable execution of the FPUs/updated FPUs. The orchestrator138,142may identify any number of data flows to be coordinated between a newly deployed/updated FPU and any number of other FPUs as well as data flows from other data sources to the newly deployed FPU/updated FPU (e.g., from sensors or other physical assets) and from the newly deployed FPU/updated FPU to other data targets (e.g., to a remote FPU engine at the provider network or an edge device or to a physical asset as a control signal). During execution of the FPUs, the orchestrator138,142may coordinate the identified data flows between all of the FPUs, data sources, and data targets. The orchestrator138,142may coordinate the new data flows for the newly deployed FPU/updated FPU without interrupting the execution of the other FPUs (e.g., by implementing runtime binding, described below). In some embodiments, the orchestrator may receive an updated FPU (e.g., including an updated version of a model to implement a data processing feature and/or an updated version of compute logic to implement the data processing feature). The orchestrator may replace the previous version of the FPU with the updated FPU at the FPU engine and execute the updated FPU at the FPU engine without interrupting execution of any number of other FPUs at the FPU engine (e.g., by implementing runtime binding). The Data processing service103includes a management interface146(e.g., a management API) that may receive user input from a management device134of a remote client network (e.g., via a user interface of the management device). For example, a user may provide input via a graphical user interface or command line interface of a display device. In the example embodiment, the management interface146receives user input from the management device134of the client network122aof a client (e.g., a request to deploy the FPU108). 
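As a non-limiting sketch of the orchestrator role described above, the following shows a newly deployed FPU being wired into data flows derived from its metadata, and an updated FPU being swapped in while other FPUs keep executing. The routing mechanism and names are assumptions; the disclosure does not prescribe this design.

```python
# A minimal sketch, under assumed names, of the orchestrator behavior described
# above: a newly deployed FPU is wired into data flows derived from its metadata,
# and an updated FPU can be swapped in while other FPUs keep executing.
from collections import defaultdict

class Orchestrator:
    def __init__(self):
        self.fpus = {}                    # FPU name -> current callable (model/compute logic)
        self.routes = defaultdict(list)   # data source name -> FPU names consuming it
        self.targets = {}                 # FPU name -> named output targets

    def deploy_fpu(self, name, fn, input_sources, output_targets):
        """Wire a newly deployed FPU into the flows named by its metadata."""
        self.fpus[name] = fn
        self.targets[name] = output_targets
        for source in input_sources:
            self.routes[source].append(name)

    def update_fpu(self, name, fn):
        """Swap in an updated FPU; wiring and other FPUs are left untouched."""
        self.fpus[name] = fn

    def emit(self, source, data):
        """Push data from a sensor or an FPU output to every consumer wired to it."""
        for fpu_name in self.routes.get(source, []):
            result = self.fpus[fpu_name](data)
            for target in self.targets[fpu_name]:
                print(f"{fpu_name} -> {target}: {result}")
            self.emit(fpu_name, result)      # an FPU's output may feed further FPUs

orch = Orchestrator()
orch.deploy_fpu("smooth", lambda x: round(x, 1), ["temp_sensor_1"], ["cloud_engine"])
orch.emit("temp_sensor_1", 21.07)                  # smooth -> cloud_engine: 21.1
orch.update_fpu("smooth", lambda x: round(x, 2))   # updated FPU, no interruption
orch.emit("temp_sensor_1", 21.074)                 # smooth -> cloud_engine: 21.07
```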
In some embodiments, the management interface146may receive commands from an application running at the provider network, initiated by an operator of the remote client. In embodiments, a client may configure the feature deployment service to automatically deploy new FPUs and/or updates to FPUs (deployed to the cloud FPU engine and/or the edge FPU engines) in response to their release at the cloud (e.g., when they become available for deployment). For example, a client may send, via a management interface to the feature deployment service, a request for automatic deployment of new FPUs or updates to FPUs (e.g., client consent for automatic deployment). The request configures the feature deployment service to automatically deploy new FPUs and/or updates to FPUs (to the cloud FPU engine and/or the edge FPU engines) in response to being released/becoming available for deployment at the feature deployment service. Based on the request, one or more edge devices may receive, from the feature deployment service, a new FPU or an update to the FPU that has become available at the feature deployment service and deploy the new FPU or the update to the FPU to the FPU engine of the edge device. The orchestrator may then configure/execute the new FPU or updated FPU (e.g., identify/coordinate data flow between the new/updated FPU and one or more other FPUs executing at the FPU engine). A given physical asset132may be any type of equipment (machine, sensor, etc.) that may provide input data to the edge FPU engine118and/or FPU engine116(e.g., input to an FPU/model/compute logic) and/or receive output control signals from the edge FPU engine118and/or FPU engine116(e.g., output from an FPU/model/compute logic). The feature deployment service104may also store a record/data for any number of edge devices148as registered devices for use with the service (e.g., as edge devices120to deploy FPUs to). In embodiments, the feature deployment service104may deploy one or more FPUs to the FPU engine and/or to the edge FPU engines of any number of target edge devices in order to implement one or more new data processing features or to replace one or more FPUs with an updated FPU (to provide an updated data processing feature). Clients may use FPUs at an edge FPU for edge processing for latency sensitive applications that cannot tolerate the longer round trip time to cloud and/or for reducing network bandwidth to the cloud/provider network for cost savings (as well as for protection of raw data at the edge). In some embodiments, the data processing service (e.g., an FPU or other computation logic) may receive results from any number of different edge devices and generate an output/metric based on an aggregation of the results from different devices. For example, the data processing service may receive, from an edge device of a client, a result generated by implementation of an FPU at the edge device (e.g., processed temperature data and/or pressure data). The data processing service may also receive, from any number of other edge devices of the client, any number of other results generated by implementation of another FPU at the other edge devices. The data processing service may generate an output based at least on aggregation of the result and the other results (e.g., a metric such as an average). In various embodiments, any type/number of metrics may be computed in the same/similar manner by receiving data from edge device(s) and performing operation(s) on the data. 
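The cloud-side aggregation just described can be sketched in a few lines; the reporting format below is hypothetical and the metric is a simple average across devices.

```python
# A small sketch of the cloud-side aggregation described above: per-device
# results reported by edge FPUs are combined into a single fleet-level metric
# (here an average). The reporting format is hypothetical.
edge_results = [
    {"device": "edge-1", "metric": "avg_temp_c", "value": 20.8},
    {"device": "edge-2", "metric": "avg_temp_c", "value": 22.4},
    {"device": "edge-3", "metric": "avg_temp_c", "value": 21.3},
]

def aggregate(results, metric):
    values = [r["value"] for r in results if r["metric"] == metric]
    return sum(values) / len(values) if values else None

print(round(aggregate(edge_results, "avg_temp_c"), 2))   # 21.5
```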
For example, an FPU at the cloud FPU engine may receive data from FPU(s) at edge device(s) and perform operation(s) on the data to compute an overall efficiency metric for one or more physical assets/machines at the client site. In some embodiments, a new FPU may be deployed to implement a data processing feature that obtains temperature input from two different temperature sensors and averages the inputs to obtain an average temperature. At a later time, an updated FPU may be deployed to replace the FPU. For example, the updated FPU may have a model and/or compute logic that provides more accurate results and/or generates results in less time. By using FPUs to implement data processing features, a client has the ability to add (or update) one or more particular data processing features to particular edge devices and/or the provider network, without the need to perform a software update for an entire data processing engine across every edge device of the client (different edge devices may be in different stages of use (production/testing/etc.) that may disallow updates). For example, the deployer114may deploy a new FPU (that implements a new data processing feature) to the FPU engine116a(shown as FPU124a) and to the edge FPU engine118a(shown as FPU126a) as part of the same deployment for a new data processing feature. Furthermore, When the new FPU is deployed to the FPU engine116, a feature-independent portion of the model110aand/or the compute logic112amay be executed using an abstraction API of the FPU engine116and a feature-specific portion of the model and/or the compute logic may be executed using the abstraction API of the local FPU engine. When the new FPU is deployed to the edge FPU engine118aof the edge device120, a feature-independent portion of the model128aand/or the compute logic130amay be executed using an abstraction API of the edge FPU engine118aand a feature-specific portion of the model and/or the compute logic may be executed using the abstraction API of the local FPU engine. In embodiments, feature-specific portions of the model and/or compute logic may include feature compute logic bundled with the FPU and feature-independent portions of the model and/or compute logic may use generic logic (e.g., common functions available to any/all FPUs/engines). In both cases, the abstraction API is used to implement the logic (e.g., through API calls). An example of feature-independent compute logic may be a particular metric computation/function for a given dataset that is used across all FPU engines (e.g., the same function code is used when called by an FPU, regardless of the specific type of FPU). Another example is a particular quality metric computation/function for a given dataset. In embodiments, a change to feature-dependent compute logic of an FPU would only require a restart on FPU engines of the particular FPU(s) that implement the feature-dependent logic. However, a change to feature-independent compute logic may require a restart on FPU engines of all FPUs (although the FPU engine itself may not need to be restarted/updated). In embodiments, the model and/or the compute logic may include feature independent logic/code that is part of the FPU engine (e.g., implemented by making API calls to the abstraction API of the FPU engine) and feature-specific logic that is part of the FPU bundle (e.g., lightweight feature-specific compute logic/code implemented as java functions or other language functions). 
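To make the split above concrete, the sketch below treats a generic quality-metric function and an aggregation function as feature-independent logic exposed through an assumed abstraction API, and a small overheat check as feature-specific logic bundled with one FPU. The class, the method names, and the threshold are illustrative only.

```python
# A minimal sketch of the split described above, under assumed names: the
# AbstractionApi methods stand in for feature-independent functions shared by
# every FPU, while overheat_feature stands in for feature-specific logic bundled
# with one FPU and expressed as calls to that API. The threshold is illustrative.
class AbstractionApi:
    def quality_metric(self, values):
        """Feature-independent: fraction of non-missing readings in a data set."""
        valid = [v for v in values if v is not None]
        return len(valid) / len(values) if values else 0.0

    def aggregate(self, values, fn=sum):
        """Feature-independent: aggregate the non-missing readings."""
        return fn([v for v in values if v is not None])

def overheat_feature(api, temps):
    """Feature-specific compute logic for one FPU, built only from API calls."""
    if api.quality_metric(temps) < 0.5:
        return {"status": "insufficient_data"}
    peak = api.aggregate(temps, fn=max)
    return {"status": "overheating" if peak > 90.0 else "ok", "peak_c": peak}

api = AbstractionApi()
print(overheat_feature(api, [70.0, None, 95.5]))   # {'status': 'overheating', 'peak_c': 95.5}
```

Consistent with the restart behavior noted above, a change to quality_metric would affect every FPU that calls it, while a change to overheat_feature affects only the FPU that bundles it.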
In an embodiment, the abstraction API may provide a set of data processing primitives (e.g., filter, transform, interpolate, group, map, aggregate) and a set of data stream management primitives (e.g., join, branch, align). In embodiments, this architecture may enable a client to develop new data processing features using the abstractions provided by an FPU engine. The client may then immediately deploy these new data processing features to edge devices and/or the provider network, avoiding the issues discussed above that are associated with traditional techniques (e.g., delayed feature release due to software qualification cycles, meeting local regulations, compliance, and/or certifications). FIG.2illustrates the operation of a feature processing unit, according to some embodiments. In the depicted example, the feature processing unit202includes compute logic204(e.g., “feature processing logic”), a model206(e.g., “feature model”), and metadata208. In embodiments, the metadata includes data that describes/indicates various aspects of the FPU (e.g., configuration, inputs/outputs for the model and/or compute logic, model name, security constructs such as encryption/authentication keys, roles, and/or associated authorization policies). In embodiments, the compute logic204and/or the model206receive a data set from one or more sources (e.g., different temperature sensor readings taken over a period of time, or output of one or more FPUs) and/or context (e.g., data describing a state of equipment and/or environment), process the data set and/or context, and output the processed data set and/or insights/events. For example, a model may process different temperature readings of a data set and output a prediction that a machine will fail due to overheating within 10 minutes. The compute logic may perform various operations on data (e.g., formatting, averaging) before sending the data to the model for processing. The compute logic may perform any other operations to implement a data processing feature, such as combining, aggregating, or averaging data input from multiple sources (e.g., different sensors, different FPUs) before sending the results to a model for processing. FIG.3is an illustration of feature processing units that have been deployed to an FPU engine, according to some embodiments. In the depicted embodiment, the edge FPU engine302provides an abstraction API304that may implement a model and/or compute logic of each FPU that has been deployed to the FPU engine302. In the example, 11 FPUs are shown as deployed to the FPU engine302. As shown, the edge FPU engine may be implemented by a data/stream processing framework306(e.g., a stream processing engine/framework such as Apache Flink). In embodiments, the FPU engine302and the data/stream processing framework306that implements the FPU engine302may be considered a “stream processing engine.” The abstraction API (e.g., API calls) may be provided by the stream processing engine and the abstraction API may map to the stream processing engine. Although an edge FPU engine is depicted in the example, any of the discussed functionality/aspects may also apply to an FPU engine running at a provider network (e.g., FPU engine116).
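The data processing and stream management primitives listed at the start of this passage (filter, transform, aggregate, join, and so on) can be sketched over plain Python lists as below. A real FPU engine would map such calls onto the underlying stream processing framework; the helper names here are assumptions, not the engine's API.

```python
# A sketch of a few of the primitives named above (filter, transform, aggregate,
# join), expressed over plain Python lists. The helper names are assumptions.
def p_filter(stream, pred):   return [x for x in stream if pred(x)]
def p_transform(stream, fn):  return [fn(x) for x in stream]
def p_aggregate(stream, fn):  return fn(stream)
def p_join(left, right):      return list(zip(left, right))   # align two streams pairwise

# Example: join two temperature streams, average each pair, drop implausible
# values, and aggregate the remainder.
a = [20.1, 20.4, 21.0]
b = [19.9, 20.6, 35.2]
means = p_transform(p_join(a, b), lambda pair: sum(pair) / 2)
plausible = p_filter(means, lambda t: t < 25.0)
print(p_aggregate(plausible, lambda xs: round(sum(xs) / len(xs), 2)))   # 20.25
```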
As shown, data from source308(e.g., a physical asset, such as a temperature sensor) is processed by two FPUs of an FPU group309and other data from source310(e.g., another temperature sensor) is processed by two other FPUs of the FPU group309before the data and other data is combined at another FPU of the FPU group309(e.g., performing sensor fusion). The combined data is sent to another FPU of the FPU group309for further processing before the data is sent to a target312(e.g., the cloud FPU engine at the feature deployment service or another physical asset as a control signal). In embodiments, each FPU may include metadata that indicates a source(s) of input data for the FPU (e.g., a physical asset or a model/compute logic of another FPU) and a target(s) of output data of the FPU (e.g., a physical asset, a model/compute logic of another FPU, or the feature deployment service). Therefore, the metadata of each FPU may be used to configure and control the flow of data between different FPUs of a group. In embodiments, a common model may be used by some or all of the FPUs of an FPU group in order to processes data. Therefore, the model may only need to be downloaded from the service once, and then shared and/or duplicated for each FPU that needs to use it. In the depicted example, five FPUs of an FPU group314have been deployed to the FPU engine302as a group. As shown, an initial FPU receives data from a source316(e.g., a pressure sensor), processes the data, and sends different portions of data to different FPUs for further processing. Another FPU combines the different portions of processed data and sends the combined data to another FPU, which processes the data and sends it to a target318. In some embodiments, a given FPU (the model and/or compute logic) may perform any number of functions to process data collected and/or generated from one or more data sources (e.g., sensors and/or other data sources of an industrial environment, such as a manufacturing facility). For example, an FPU may perform one or more filtering operations, aggregation operations, and/or other calculations/operations on the data to process the data before it is sent to a target FPU and/or other destination. In embodiments, any number of FPUs may be chained and/or combined together to realize more complex data processing flows. In embodiments, a user may configure the topology of FPUs using the feature deployment service and deploy the FPUs (e.g., via the management interface). For example, a chain of FPUs may be graphically represented (e.g., via the user interface of the management device134) using images that represent different FPUs and arrows that represent the flow of data (similar to the depicted example). In embodiments, any suitable representation may be used to define a workload (e.g., a language or graph). After the user indicates the desired topology of the FPU group (e.g., FPU locations, inputs/outputs for each FPU), then the user may request deployment of the FPU group (e.g., deployment to the FPU engine116and to one or more target edge devices120). FIG.4is an illustration of feature processing units that have been deployed to an FPU engine, according to some embodiments. As inFIG.3, 11 FPUs are shown as deployed to an FPU engine402of an edge device. The edge FPU engine may be implemented by a data processing framework406(e.g., a stream processing framework such as Apache Flink). 
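The group wiring shown in these figures (two sensor streams cleaned by separate FPUs, fused, summarized, and delivered to a target) can be sketched as metadata-driven wiring like the following. The dictionary layout and names are assumptions; the point is only that each FPU's declared inputs and the group's target drive the flow of data.

```python
# A minimal, metadata-driven sketch of an FPU group like the one described above:
# each entry names its input sources and a function standing in for the FPU's
# model/compute logic; the final result is delivered to a named target.
group = {
    "clean_a":   {"inputs": ["sensor_308"], "fn": lambda xs: [x for x in xs[0] if x is not None]},
    "clean_b":   {"inputs": ["sensor_310"], "fn": lambda xs: [x for x in xs[0] if x is not None]},
    "fuse":      {"inputs": ["clean_a", "clean_b"], "fn": lambda xs: xs[0] + xs[1]},   # sensor fusion
    "summarize": {"inputs": ["fuse"], "fn": lambda xs: sum(xs[0]) / len(xs[0])},
}

def run_group(group, sources, target):
    """Evaluate the FPUs in wiring order (dict order here) and deliver the result to the target."""
    values = dict(sources)
    last = None
    for name, spec in group.items():
        values[name] = spec["fn"]([values[i] for i in spec["inputs"]])
        last = name
    print(f"{last} -> {target}: {values[last]}")

run_group(group, {"sensor_308": [20.0, None, 21.0], "sensor_310": [19.0, 20.0]}, "target_312")
# summarize -> target_312: 20.0
```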
As inFIG.3, the FPU engine302and the data/stream processing framework306that implements the FPU engine302may be considered a “stream processing engine.” The abstraction API (e.g., API calls) may be provided by the stream processing engine and the abstraction API may map to the stream processing engine. Although an edge FPU engine is depicted in the example, any of the discussed functionality/aspects may also apply to an FPU engine running at a provider network (e.g., FPU engine116). In the depicted example, the data processing abstraction API includes a model schema API to define data processing models, a topology schema API to define inputs and outputs for different FPUs, and/or a data processing API to perform data processing functions, described in more detail below. The FPUs in the depicted example and the sources and targets are in the same or similar configuration as inFIG.3. Therefore, the sources408and410and the target412may be used with FPUs (e.g., FPU group413) in the same way as described for the sources308and310and the target312ofFIG.3. Similarly, the FPU group414may be used with the source416and target418in the same way as described for the FPU group314used with the source316and target318ofFIG.3. In the example embodiment, the abstraction API is made of three API components: 1) an FPU model schema420abstraction API, 2) an FPU data processing424abstraction API, and 3) an FPU topology schema422abstraction API. Using the FPU model schema420, a client (e.g., customer of the service provider network) can design their own model for data processing (e.g., using complex parent-child relationships that do not exist using traditional techniques and that may be applicable to their use cases). Using the FPU data processing424, a customer can write FPU compute logic that can use the underlying engine402compute functions (e.g., metric computations over a sliding window). The FPU topology schema422may provide a customer the topology API for splitting and joining different inputs/outputs of FPUs to achieve a processing topology that fits the customer's use case. In the illustrated example, a client has specified a graph topology for each FPU group413,414that shows inputs and outputs for each FPU (e.g., each node of the graph). Clients can develop FPUs using any combination of the APIs1,2, and/or3discussed above (e.g., via API calls to the APIs from models and/or compute logic) to achieve a true and/or more accurate representation of their data processing system and associated insights computation not available using traditional techniques. Using traditional techniques, a client would require much more time to develop a complex data processing system (e.g., using many man-years to achieve a specific use case). Using FPUs and the abstraction APIs, customers may achieve their desired data processing with much less effort/time, allowing them to focus on their problem domain rather than on heavy/complex infrastructure. In embodiments, the FPU model schema420defines schema elements through a schema definition language that enables the FPU engine402to perform data processing in a model-independent manner. Therefore, the FPU model schema definition language may provide the abstraction that separates the model definition from the processing engine (e.g., FPU engine). In some embodiments, a user of a client (e.g., client company or other organization) may understand the schema elements (e.g., transforms) but may not be aware of the FPU model schema language.
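By way of a non-limiting illustration, the three API components named above can be sketched together as follows; the schema layouts, field names, and the sliding_mean helper are assumptions used only to show how a model schema, feature compute logic written against the data processing API, and a topology schema might fit together.

```python
# An illustrative sketch of the three API components named above. The layouts
# and names are assumptions, not the service's real schema language.
model_schema = {                       # 1) FPU model schema: asset/measurement structure
    "asset": "press_line",
    "children": [{"asset": "press_3", "measurements": ["temperature_c", "pressure_kpa"]}],
}

def sliding_mean(window):              # 2) FPU data processing API: an engine compute function
    return sum(window) / len(window)

def press_health_fpu(window):          # feature compute logic built on the API above
    mean_t = sliding_mean(window)
    return {"asset": "press_3", "mean_temperature_c": round(mean_t, 2)}

topology_schema = {                    # 3) FPU topology schema: inputs and outputs per FPU
    "press_health_fpu": {
        "inputs": ["press_3.temperature_c"],
        "outputs": ["cloud_engine"],
    },
}

print(press_health_fpu([71.2, 72.8, 74.1]))   # {'asset': 'press_3', 'mean_temperature_c': 72.7}
```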
The introduction of a new model schema element would affect the FPU engine and may occur much less frequently compared to the introduction of new data processing features (via deployment of new FPUs). In such embodiments, an FPU may be a bundle that includes the data processing features expressed using the FPU model schema, the compute logic for the feature, and/or metadata. In various embodiments, the FPU topology schema422may define the data sources for FPU inputs and the data targets for FPU outputs (e.g., based on extracting metadata from the FPUs that describes inputs/outputs). In embodiments, the FPU topology schema422may define the wiring of FPUs with different input data flows within the “source” data streams, the wiring between other FPUs, the tagging of the output data flows, and/or the wiring of the output data flows to the “sink” data streams. In embodiments, the separation of the FPU feature from the engine may require a topology description that enables the engine to load and operate the data processing function in an FPU model instance context. In various embodiments, an FPU model instance may be an instantiation of an FPU model, whereas an FPU model may be defined using a modeling language to describe a system or a process, to describe the streaming data generated by the system or process, and to define the data filters, extraction, transformations, and logic to compute data insights, predictions, or other results. Using traditional techniques, models and/or code/compute logic are implemented using compile-time binding. However, in various embodiments, the abstraction API (including any of the API components) may enable runtime binding of FPUs and/or any changes to FPU models and compute logic, which provides another advantage over traditional data processing techniques. For example, to implement a new model and/or code/compute logic at an edge device using traditional techniques, the entire software stack (e.g., framework406, engine402, etc.) needs to be changed/replaced and compiled. However, using embodiments herein, a new FPU or a change to an existing FPU (e.g., adding new API calls) can be downloaded and implemented at the edge device without changing or interrupting execution of existing FPUs, the engine402, the framework406, etc. In other words, new FPU/features or changes to FPUs/features can be added through runtime binding, avoiding the delays and additional computing resources associated with traditional techniques (e.g., compile-time binding). FIG.5is a high-level flowchart illustrating various methods and techniques to deploy feature processing units to FPU engines at a provider network and at edge devices, according to some embodiments. In various embodiments, any of the functionality described for any portions of the flowcharts 5-7 may be performed by any of the components ofFIGS.1-4and/or8. These techniques, as well as the techniques discussed with regard toFIGS.6and7, may be implemented using components or systems as described above with regard toFIGS.1-4, as well as other types of components or systems, and thus the following discussion is not intended to be limiting as to the other types of systems that may implement the described techniques. For example, any of the techniques may be implemented by a feature deployment service of a provider network and/or by a local service/application of a client network.
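Before turning to the flowchart of FIG. 5, the runtime binding contrasted above with compile-time binding can be sketched in miniature: the engine keeps a registry of FPU callables and re-binds an entry when a new or updated FPU arrives, with no rebuild of the engine or framework. The registry mechanism shown is an assumption.

```python
# A minimal sketch of runtime binding, under an assumed registry mechanism: a
# new or updated FPU is bound at runtime while other FPUs keep running, and no
# rebuild of the engine or framework is needed.
class RuntimeBoundEngine:
    def __init__(self):
        self.registry = {}          # FPU name -> current callable

    def bind(self, name, fn):
        """Bind (or re-bind) an FPU; callers pick up the new version on the next call."""
        self.registry[name] = fn

    def invoke(self, name, data):
        return self.registry[name](data)

engine = RuntimeBoundEngine()
engine.bind("scale", lambda x: x * 2)          # initial version of the feature
print(engine.invoke("scale", 5))               # 10
engine.bind("scale", lambda x: x * 2 + 1)      # updated FPU downloaded later
print(engine.invoke("scale", 5))               # 11 -- no restart, other FPUs untouched
```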
At block502, a feature deployment service receives, from a user (e.g., via the management device134/management interface146), one or more FPUs that are to be available for deployment to edge devices of a client's network and/or the provider network/feature deployment service. For example, the user may send, to the feature deployment service, the model to implement a data processing feature, the compute logic to implement the data processing feature, and/or metadata that describes one or more aspects of the FPU. In embodiments, metadata may include an indication of one or more sources of input data for the FPU/model/compute logic and/or an indication of one or more targets for output data of the FPU/model/compute logic. In embodiments, any number of FPUs may be provided by the feature deployment service as pre-made FPUs (e.g., developed by the provider network or a third-party entity). At block504, the feature deployment service receives, from the user, a request to deploy one or more FPUs to the feature deployment service and/or to one or more target edge devices of the client's network. In some embodiments, the user may select one or more FPUs from among a group of FPUs that are available for deployment (e.g., via user input using a graphical user interface that displays the group of FPUs). In embodiments, the user may also select one or more edge devices of the client's network as target edge devices that the selected FPUs are to be deployed to. In some embodiments, the user may select one or more groups of edge devices, where each group may include any number of target edge devices that the selected FPUs are to be deployed to. In some embodiments, blocks502and504may occur in parallel or concurrently. For example, one user of a client may upload an FPU to a library of FPUs, while another user of the client may request the FPU or a different FPU to be deployed to edge devices. At block506, the feature deployment service deploys the selected FPUs to an FPU engine of the feature deployment service. In some embodiments, the FPU engine may be hosted by another service of the provider network. At block508, the feature deployment service also deploys the selected FPUs to the selected target edge devices or selected groups of target edge devices (e.g., to an edge FPU engine of the selected device(s) or selected group(s) of devices). In embodiments, the deployment of the FPUs to the FPU engine of the provider network and the edge FPU engine of the edge devices may occur concurrently and/or at approximately the same time. In embodiments, this allows for new data processing features to be available at both the cloud and the edge devices (e.g., without a substantial lag in availability at the edge devices, as with traditional techniques). At any point in time, the feature deployment service may deploy a new FPU or another group of FPUs to the FPU engine and/or to any number of target edge devices (e.g., to any number of edge FPU engines of respective target edge devices). In embodiments, the new FPU may be used with previously deployed FPU(s) and/or any number of physical assets of the client. 
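A compact sketch of blocks 502 through 508 follows; the class and method names are hypothetical and only mirror the sequence described (upload FPUs, receive a deployment request naming FPUs and target edge devices, deploy to the cloud FPU engine and to each selected edge FPU engine).

```python
# A minimal sketch of blocks 502-508, under assumed names: FPUs are uploaded to
# the service, then a deployment request places them at the cloud FPU engine and
# at each selected edge FPU engine.
class FeatureDeploymentService:
    def __init__(self, edge_devices):
        self.library = {}                       # block 502: FPUs available for deployment
        self.cloud_engine = []                  # FPUs deployed at the provider network
        self.edge_engines = {d: [] for d in edge_devices}

    def upload_fpu(self, name, bundle):
        self.library[name] = bundle

    def deploy(self, fpu_names, target_devices):
        """Blocks 504-508: deploy the selected FPUs to the cloud engine and target edge devices."""
        for name in fpu_names:
            bundle = self.library[name]
            self.cloud_engine.append((name, bundle))             # block 506
            for device in target_devices:                        # block 508
                self.edge_engines[device].append((name, bundle))

service = FeatureDeploymentService(edge_devices=["edge-1", "edge-2"])
service.upload_fpu("oee_metric", {"model": "oee_v1", "logic": "compute_oee"})
service.deploy(["oee_metric"], target_devices=["edge-2"])
print(len(service.cloud_engine), [n for n, _ in service.edge_engines["edge-2"]])
# 1 ['oee_metric']
```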
In embodiments, output data (e.g., model result(s) and/or compute logic result(s)) from one or more previously deployed FPUs and/or physical assets may be provided as input data for processing by the new FPU (e.g., by a model(s) and/or compute logic) and/or output data from the new FPU (e.g., model result(s) and/or compute logic result(s)) may be provided as input data for processing by one or more previously deployed FPUs (e.g., to a model(s) and/or compute logic) and/or to controllers of physical assets. In some embodiments, output data (e.g., model result(s) and/or compute logic result(s)) from any number of the FPUs may be provided to the provider network for further processing (e.g., to the feature deployment service). In various embodiments, the edge device (e.g., the edge FPU engine) may receive an updated FPU that replaces a previously deployed FPU (e.g., previous version of the FPU with a previous version of a model and/or compute logic). The updated FPU may include an updated version of the model to implement a data processing feature and/or an updated version of the compute logic to implement a data processing feature. The edge FPU engine may replace the model with the updated version of the model and/or replace the compute logic with the updated version of the compute logic. This may allow a client/customer of the feature deployment service to update individual data processing features or use new data processing features at any point in time, without the need to wait for an updated FPU engine and/or other updated software to be developed/released by the provider network. In embodiments, a new FPU or updated version of an FPU (e.g., updated model and/or compute logic) may be developed by the client/customer, uploaded to the feature deployment service (e.g., as part of the new or updated FPU), and then selected by the user or another user of the client (or a user of another clients) for deployment as described herein. In some embodiments, the new or updated FPU is automatically deployed (e.g., to one or more edge devices) in response to the feature deployment service determining that the new or updated FPU is available for deployment (e.g., in response to receiving the updated FPU from the client or other source). This may allow new FPUs to be quickly deployed or allow FPUs to be quickly updated, resulting in a much more granular control and flexibility for a client compared to traditional techniques. FIG.6is a high-level flowchart illustrating various methods and techniques to implement feature processing units at a provider network and edge devices, according to some embodiments. At block602, the feature deployment service implements an FPU at an FPU engine of a feature deployment service (e.g., after the FPU is deployed to the FPU engine). In embodiments, the FPU engine executes a feature-independent portion of the model and/or compute logic using an abstraction API and a feature-specific portion of the model and/or compute logic using the abstraction API. At block604, the feature deployment service deploys the FPU to an edge device of the client. The edge FPU engine executes the feature-independent portion of the model and/or compute logic using the abstraction API and the feature-specific portion of the model and/or compute logic using the abstraction API (e.g., after the edge device deploys the FPU to the edge FPU engine). 
In embodiments, the abstraction API of the edge FPU engine and the abstraction API of the FPU engine of the feature deployment service conform to a common API specification (or common FPU engine specification). In some embodiments, the abstraction API of any number of other edge FPU engines of other edge devices of the client network may also conform to the common API specification (or the common FPU engine specification). In some embodiments, at least some of the feature-independent portion of the model and/or compute logic may bypass the abstraction API and directly use the FPU engine402and/or the framework406, since the feature-independent portion of the model and/or compute logic may be common/standard functions/code (e.g., generic code) that are used across every engine. This is one way that performance/processing speed may be increased even further. FIG.7is a high-level flowchart illustrating various methods and techniques to deploy and implement a feature processing unit at an edge device, according to some embodiments. At block702, the edge device receives, from the feature deployment service, an FPU. The FPU may include a model and/or compute logic to implement a data processing feature (in embodiments, the FPU may also include metadata). At block704, the edge device deploys the FPU to an edge FPU engine of the edge device. At block706, the FPU implements the FPU. In embodiments, the implementation of the FPU may include the FPU engine executing a feature-independent portion of the model and/or compute logic using an abstraction API and executing a feature-specific portion of the model and/or compute logic using the abstraction API. In embodiments, the abstraction API of the FPU engine, the abstraction API of FPU engines of other edges devices of the client, and an abstraction API of a remote FPU engine of the feature deployment service conform to a common API specification. The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as inFIG.8) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may implement the functionality described herein (e.g., the functionality of the feature deployment service, other services, edge devices, models, compute logic, and any other components/devices that implement the techniques described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Embodiments to implement deploying feature processing units to implement data processing features at a provider network and edge devices as described herein may be executed on one or more computer systems, which may interact with various other systems or devices. One such computer system is illustrated byFIG.8. 
In different embodiments, computer system800may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing node or compute node, computing device, compute device, or electronic device. In the illustrated embodiment, computer system800includes one or more processors810coupled to a system memory820via an input/output (I/O) interface830. Computer system800further includes a network interface840coupled to I/O interface830, and one or more input/output devices850, such as cursor control device860, keyboard870, and display(s)880. Display(s) may include standard computer monitor(s) and/or other display systems, technologies or devices, in one embodiment. In some embodiments, it is contemplated that embodiments may be implemented using a single instance of computer system800, while in other embodiments multiple such systems, or multiple nodes making up computer system800, may host different portions or instances of embodiments. For example, in one embodiment some elements may be implemented via one or more nodes of computer system800that are distinct from those nodes implementing other elements. In various embodiments, computer system800may be a uniprocessor system including one processor810, or a multiprocessor system including several processors810(e.g., two, four, eight, or another suitable number). Processors810may be any suitable processor capable of executing instructions, in one embodiment. For example, in various embodiments, processors810may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors810may commonly, but not necessarily, implement the same ISA. In some embodiments, at least one processor810may be a graphics processing unit. A graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device, in one embodiment. Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms. For example, a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU). In various embodiments, graphics rendering may, at least in part, be implemented by program instructions for execution on one of, or parallel execution on two or more of, such GPUs. The GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s), in one embodiment. System memory820may store program instructions825and/or data accessible by processor810, in one embodiment. 
In various embodiments, system memory820may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those described above (e.g., the feature deployment service, other services, edge devices, models, compute logic, and any other components/devices, etc.) are shown stored within system memory820as program instructions825and data storage835, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory820or computer system800. A computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system800via I/O interface830. Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface840, in one embodiment. In one embodiment, I/O interface830may be coordinate I/O traffic between processor810, system memory820, and any peripheral devices in the device, including network interface840or other peripheral interfaces, such as input/output devices850. In some embodiments, I/O interface830may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory820) into a format suitable for use by another component (e.g., processor810). In some embodiments, I/O interface830may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface830may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the functionality of I/O interface830, such as an interface to system memory820, may be incorporated directly into processor810. Network interface840may allow data to be exchanged between computer system800and other devices attached to a network, such as other computer systems, or between nodes of computer system800, in one embodiment. In various embodiments, network interface840may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. Input/output devices850may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer system800, in one embodiment. Multiple input/output devices850may be present in computer system800or may be distributed on various nodes of computer system800, in one embodiment. 
In some embodiments, similar input/output devices may be separate from computer system800and may interact with one or more nodes of computer system800through a wired or wireless connection, such as over network interface840. As shown inFIG.8, memory820may include program instructions825that implement the various embodiments of the systems as described herein, and data store835, comprising various data accessible by program instructions825, in one embodiment. In one embodiment, program instructions825may include software elements of embodiments as described herein and as illustrated in the Figures. Data storage835may include data that may be used in embodiments (e.g., models, functions, compute logic, metadata, etc.). In other embodiments, other or different software elements and data may be included. Those skilled in the art will appreciate that computer system800is merely illustrative and is not intended to limit the scope of the embodiments as described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including a computer, personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, network device, internet appliance, PDA, wireless phones, pagers, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device. Computer system800may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available. Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-readable medium separate from computer system800may be transmitted to computer system800via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. This computer readable storage medium may be non-transitory. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations. 
Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g., SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent example embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
11861357
Throughout the drawings, identical reference numbers may designate similar, but not necessarily identical, elements. An index number “N” appended to some of the reference numerals may be understood to merely denote plurality and may not necessarily represent the same quantity for each reference numeral having such an index number “N”. Additionally, use herein of a reference numeral without an index number, where such reference numeral is referred to elsewhere with an index number, may be a general reference to the corresponding plural elements, collectively or individually. In another example, an index number of “I,” “M,” etc. can be used in place of index number N. DETAILED DESCRIPTION Computing devices, such as servers, in a datacenter or other location may have one or multiple operating systems installed. Installing or updating an operating system (OS) may involve logging into an OS of a computing system, using an account credential that grants administrator privileges. The administrator, often referred to as the root user, or superuser, is granted privileges on the computer system that may exceed those that are available to a general user account. For example, execution of certain programs on the system may be limited to users that have administrator privileges. Included in those privileges may be the authority to perform updates on the firmware and/or drivers or to make configuration changes on the computer system. For purposes of this description, administrator credentials or operating system administrator credentials, refers to the credentials needed to log into a computer system as an administrator, and thus be granted administrator privileges. A user logged in as an administrator may then download any software used in updating the firmware and/or drivers. This downloaded data may be referred to as an update package, as it may include the data used to perform the firmware and/or driver update. This data may include an executable program that actually performs the update. The update package may be downloaded to the server computer over a network. The administrator user may then update the desired components or perform configuration updates on the computer. For the remainder of this disclosure, the term update or updating refers to updating of firmware and/or drivers, as well as altering the configuration of a server computer. Several issues may arise when using the process described above. One issue that arises is that often times the updating of the system may interfere with normal operation of the system. For example, performing an update may reduce the responsiveness of the system. In some cases, the update process may require that the system be restarted. These operations may impact the workloads that are being processed by the server. For example, in the cases where the server is restarted, that server is no longer available during the restart period. As used herein, a “live server” is a computing system providing a service where the service is being provided. As used herein, a “maintenance window” is the time that a live server is taken down to perform maintenance activity (e.g., an upgrade of an OS, upgrade of a drivers, upgrade of firmware on a system, etc.). When installing an OS, the OS upgrade package may not have an image of drivers customized for a particular type of machine (e.g., the processor, the bus adapter cards, etc.). A separate “software component package” may be separately downloaded to the computing device to be upgraded. 
However, as noted, the traditional approach is to update the OS and then download the drivers needed for that particular OS. This can lead to additional downtime and be implemented at a second maintenance window. Further, in some examples, a production network used to provide services may be used for downloading the OS upgrade package and/or the "software component package." This can be the same network that is used to provide services by a live server. As such, downloading via this network can take some of the bandwidth that the server uses to process server workload. Moreover, the size of every permutation of each computing device hardware set and supported operating systems can be large. Accordingly, a customized subset of a larger supported software component set may be desirable. Various embodiments described herein relate to a mechanism where an update platform can provide a software component package customized for a computing device to that computing device via a management network using a baseboard management controller of the computing device. The software component package can be sent prior to installation of a new operating system and even before the new operating system is provided to the computing device. An operating system management platform can be used to manage operating systems on a number of computing devices including the computing device on which a new OS is to be installed. The OS management platform can determine that a new OS should be installed on a particular computing device. This can be based on a user input, a predetermined plan or schedule, or a workload management algorithm. Using a workload management algorithm can be useful, for example, when there are changes in usage of the datacenter that the computing devices are contained within. The OS management platform can send information to an update platform that has access to a management network that baseboard management controllers (BMCs) associated with the computing devices have access to. The update platform can learn from the OS management platform that the new OS install is scheduled or is going to be scheduled. The OS management platform can provide details about the OS. The details may include, for example, a name and/or version number of the OS and the selected computing device. The update platform can learn from the computing device what hardware components are present on the computing device via the management network and BMC. The update platform can then select software components for the computing device based on the OS to be installed and the hardware present on the particular computing device. In some examples, the hardware components can be determined after the determination that the OS is going to be installed and in response to the determination to install the OS. In other examples, the hardware components can be determined by the update platform querying the BMC of the particular computing device prior to the determination to install the OS. The update platform can include a repository that includes a collection of software components that may be used with the computing devices and OSes. In some examples, the update platform can include a data structure that associates each component with one or multiple hardware devices and/or one or multiple OSes. The update platform can take the information received about the OS that is to be installed on the computing device and filter the software components to a subset of the software components that are associated with the OS.
The update platform can also filter the software components based on the hardware components present on the computing device. Similarly, the update platform can filter based on both to determine a subset of the software components applicable to both the OS to be installed and the computing device. The subset of software components can then be sent to the BMC of the computing device. In one example, the update platform can push the subset of software components to the BMC. The BMC can store the subset of software components in a local repository. That storage can be, for example, on a flash storage incorporated within the BMC or accessible via a bus. The local repository may also be accessible to an operating system once the operating system is installed. After the software component subset has been transferred to the repository, the OS can be sent and installed. Various approaches can be used for the OS install. For example, the OS management platform may cause an OS install package to be downloaded through a production network and installed, the OS install package may be downloaded to the computing device via the BMC as with the software component subset, etc. The OS install package can be executed to install the OS image on the computing device. The OS can be restarted, if necessary, to complete installation. Further, after the installation is complete, the subset of the software components can be installed. In one example, the repository in which the subset of software components resides can be exposed by the BMC to the OS. The OS thus has access to the subset of software components. In one example, a script can be executed to install the software components. The script can be part of the subset of software components. In some examples, the OS image that is installed can be pre-configured to be able to access the repository, for example, using a known driver for the ecosystem. In some examples, one or more of the software components are executable to install themselves. In other examples, scripts or other software can be executed by the OS to install one or more of the software components. Examples of software components can include drivers. In some examples, firmware can also be included in the software components. In some examples, the BMC or other firmware executable on the computing device can install the update. Because the software component subset is already present, there is no need to separately download the software components. Further, because the applicable subset is already determined for the new OS before the new OS is even installed, bandwidth is not taken to send all of the collection of components to the computing device. As used herein, an "operating system install" is a determination to install a new or upgraded operating system onto a computing device. As used herein, an "operating system install package" is a file or set of files that can be used to install an operating system onto a computing device. An operating system install can occur on a computing device that already has an OS present or can occur on a computing device without an installed high level operating system. As used herein, a "software component" is a set of instructions, code, etc. that can be installed to be executed by a processor on a computing device.
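By way of illustration only, the two-stage filtering described above (by the OS to be installed and by the hardware actually present) might be sketched as follows. The component records, field names, and example data are assumptions made here for clarity; this description does not define such structures.

```python
# Illustrative sketch only: component records, field names, and the example
# data below are assumptions, not structures defined by the update platform.

def select_component_subset(collection, target_os, hardware_present):
    """Filter a collection of components down to those relevant to the OS to
    be installed and to the hardware present on the computing device."""
    present = set(hardware_present)          # device IDs reported via the BMC
    subset = []
    for component in collection:
        if target_os not in component["supported_os"]:
            continue                         # not applicable to the OS to install
        if not present & set(component["supported_devices"]):
            continue                         # none of its hardware is present
        subset.append(component)
    return subset


# Example usage with made-up entries from a collection of components:
collection = [
    {"name": "nic-driver-a", "supported_os": ["os-9"], "supported_devices": ["nic-1"]},
    {"name": "raid-fw-b", "supported_os": ["os-9", "os-10"], "supported_devices": ["raid-2"]},
    {"name": "gpu-driver-c", "supported_os": ["os-10"], "supported_devices": ["gpu-3"]},
]
subset = select_component_subset(collection, "os-9", ["nic-1", "raid-2"])
# subset holds nic-driver-a and raid-fw-b; gpu-driver-c is filtered out.
```

The point of the sketch is only that both filters are applied before anything is transferred, which is why the package pushed to the BMC can stay small relative to the full collection.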
Examples of software components include firmware that can be executed by microcontrollers or a central processing unit, drivers to communicate between an operating system and hardware of a computing device, and other software that can execute such as middleware, management software, etc. As used herein, an "Operating System" is a set of software of a computing device that manages computer hardware and software resources and provides common services for computer programs such as scheduling tasks, executing applications, controlling peripherals, etc. Examples of OSes include LINUX, WINDOWS, hypervisors, etc. A hypervisor or virtual machine monitor is computer software, firmware and/or hardware that creates and runs virtual machines. A hypervisor can present a virtual operating platform to guest virtual machines. FIG.1is a block diagram of a system including an update platform that is capable to provide a customized component install set for an operating system to a baseboard management controller of a computing device prior to installation of the operating system, according to an example. The system100can include an update platform102that communicates with computing devices108a-108nvia a management network109. The update platform102can have access to a collection of components120. Moreover, the update platform can be communicatively coupled to an OS management platform106. In certain examples, the update platform102, OS management platform106, and computing devices108can be implemented using one or more computing devices, such as servers, client computers, desktop computers, mobile computers, etc. The computing devices can be implemented via a processing element, memory, and/or other components. In some examples, platforms may be run in one or more virtual machines executing on a computing device. As noted above, a user or administrator of a datacenter may wish to install or update operating systems on one or more computing devices108a-108n. While doing this, a set of software components may be used on the respective computing device108. The update platform102can have access to a collection of components120. In some examples, the collection of components120is stored as part of the update platform102, on the same computing device, or is accessible via a storage to the update platform102. The collection of components120can include firmware and/or settings for one or more hardware components present on the computing devices. Further, the collection of components120can include drivers that can be installed on an operating system installed on the devices. In some examples, other software, such as middleware, can be included in the collection of components120. The collection of components120can include a superset of supported software components for multiple of the computing devices. This can be, for example, a package of components that would be included in a service pack from a manufacturer of the computing devices. As such, multiple hardware sets can be supported. As used herein, a "hardware set" includes physical hardware (e.g., peripheral devices, microcontrollers, processors, memory, etc.) that may be included on a device. In some examples, a supported hardware set is a set for which a software component is available to be installed via the collection of components120. Further, some components can be specific for particular operating systems (e.g., drivers for operating a particular hardware component for a particular OS).
Hardware devices may include input output devices, peripheral devices connected via a bus such has a peripheral component interconnect (PCI), controllers to control busses, specific purpose hardware, etc. Multiple operating systems can include different versions of a same operating system type, operating systems of similar manufacturers, but with different configurations, operating systems from different manufacturers, or the like. As noted above, the OS may be a hypervisor. In some examples, each software component can be associated with information regarding the particular software components relevance to a particular operating system and/or hardware component. The information can be in the form of information in a table or linked list, one or multiple tags, metadata, other data structure used to keep the information, etc. The collection of components120can be a large amount of information and it could take up much bandwidth on the management network109or other network to transfer the entire collection to each computing device108. Each computing device108can include a BMC110a-110n. In some examples, the BMC110can be used to implement services for the computing device108. BMC110can be implemented using a separate processor from the processor114a-114nthat is used to execute a high level operating system (e.g., OS116a-116n). BMCs can provide so-called “lights-out” functionality for computing devices. The lights out functionality may allow a user, such as a systems administrator, to perform management operations on the computing device108even if an operating system is not installed or not functional on the computing device. Moreover, in one example, the BMC110can run on auxiliary power, thus the computing device108need not be powered on to an on state where control of the computing device108is handed over to an operating system after boot. As examples, the BMC110may provide so-called “out-of-band” services, such as remote console access, remote reboot and power management functionality, monitoring health of the system, access to system logs, and the like. As used herein, a BMC110has management capabilities for sub-systems of a computing device108, and is separate from a processor or processing element that executes a main operating system of a computing device (e.g., a server or set of servers). The BMC110may comprise an interface, such as a network interface, and/or serial interface that an administrator can use to remotely communicate with the BMC110. As used herein, an “out-of-band” service is a service provided by the BMC110via a dedicated management channel (e.g., the network interface or serial interface) and is available whether the computing device108is in powered on state. In some examples, a BMC110may be included as part of an enclosure. In other examples, a BMC110may be included in one or more of the servers (e.g., as part of the management subsystem of the server) or connected via an interface (e.g., a peripheral interface). In some examples, sensors associated with the BMC110can measure internal physical variables such as humidity, temperature, power supply voltage, communications parameters, fan speeds, operating system functions, or the like. The BMC110may also be capable to reboot or power cycle the device. As noted, the BMC110allows for remote management of the device, as such, notifications can be made to a centralized station such as the update platform102or a management platform using the BMC110and passwords or other user entry can be implemented via the BMC110. 
The BMC110a-110ncan have a storage112a-112ncoupled to the BMC110a-110n. The storage112can be, for example, on a flash storage incorporated within the BMC110or accessible via a bus. The storage112can act as a local repository that may also be accessible to an operating system once the operating system is installed. Further, the computing device may include a plurality of hardware devices or components. Examples of hardware components include controllers, add on cards, peripherals, field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), application specific integrated circuits (ASICs), system on chips, etc. In some examples, one or more of the hardware components can be inventoried or communicatively coupled to the BMC110, either directly or via intermediary components. For example, the BMC110may request that another firmware (e.g., a basic input output system) take an inventory of one or more components if the BMC110is incapable of direct communications. The BMC110can provide the hardware devices that are present on a particular computing device108to the update platform102. This can occur on request, periodically, etc. The OS management platform106can determine to cause installation of an operating system on a computing device108. The OS management platform106can make the determination in various ways. One example way is to determine that a system with a particular OS is needed to provide a particular service based on a load balancing algorithm for the datacenter. Another example is a user configuration. A further example is based on a request. The OS management platform106can determine how the OS management platform106wants the computing device108to be configured and create a request for the update platform102. The update platform102is to expose an application programming interface (API) to the OS management platform106. In some examples, the OS management platform106can be implemented as a virtual machine management system. In some examples, the API can be configured to work with more than one OS management platform and can be management system agnostic. The request can include an identification to use the collection of components120, which operating system to install, etc. The request can also identify the particular computing device108. The request can be considered information associated with an operating system install. In response to receiving the request, the update platform102determines that an operating system install is to occur on the computing device108. The update platform102can take the information and determine a set of the supported OS, drivers, and firmware. In one example, the update platform102uses the information to select a subset of the collection of components120that are relevant for the identified OS. In one example, the potential components can be filtered by the identified OS. Further, as noted above, the BMC110can provide hardware device information about the hardware devices present on the identified computing device108. The collection of components120can further be filtered based on hardware devices present to create a subset of the collection of components. The subset can then be sent to the BMC110to be stored on its storage112. In one example, the update platform102can push the subset down to the BMC110. The subset can be in the form of a package or individual files. The package and/or individual files can be self-executable or executed via a different method.
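For illustration, the two inputs the update platform102works from, the request from the OS management platform106and the hardware report from the BMC110, might look roughly like the following. The field names and values are hypothetical; this description specifies the content of the request, not a wire format.

```python
# Hypothetical shapes for the two pieces of information the update platform
# combines; the description defines their content, not a concrete format.

# Request received over the API exposed to the OS management platform:
install_request = {
    "computing_device": "device-108a",            # identifies the target device
    "operating_system": {"name": "ExampleOS", "version": "9.2"},
    "component_collection": "collection-120",     # which collection to draw from
}

# Hardware inventory provided by the device's BMC over the management network:
bmc_inventory = {
    "computing_device": "device-108a",
    "hardware_devices": ["nic-1", "raid-2", "cpld-3"],
}

# The update platform treats the request as operating system install
# information, filters the collection by the identified OS and the reported
# hardware (e.g., with a filter like the one sketched earlier), and pushes
# the resulting subset to the BMC's storage.
```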
In some examples, the update platform102can push the subset to the storage112prior to the operating system to be installed is installed using an operating system install package. The update platform102can push the subset to the storage112while the computing device is acting as a live server. The live server can be implemented using an old operating system prior to install of the new OS. In another example, the server may not be executing an operating system. The pushing of the subset to the BMC110can be considered staging. The operating system install package can also be sent to the computing device108. In one example, the update platform102or a management platform sends the operating system install package to the computing device via the BMC110. In another example, the operating system install package can be sent via other means, for example, via a production network or a secondary management network that the OS management platform106can directly communicate with one or more of the computing devices108with. In some examples, the computing device108receives the subset of components before receiving the OS install package. The processor114can execute instruction to install the operating system install package. In some examples, the OS can access the storage112after being installed. In some examples, the OS is installed with a driver or other software that is capable of accessing the storage112. Implementation can be performed using an infrastructure automation tool such as Chef. Chef is a tool designed for automation at scale using Ruby. In some examples, the OS can be configured to install the subset on the storage112. The install could be as part of a periodic routine to check if anything new is on the storage and, if so, check credentials and install if appropriate. The install could also be as part of an install procedure for the OS. In some examples, for a live server, the install of the operating system and the subset can be within a single maintenance window. In some examples, all of the subset is not installed. This can happen, for example, if the associated driver or firmware has more than one component. In some examples, the BMC110can install or cause updates for firmware for hardware devices separate from the installation of drivers via the OS. In some examples, some of the associated software components are installed via the OS (e.g., middleware, drivers, etc.). As noted, these can be selected for the particular OS installed as well as the particular hardware devices identified by the BMC110to be on the computing device. The management network109can be a communication network that can use wired communications, wireless communications, or combinations thereof. Further, the communication network can include multiple sub communication networks such as data networks, wireless networks, telephony networks, etc. Such networks can include, for example, a public data network such as the Internet, local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANS), cable networks, fiber optic networks, combinations thereof, or the like. In certain examples, wireless networks may include cellular networks, satellite communications, wireless LANs, etc. Further, the communication network can be in the form of a direct network link between devices. Various communications structures and infrastructure can be utilized to implement the communication network(s). 
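The periodic install routine mentioned above could be sketched roughly as below, assuming the BMC-backed repository is exposed to the installed OS as an ordinary directory. The mount point, package naming, credential check, and installer call are all placeholders rather than anything prescribed by this description.

```python
# Rough sketch of a periodic routine that installs newly staged components.
# The repository path, *.pkg naming, and helper functions are assumptions.

import pathlib
import time

REPO = pathlib.Path("/mnt/bmc-repo")   # local repository exposed by the BMC
applied = set()                        # components already installed

def credentials_ok(package: pathlib.Path) -> bool:
    """Placeholder for checking credentials/signatures before installing."""
    return True

def install(package: pathlib.Path) -> None:
    """Placeholder for running the component's own install script or binary."""
    print(f"installing {package.name}")

def check_repository_once() -> None:
    if not REPO.is_dir():
        return                          # repository not exposed (yet)
    for package in sorted(REPO.glob("*.pkg")):
        if package.name in applied:
            continue                    # nothing new for this entry
        if credentials_ok(package):     # check credentials, then install
            install(package)
            applied.add(package.name)

if __name__ == "__main__":
    while True:                         # periodic check for newly staged components
        check_repository_once()
        time.sleep(300)
```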
By way of example, computing devices communicate with each other and other components with access to the communication network via a communication protocol or multiple protocols. A protocol can be a set of rules that defines how nodes of the communication network interact with other nodes. Further, communications between network nodes can be implemented by exchanging discrete packets of data or sending messages. Packets can include header information associated with a protocol (e.g., information on the location of the network node(s) to contact) as well as payload information. In some examples, the BMCs110can communicate with each other and/or the update platform102via the management network109. In these examples, the BMCs110can be coupled to network interface cards (NICs) that can connect to the management network109. For various purposes, for example, security, the management network109can be isolated from a production network that the OSes116installed on the computing devices108provide services through and/or can be accessed directly through. The production network can use a separate communication network. In some examples, isolation can be implemented via virtual networks. In other examples, isolation can be implemented using separate hardware (e.g., network switches). In one example, the OS management platform can perform a precheck on the OS and software component set to be installed. As noted, the OS management platform106can create a request for a particular OS to be installed on an identified computing device to the update platform102. The update platform102can read the metadata for the collection of components120and determine a supported OS, associated drivers, and associated firmware. The set can be identified via an identifier. The identifier can be used to identify a premade or predetermined subset package that can be used. The information can be sent back to the OS management platform106. The OS management platform106can then request that the update platform call the particular subset package using the identifier to set this as the package to be used with the particular computing device to install the new OS on. A precheck can then be called by the OS management platform106for the identifier. In one example, during the precheck stage, the update platform102reads a firmware inventory from a BMC110associated with the computing device108. In some examples, if there is a current OS116on the computing device108, some of the information is received via an agent on the OS116. Then, the update platform102identifies and/or creates the install set. The update is then staged by pushing the install set to the storage112via the BMC110. Once the new OS is installed, the OS management platform106can call a routine to apply the install set. This can be, for example, via a setting in the new OS that is installed.
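The precheck-and-stage exchange described above can be summarized in the following rough ordering. The objects and their methods are hypothetical stand-ins for interactions this description attributes to the platforms and the BMC; no such API is defined here.

```python
# Rough ordering of the precheck/staging flow; all helpers are placeholders
# for interactions described in prose.

def precheck_and_stage(os_mgmt_platform, update_platform, bmc, os_name, device_id):
    # The update platform reads the collection metadata and identifies a
    # premade subset package for the requested OS, referenced by an identifier.
    set_id = update_platform.identify_install_set(os_name)

    # The identifier is sent back, and the OS management platform confirms it
    # as the package to use with this particular computing device.
    os_mgmt_platform.confirm_install_set(device_id, set_id)

    # Precheck: the update platform reads the firmware inventory via the BMC
    # (and, if a current OS is running, possibly via an agent on that OS).
    inventory = bmc.read_firmware_inventory()

    # Staging: the install set is identified/created for this inventory and
    # pushed to the storage coupled to the BMC.
    update_platform.stage_install_set(set_id, inventory, bmc)
    return set_id

# After the new OS is installed, the OS management platform calls a routine
# (for example, via a setting in the new OS) to apply the staged install set.
```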
The update platform102includes, for example, a processing element210and a machine-readable storage medium220including instructions222,224,226for providing a customized set of software components to a computing device via a BMC. In some examples, the update platform102may be implemented, for example, as a virtual machine executing on a computing device. In some examples, the update platform102includes the software described to implement the features discussed herein in addition to hardware used for execution of the software. Processing element210may be one or multiple central processing units (CPUs), one or multiple semiconductor-based microprocessors, one or multiple graphics processing units (GPUs), other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium220, or combinations thereof. The processing element210can be a physical device. Moreover, in one example, the processing element210may include multiple cores on a chip, multiple cores across multiple chips, multiple cores across multiple devices (e.g., if the computing device200includes multiple node devices), or combinations thereof. Processing element210may fetch, decode, and execute instructions222,224,226to implement method300. As an alternative or in addition to retrieving and executing instructions, processing element210may include at least one integrated circuit (IC), other control logic, other electronic circuits, or combinations thereof that include a number of electronic components for performing the functionality of instructions222,224,226. Machine-readable storage medium220may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, machine-readable storage medium may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a Compact Disc Read Only Memory (CD-ROM), flash memory, and the like. As such, the machine-readable storage medium can be non-transitory. As described in detail herein, machine-readable storage medium220may be encoded with a series of executable instructions for selecting a subset of software components for a computing device awaiting an operating system install and sending the subset to the computing device prior to the operating system install. At302, the processing element210can execute update instructions222to determine that an OS update is to occur on a computing device. As noted above, the update platform can expose an API to an OS management platform (e.g., a virtual machine management system). The OS management platform can use the API to inform the update platform102that an OS install is to occur on the computing device by providing information about the request. The computing device can be identified along with the OS in the information. In some examples, an address for a BMC of the computing device can be mapped to an identifier of the computing device. The update platform102may maintain such a mapping (e.g., when a new computing device is added to a management network to be supported by the update platform, information about that device can be collected). At304, the information that is collected can be used by the update platform to select a subset of software components for the computing device by executing subset instructions224. A set of the software components can be filtered using the OS to be installed on the computing device.
Further, the software components can be filtered using the hardware present on the computing device. As noted above, the BMC can provide information about what hardware is present on the computing device to the update platform102. The software components can include, for example, firmware, drivers, etc. At306, communications instructions226can be executed to send the subset of software components that were selected for the computing device to a BMC of the computing device via a management network. An input/output interface250(e.g., a network interface card) can be used for the transmission. In some examples, the sending can be scheduled. Further, the update platform102may have credentials to push the subset to the storage associated with the BMC. The update platform102can send the computing device the set prior to the installation of an OS install package on the computing device. FIG.4is a block diagram of a computing device with a baseboard management controller capable of storing software components to install customized for an operating system prior to installation of that operating system.FIG.5is a flowchart of a method for receiving a customized software component subset at a baseboard management controller of a computing device customized for an operating system not yet installed on the computing device. A processor430, such as a central processing unit (CPU) or a microprocessor suitable for retrieval and execution of instructions and/or electronic circuits can be configured to perform the functionality of various higher level modules described herein (e.g., operating system416). In certain scenarios, instructions and/or other information, such as operating system information, identifiers, sets and subsets of information, can be included in memory432or other memory. Input/output interfaces434may additionally be provided by the computing device400. For example, input devices, such as a keyboard, a sensor, a touch interface, a mouse, a microphone, etc, can be utilized to receive input from an environment surrounding the computing device. Further, an output device, such as a display, can be utilized to present information to users. Examples of output devices include speakers, display devices, amplifiers, etc. Moreover, in certain examples, some components can be utilized to implement functionality of other components described herein. Input/output devices such as communication devices like network communication devices or wireless devices can also be considered devices capable of using the input/output interfaces434. In some examples, the BMC410can include an input/output interface such as a NIC. One or more of the hardware devices included in the computing device400can include electronic circuitry for implementing the functionality described herein. In addition or as an alternative, some components may be implemented as a series of instructions encoded on a machine-readable storage medium of computing device400and executable by processor430. It should be noted that, in some examples, some modules are implemented as hardware devices, while other modules are implemented as executable instructions. In one example, the BMC410can run on auxiliary power, thus the computing device400need not be powered on to an on state where control of the computing device400is handed over to an operating system after boot. 
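Tying blocks302,304, and306together, the update-platform side of method300might be outlined as below. The device-to-BMC address mapping mentioned above and the transport callables are assumptions used only to make the flow concrete; they are not part of the described method.

```python
# Sketch of method 300 (blocks 302-306) on the update-platform side.  The
# mapping, field names, and callables are illustrative assumptions.

BMC_ADDRESSES = {"device-108a": "10.0.0.11"}   # maintained by the update platform

def handle_os_install_notice(request, collection, read_inventory, push_subset):
    # 302: the API call from the OS management platform indicates that an OS
    # install is to occur, identifying the device and the OS.
    device_id = request["computing_device"]
    target_os = request["operating_system"]["name"]

    # 304: select the subset, filtering by the OS and by the hardware the BMC
    # reports as present on this device.
    hardware = set(read_inventory(BMC_ADDRESSES[device_id]))
    subset = [c for c in collection
              if target_os in c["supported_os"]
              and set(c["supported_devices"]) & hardware]

    # 306: send the subset to the device's BMC over the management network,
    # before the OS install package is installed on the device.
    push_subset(BMC_ADDRESSES[device_id], subset)
    return subset
```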
As examples, the BMC410may provide so-called “out-of-band” services, such as remote console access, remote reboot and power management functionality, monitoring health of the system, access to system logs, and the like. As used herein, a BMC410has management capabilities for sub-systems of a computing device400, and is separate from a processor or processing element that executes a main operating system of a computing device (e.g., a server or set of servers). The BMC410may comprise an interface, such as a network interface, and/or serial interface that an administrator can use to remotely communicate with the BMC410. As used herein, an “out-of-band” service is a service provided by the BMC410via a dedicated management channel (e.g., the network interface or serial interface) and is available whether the computing device400is in powered on state. In some examples, a BMC410may be included as part of an enclosure. In other examples, a BMC410may be included in one or more of the servers (e.g., as part of the management subsystem of the server) or connected via an interface (e.g., a peripheral interface). In some examples, sensors associated with the BMC410can measure internal physical variables such as humidity, temperature, power supply voltage, communications parameters, fan speeds, operating system functions, or the like. The BMC410may also be capable to reboot or power cycle the device. As noted, the BMC410allows for remote management of the device, as such, notifications can be made to a centralized station such as the update platform or a management platform using the BMC410and passwords or other user entry can be implemented via the BMC410. The BMC410can have a storage414coupled to the BMC410. The storage414can be, for example, on a flash storage incorporated within the BMC410or accessible via a bus. The storage414can act as a local repository that may also be accessible to an operating system once the operating system is installed. The computing device400can include hardware components412. Examples of hardware components include microcontrollers, controller hubs, a southbridge, a northbridge, peripheral devices coupled to one or more bus, daughter boards, graphics cards, ASICs, etc. One or more of the hardware devices may be associated with firmware engines. Some of these firmware engines can be updated with firmware. A firmware engine can be implemented using instructions executable by a processor and/or logic. In some examples, the firmware engine can be implemented as platform firmware. Platform firmware may include an interface such as a basic input/output system (BIOS) or unified extensible firmware interface (UEFI) to allow it to be interfaced with. The platform firmware can be located at an address space where the processor430(e.g., CPU) for the computing device400boots. In some examples, the platform firmware may be responsible for a power on self-test for the computing device400. In other examples, the platform firmware can be responsible for the boot process and what, if any, operating system to load onto the computing device400. Further, the platform firmware may be capable to initialize various components of the computing device400such as peripherals, memory devices432, memory controller settings, storage controller settings, bus speeds, video card information, etc. In some examples, platform firmware can also be capable to perform various low level functionality while the computing device400executes. 
Moreover, in some examples, platform firmware may be capable to communicate with a higher level operating system executing on a CPU, for example via an advanced configuration and power interface (ACPI). In some examples, BMC410can take an inventory of the hardware components412present on the computing device400as explained in detail above. The inventory can be sent to the update platform. The update platform can be implemented to use the inventory and an operating system install information set to determine a subset of software components to send to the computing device as described in method300. At502, the BMC410can receive the software component subset from the update platform. As noted above, in one example, the update platform can push the information down to the BMC410. As used herein, to push information means that the update platform initiates the transaction to provide the BMC410the software component subset (e.g., in response to determining that the operating system is to be installed on the computing device). In some examples, the software component subset is received while the computing device is acting as a live server. This can be accomplished without affecting the live server because the BMC410is separate subsystem from an OS416of the live server executing on processor430. At504, the BMC410can cause the received software component subset to be stored on a storage414. The BMC410can receive information and then store the information in the storage414directly or store in a buffer memory and then store the information in the storage414. As noted above, the computing device400can be configured such that an OS416with proper configuration (e.g., drivers) can access the storage414. At506, the computing device400can receive an operating system update package associated with the operating system install. In one example, the OS update package can be received after the subset. In some examples, the operating system update package is received via a network separate than a management network associated with the BMC. In other examples, the OS update package is received via the management network. The OS update package can be stored in a location accessible to an instruction set executed by the processor430, for example, a location that is capable to boot the OS update package. In another example, the OS update package can be executable by an already running OS416on the computing device400, which can facilitate install of the new OS. At508, the processor430can install the new OS using the OS update package. As noted, the execution can be initiated via an OS currently installed on the computing device400or via a startup routine. Further, in some examples, the update can be scheduled. In one example, while an existing OS416is acting as a live server, an OS management platform106can schedule a maintenance window where virtual machines and/or processes running on the computing device are transferred off of the computing device or stopped and additional virtual machines and/or processes are not started. Once this is complete, the computing device can be brought down for the installation of the new OS. At510, the software component subset can be installed. In some examples, the software components can include drivers. These drivers can be installed after the new OS is installed. In some examples, because there is no need to separately download the subset, the installation of the drivers in the subset can occur in the same maintenance window as the installation of the OS. 
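As a rough outline only, the receive, store, and install sequence of method500described next could be expressed as follows. The objects and method names are illustrative stand-ins for the blocks discussed below, not an API defined by this description.

```python
# Outline of method 500 from the computing-device side; all objects and
# method names are placeholders for the described blocks.

def method_500(bmc, server, os_mgmt_platform):
    # 502: receive the software component subset pushed by the update
    # platform, possibly while the server is still acting as a live server.
    subset = bmc.receive_pushed_subset()

    # 504: store the subset on the storage coupled to the BMC (the local
    # repository later exposed to the new OS).
    bmc.store_in_repository(subset)

    # 506: receive the OS install package after the subset has been staged;
    # it may arrive via the management network or a separate network.
    os_package = server.receive_os_install_package()

    # A maintenance window is scheduled; workloads are migrated off or stopped.
    os_mgmt_platform.open_maintenance_window(server)

    # 508: install the new OS using the install package (restarting if needed).
    server.install_os(os_package)

    # 510: install the staged subset (e.g., drivers) from the exposed
    # repository within the same maintenance window; no separate download
    # of the components is required.
    server.install_components(bmc.exposed_repository())
```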
As noted above, the subset can include drivers that are identified to work with the OS type. In another example, the BMC410can be configured to update or cause update of one or more firmware engines associated with the hardware components412that are supported by the received subset. While certain implementations have been shown and described above, various changes in form and details may be made. For example, some features that have been described in relation to one implementation and/or process can be related to other implementations. In other words, processes, features, components, and/or properties described in relation to one implementation can be useful in other implementations. Furthermore, it should be appreciated that the systems and methods described herein can include various combinations and/or sub-combinations of the components and/or features of the different implementations described. Thus, features described with reference to one or more implementations can be combined with other implementations described herein.
11861358
DETAILED DESCRIPTION Updating firmware of network interconnect devices typically requires taking each interconnect device out of service, updating the firmware and restarting each device. However, in some situations, when a first network interconnect device and then a redundant network interconnect device are restarted in quick succession, network connections may not be re-established or may fail to provide continuity of connection. If a link aggregation group (LAG) exists between a network interface card (NIC) of the server and interconnect ports of the network interconnect devices, a Link Aggregation Control Protocol (LACP) entity of the NIC may not have fully reactivated its transmission queues of the link to the updated first network interconnect device before the second network interconnect device is restarted. Thus, even though there may be at least one link available to carry traffic, the server may not be ready to use the link that just recovered, and so the entire LAG behaves as if links to both network interconnect devices have failed at the same time. Various example systems and methods described herein maintain a network connection of a computing device (e.g., a server) while firmware on a network interconnect device is updated. The network interconnect device, such as a switch or a bridge, may be in a multi-chassis LAG pair of network interconnect devices that are updated. In various example systems, at least two network interconnect devices are connected to at least one server, each server including at least two Network Interface Card (NIC) ports. For each server, one NIC port is linked to one of the network interconnect devices, and a second NIC port is linked to a second network interconnect device. The links may be configured as a redundant pair according to IEEE 802.1AX, using Link Aggregation Control Protocol (LACP) to establish and maintain the redundant connections. A similarly configured set of links connects the two network interconnect devices to a core network device. In various examples, certain updates to firmware of the network interconnect devices require the network interconnect devices to be restarted in order for the new firmware to begin operation. In these examples, the example system manages the state of the aggregated links and controls the pace and order of the restarts so as to guarantee that at least one link of each redundant connection is available at all times, maintaining the servers' access to the network. An example process uses information about the state of the LACP entity in the servers to determine when a network interconnect device can be safely restarted. By administratively removing a network interconnect device from the multi-chassis LAG pair before taking it offline, transmission queues can be allowed to drain, ensuring no loss of data. In addition, by monitoring the state of the LACP entity on the server end of the redundant links, the systems and methods ensure that the physical link that was recently taken down has fully recovered before taking down its redundant partner to receive the firmware update. Referring now to the figures,FIG.1illustrates an example system100for automated firmware updating of a complex of network interconnect devices at a server edge without loss of server connectivity. The system100includes a computing device which, in the example ofFIG.1, is a server110.
In other examples, the computing device may be any of a variety of other devices including but not limited to database appliances, storage appliances or workstations. The server110is communicatively coupled to a pair of network interconnect devices including a first network interconnect device140-1and a second network interconnect device140-2. The server110is connected to each of the first network interconnect device140-1and the second network interconnect device140-2via a single physical network media134-1and134-2(e.g., a cable), respectively. The first and second network interconnect devices140-1and140-2are each communicatively connected, e.g., via cables, to a core network device160. The system100provides a plurality of redundant multiplexed data connections between the server110and the core network device160. The example server110includes a central processing unit (CPU)112and a network interface card (NIC)126. The CPU112includes an operating system component114and a network stack component116. The CPU112also includes memory for storing software and/or firmware for implementing the operating system component114and the network stack116. In various examples, the memory may include non-volatile storage including but not limited to at least one of a read-only memory (ROM), programmable flash memory or erasable programmable ROM (EPROM). In various examples, the memory may be integrally formed with the CPU112or may be an external memory device. The network stack component116provides for packetized communication between the CPU112and the NIC126. The network stack component116may provide various layers including, for example, an application layer, a transport layer, an internet layer and/or a link layer. In the example ofFIG.1, the network stack component116is linked to a first LAG entity120-1(labeled LAGF) and a second LAG entity120-2(labeled LAGG). The network stack component116provides a first data stream to the first LAG entity120-1and a second data stream to the second LAG entity120-2. The LAG entities120each coordinate a LAG between the server110and one of the first network interconnect devices140. In the example ofFIG.1, the LAGs between the server110and the network interconnect devices140each may provide redundant data connections that include the first and second data streams provided by the network stack component116. These redundant data connections provide for resiliency in transporting the data streams. The first and second LAG entities120-1and120-2maintain first and second LAGs between the server110and the first and second network devices140-1and140-2according to the LACP protocol. LACP provides a method to control the bundling of several physical ports together to form a single logical channel. LACP allows the server110to negotiate an automatic bundling of links by sending LACP packets to the first and second network interconnect devices140-1and140-2. The first LAG entity120-1is coupled to first and second LACP driver components122-1and122-2. The first and second LACP driver components122-1and122-2communicate redundant copies of the first data stream from the first LAG entity120-1to first and second peripheral component interconnects (PCI)-express physical functions128-1and128-2(labeled PF0and PF1, respectively). The first and second PFs128-1and128-2are also referred to as NIC partitions. 
Each of the first and second PFs128-1and128-2modulates the first data stream received from the respective first and second LACP drivers122-1and122-2over a first portion of bandwidth of the first and second physical network media134-1and134-2. In addition, each of the first and second PFs128-1and128-2demodulates corresponding data stream received from the first and second network interconnect devices140-1and140-2and communicates the demodulated data stream to the respective first and second LACP drivers122-1and122-2. The second LAG entity120-2is coupled to third and fourth LACP driver components122-3and122-4. The third and fourth LACP driver components122-3and122-4communicate redundant copies of the second data stream from the second LAG entity120-2to third and fourth PFs128-3and128-4(labeled PF3and PF4, respectively). Each of the third and fourth PFs128-3and128-4modulates the second data stream received from the respective third and fourth LACP drivers122-3and122-4over a second portion of bandwidth of the first and second physical network media134-1and134-2, the second portion of bandwidth being different from the first portion of bandwidth utilized by the first and second PFs128-1and128-2. In addition, each of the third and fourth PFs128-3and128-4demodulates a corresponding data stream received from the first and second network interconnect devices140-1and140-2and communicates the demodulated data stream to the respective third and fourth LACP drivers122-3and122-4. In various examples, the PFs128modulate and demodulate multiplexed data connections according to the edge virtual bridging S-channel standard. The first through fourth PFs128-1to128-4are each coupled to first through fourth server channel access ports (CAP)130-1,130-2,130-3and130-4, respectively. The server CAPs130are connected to first and second physical NIC ports132-1and132-2. Specifically, the first and third server CAPs130-1and130-3are coupled to the first NIC port132-1and the second and fourth server CAPs130-2and130-4are coupled to the second NIC port132-2. In this way, copies of the first and second data streams are multiplexed and demultiplexed to and from the first and second physical network media134-1and134-2, as indicated by first and second multiplexed channels135-1and136-1, respectively, contained within the first physical network media134-1, and as indicated by third and fourth multiplexed channels135-2and136-2contained within the second physical network media134-2. The NIC126includes a processor (e.g., a CPU) and memory storing software and/or firmware for implementing various components of the PFs128and the CAPs130. In various examples, the memory may include at least one of ROM, programmable flash memory or erasable programmable ROM (EPROM). In various examples, the memory may be integrally formed with the CPU of the NIC126or may be an external memory device. The first and second physical network media134-1and134-2are each coupled to respective ones of first and second server side network interconnect ports142-1and142-2included in the first network interconnect device140-1and the second network interconnect device140-2, respectively. The first server side network interconnect port142-1is coupled to first and third network interconnect CAPs144-1and144-3. The first and third network interconnect CAPs144-1and144-3each receives copies of the first and second data streams that are received from and transmitted to the first physical network media134-1. 
The second server side network interconnect port142-2is coupled to second and fourth network interconnect CAPs144-2and144-4. The second and fourth network interconnect CAPs144-2and144-4each receives copies of the first and second data streams that are received from and transmitted to the second physical network media134-2. The first network interconnect CAP144-1, of the first network interconnect device140-1, and the second network interconnect CAP144-2, of the second network interconnect device140-2, each communicate a copy of the first multiplexed data stream to and from a first multi-chassis LAG entity148-1. In this example, the first data stream is one of a pair of edge virtual bridging S-channels and the first multi-chassis LAG entity148-1is a multi-chassis LAG of S channels and is thus labeled as S-LAGF. The first multi-chassis LAG entity148-1coordinates with the first server LAG entity120-1to complete the LAG containing the first data stream. In the example ofFIG.1, the first multi-chassis LAG entity148-1communicates the combined first data streams to and from a first multi-chassis LAG entity150-1that is linked with a corresponding first core network device LAG entity164-1in the core network device160. The combined first data streams are communicated from the first multi-chassis LAG entity150-1to first and second core network side network interconnect ports152-1and152-2of the first and second network interconnect devices140-1and140-2. Each of the first and second network interconnect devices140-1and140-2communicates a copy of the first data stream to first and second interconnect side core network device ports162-1and162-2of the core network device160. The first and second interconnect side core network device ports162-1and162-2are coupled to the first core network device LAG entity164-1so as to complete the core network side LAG of the first data stream. This first data stream may then be communicated to and from various client devices via a first client side core network device port166-1. The third network interconnect CAP144-3, of the first network interconnect device140-1, and the fourth network interconnect CAP144-4, of the second network interconnect device140-2, each communicates a copy of the second multiplexed data stream to and from a second multi-chassis LAG entity148-2. In this example, the second data stream is one of a pair of edge virtual bridging S-channels and the second multi-chassis LAG entity148-2is a multi-chassis LAG of S channels and is thus labeled as S-LAGG. The second multi-chassis LAG entity148-2coordinates with the second server LAG entity120-2to complete the LAG containing the second data stream. The second multi-chassis LAG entity148-2of S-channels communicates the combined second data streams to and from a second multi-chassis LAG entity150-2that is linked with a corresponding second core network device LAG entity164-2in the core network device160. The combined second data streams are communicated from the second multi-chassis LAG entity150-2to third and fourth core network side network interconnect ports152-3and152-4of the first and second network interconnect devices140-1and140-2. Each of the first and second network interconnect devices140-1and140-2communicates a copy of the second data stream to third and fourth interconnect side core network device ports162-3and162-4of the core network device160. 
The third and fourth interconnect side core network device ports 162-3 and 162-4 are coupled to the second core network device LAG entity 164-2 so as to complete the core network side LAG of the second data stream. This second data stream may then be communicated to and from various client devices via a second client side core network device port 166-2. The first and second network interconnect devices 140-1 and 140-2 communicate via an inter-switch link (ISL) 146. In FIG. 1, the first network interconnect device 140-1 is illustrated as controlling all the LAG entities 148 and 150 and communicating data streams to the second interconnect device 140-2. However, the second interconnect device 140-2 may also include LAG entities 148 and 150 similar to those of the first network interconnect device so as to perform similar functions and to assume control of the data streams in preparation for taking the interconnect device 140-1 out of service. Alternatively, the LAG entities 148 and 150 could be distributed across the network interconnect devices 140-1 and 140-2. The components of the system 100 in FIG. 1 may be modified. For example, the network interconnect devices 140, the server 110 and the core network device 160 may include more ports such that more than two physical network media 134 are provided between the server 110 and the network interconnect devices 140, and more than two physical network media are provided between the network interconnect devices 140 and the core network device 160. Further, more than one server 110 may be coupled to the network interconnect devices 140 and more than one core network device 160 may be coupled to the network interconnect devices 140. Referring now to FIG. 2, another example system 200 for automated firmware update of a complex of network interconnect devices at a server edge without loss of server connectivity is illustrated. The network interconnect devices 140, the core network device 160 and the physical network media 134 are unchanged from the system 100 of FIG. 1. However, the server 210 illustrated in FIG. 2 has been modified from the server 110 of FIG. 1. Specifically, the first and second server LAG entities 120-1 and 120-2 of the server 110 have been replaced with first and second LAG entities 220-1 and 220-2 on a reconfigured NIC 226. The first data stream is communicated from the network stack 116 to the first PF 228-1 via the first network device driver 122-1. The second data stream is communicated from the network stack 116 to the second PF 228-2 via the second network device driver 122-2. The first PF 228-1 is coupled to the first LAG entity 220-1 and the second PF 228-2 is coupled to the second LAG entity 220-2. The first LAG entity 220-1 communicates first and second versions of the first data stream to and from the first and second CAPs 130-1 and 130-2, and the second LAG entity 220-2 communicates first and second versions of the second data stream to and from the third and fourth CAPs 130-3 and 130-4. The PFs 228 may be implemented on a central processor of the NIC 226. The remaining components of the system 200 function in a similar manner to the components of the system 100 of FIG. 1. Referring to FIG. 3, an example flow diagram for an example process 300 is illustrated for automated firmware update of a complex of network interconnect devices at a server edge without loss of server connectivity. The process 300 is an example only and may be modified. The example process 300 of FIG. 3 will now be described with further references to FIGS. 1 and 2. 
The process 300 may begin with a processor of the server 110 establishing a first data link with the core network device 160 via the first network interconnect device 140-1 connected to the first network interface card (NIC) port 132-1 (block 310). The processor may be located on the CPU 112 and/or on the NIC 226, as illustrated in FIG. 2. At block 320, the processor establishes a second data link with the core network device 160 via the second network interconnect device 140-2 connected to the second NIC port 132-2. In various examples, the first and second data links form a pair of redundant data connections of a link aggregation group (LAG), and each of the redundant data connections may include a plurality of multiplexed data connections within one physical network media. For example, the first and second physical network media 134-1 and 134-2 may each include multiplexed S-channels 135-1, 135-2, 136-1 and 136-2. When an update of firmware of one or both of the network interconnect devices 140 is needed, the processor of the server 110 initiates removal of one of the first or second network interconnect devices from the LAG (block 330). For example, the processor may send an instruction to the first network interconnect device 140-1 instructing the LAG entities 148-1 and 150 to remove all downlink and uplink ports from each of the LAGs formed between the server 110 and the core network device 160. The removal of the first network interconnect device 140-1 takes place while the second network interconnect device 140-2 continues to maintain the second datalink between the server 110 and the core network device 160. Upon receiving the instruction for removal from the LAGs at block 330, the LACP mechanisms within the LAG entities 148 and 150 of the first network interconnect device 140-1 transmit LACP frames within egress buffers of the server side interconnect ports 142 and the core network side interconnect ports 152 but schedule no more LACP frames for these ports. In addition, the LAG entities 148 and 150 of the first network interconnect device 140-1 may forward packets bound for the server 110 and the core network device 160 across the ISL 146 to the second interconnect device 140-2 to be transmitted to these devices. Also, the first network interconnect device 140-1 may continue to accept LACP packets from the first NIC port 132-1, allowing transmit buffers/queues of the first NIC port 132-1 to empty before failing over to the second network interconnect device 140-2. At this time, the LAG entities 148 and 150 on the first network interconnect device 140-1 update states on the server side interconnect ports 142 and the core network side interconnect ports 152. LACP agents on the LAG entities 120 or 220 of the server 110, and the LAG entities 164 of the core network device 160, detect the change in state on their ports and likewise remove the corresponding port(s) 162 and 132 from their LACP frame collection and distribution lists. At block 340, the processor of the server 110 monitors states of the multiplexed data connections 135-1 and 135-2 on the first network interconnect device 140-1 to detect first changes in states indicating that the first network interconnect device 140-1 has stopped receiving or transmitting data to and from the LAG on the plurality of multiplexed data connections. 
The monitoring process at block 340 may include checks to ensure that: (a) all of the server side interconnect ports 142-2 and core network side interconnect ports 152-2 and 152-4 on the second network interconnect device 140-2 indicate that their link LAG entities are collecting and distributing LACP frames, and (b) all of the server side interconnect ports 142-1 and core network side interconnect ports 152-1 and 152-3 on the first network interconnect device 140-1 indicate that their link LAG entities are neither collecting nor distributing LACP frames. In various examples, the first network interconnect device 140-1 is configured to indicate states of multiple S-Channels per physical network media 134-1. In these examples, the first network interconnect device 140-1 supports independent LACP states per S-Channel, not just per server side interconnect port 142-1. This allows support for Ethernet network adapters that have multiple NIC partitions per port, where each NIC partition has a driver instance in the operating system 114 and S-Channels to the first network interconnect device 140-1. The first network interconnect device 140-1 and the NIC 126 insert and remove an S-Channel Tag (S-VLAN Tag) for each S-Channel's LACP exchange. Also, the first interconnect device 140-1 may support exchanging Multi-Chassis LAG states across the ISL 146 for each S-Channel. Upon determining that the first network interconnect device 140-1 has stopped receiving or transmitting data to and from the LAG on the plurality of multiplexed data connections, the processor updates firmware of the first network interconnect device 140-1 and restarts the first network interconnect device 140-1 upon completing the firmware update (block 350). As an alternative to the processor of the server 110 updating the firmware of the first network interconnect device 140-1, a processor of the first network interconnect device 140-1 may initiate the firmware update and restart the first network interconnect device 140-1. Upon restarting the first interconnect device 140-1, the processor of the server 110 adds the first network interconnect device back to the LAG (block 360). At block 370, the processor monitors a state of the first network interconnect device 140-1 to detect a second change in state of the first network interconnect device 140-1 indicating that the first network interconnect device 140-1 has been added back to the LAG. Upon detecting that the first network interconnect device 140-1 has been added back to the LAG, the processor of the server 110 reestablishes the redundant data connections of the first network interconnect device 140-1 with the core network device 160 (block 380). Upon completion of the process 300, the process 300 may be repeated to update the firmware on the second network interconnect device 140-2 in a similar fashion. Thus, the process 300 allows for firmware update of both the first and second interconnect devices 140-1 and 140-2 without losing connection of the first and second data streams between the server 110 and the core network device 160. FIG. 4 illustrates a block diagram of an example system with a computer-readable storage medium including example instructions executable by a processor to update firmware of an interconnect device. The system 400 includes the processor 410 and the computer-readable storage medium 420. The computer-readable storage medium 420 includes example instructions 421-426 executable by the processor 410 to perform various functionalities described herein. 
The example instructions include removal initiation instructions 421 that cause the processor 410 to initiate removal of a first network interconnect device forming a first datalink with a core network device while maintaining a second datalink with a second network interconnect device. As described above, the first datalink and the second datalink form a pair of redundant data connections of a LAG, each of the redundant data connections including a plurality of multiplexed data connections within one physical network media. The example instructions 422 cause the processor 410 to detect a first change in state indicating that the first network interconnect device has stopped receiving or transmitting data to and from the LAG on the plurality of multiplexed data connections. Upon detection of the first change in state, the example instructions 423 cause the processor 410 to update firmware of the first network interconnect device. The example instructions 424 cause the processor 410 to add the first network interconnect device back to the LAG. The example instructions 425 then cause the processor 410 to detect a second change in state indicating that the first network interconnect device has been added back to the LAG. Upon detecting the second change, the example instructions 426 cause the processor 410 to reestablish the redundant data connections of the first network interconnect device with the core network device. Various examples described herein are described in the general context of method steps or processes, which may be implemented in one example by a software program product or component, embodied in a machine-readable medium, including executable instructions, such as program code, executed by entities in networked environments. Generally, program modules may include routines, programs, objects, components, data structures, etc., which may be designed to perform particular tasks or implement particular abstract data types. Executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes. Software implementations of various examples can be accomplished with standard programming techniques with rule-based logic and other logic to accomplish various database searching steps or processes, correlation steps or processes, comparison steps or processes and decision steps or processes. The foregoing description of various examples has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the disclosure to the examples disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various examples. The examples discussed herein were chosen and described in order to explain the principles and the nature of various examples of the present disclosure and its practical application, to enable one skilled in the art to utilize the present disclosure in various examples and with various modifications as are suited to the particular use contemplated. The features of the examples described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products. 
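For illustration only, the following is a minimal sketch of how the flow of blocks 310-380 and instructions 421-426 described above could be orchestrated in software. The Interconnect class, helper functions, firmware image name, and polling interface are hypothetical placeholders invented for this sketch; they are not part of the disclosed system or of any vendor API, and the flashing step is represented only by a print statement.

import time

class Interconnect:
    def __init__(self, name):
        self.name = name
        self.in_lag = True          # member of the multi-chassis LAG
        self.lacp_active = True     # collecting/distributing LACP frames

    def remove_from_lag(self):
        # Ask the device to withdraw its downlink and uplink ports from the LAGs.
        self.in_lag = False
        self.lacp_active = False

    def add_to_lag(self):
        self.in_lag = True
        self.lacp_active = True

def wait_until(predicate, timeout_s=60.0, poll_s=0.5):
    # Poll a state predicate until it holds or a timeout expires (blocks 340/370).
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    raise TimeoutError("expected state change was not observed")

def update_firmware(device, image):
    # Placeholder for flashing and restarting the drained device (block 350).
    print(f"flashing {device.name} with {image} and restarting")

def hitless_update(first, second, image):
    first.remove_from_lag()                                            # block 330
    wait_until(lambda: not first.lacp_active and second.lacp_active)   # block 340
    update_firmware(first, image)                                      # block 350
    first.add_to_lag()                                                 # block 360
    wait_until(lambda: first.in_lag and first.lacp_active)             # blocks 370-380

if __name__ == "__main__":
    a = Interconnect("interconnect-1")
    b = Interconnect("interconnect-2")
    hitless_update(a, b, "fw-2.1.bin")   # update the first device
    hitless_update(b, a, "fw-2.1.bin")   # then repeat with the roles swapped

Because one device is drained, updated, and re-added before the other is touched, at least one member of each LAG keeps forwarding traffic throughout, which is the property the process 300 is designed to preserve.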
It is also noted herein that while the above describes examples, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope as defined in the appended claims.
11861359
DETAILED DESCRIPTION Specific structural or functional descriptions in the embodiments of the present disclosure introduced in this specification or application are only for description of the embodiments of the present disclosure. The descriptions should not be construed as being limited to the embodiments described in the specification or application. FIG.1is a diagram illustrating a storage device50in accordance with an embodiment of the present disclosure. Referring toFIG.1, the storage device50may include a memory device100and a memory controller200configured to control an operation of the memory device100. The storage device50may be a device configured to store data under the control of a host300such as a cellular phone, a smartphone, an MP3 player, a laptop computer, a desktop computer, a game machine, a TV, a tablet PC, an in-vehicle infotainment system, or the like. The storage device50may be manufactured as any one of various kinds of storage devices depending on a host interface, which is a communication system for communicating with the host300. For example, the data storage device50may be configured of any one of various kinds of storage devices such as an SSD, an MMC, an eMMC, an RS-MMC, a micro-MMC type multimedia card, an SD, a mini-SD, a micro-SD type secure digital card, a universal serial bus (USB) storage device, a universal flash storage (UFS) device, a personal computer memory card international association (PCMCIA) card type storage device, a peripheral component interconnection (PCI) card type storage device, a PCI-express (PCI-E) type storage device, a compact flash (CF) card, a smart media card, a memory stick, and so on. The storage device50may be manufactured in the form of any one of various package types such as a package on package (POP) type, a system in package (SIP) type, a system on chip (SOC) type, a multi-chip package (MCP) type, a chip on board (COB) type, a wafer-level fabricated package (WFP) type, a wafer-level stack package (WSP) type, and so on. The memory device100may store data therein. The memory device100may operate under the control of the memory controller200. The memory device100may include a memory cell array including a plurality of memory cells configured to store data therein. The memory cells may include a single level cell (SLC) capable of storing a single-bit data, a multi-level cell (MLC) capable of storing two-bit data, a triple-level cell (TLC) capable of storing three-bit data, or a quad-level cell (QLC) capable of storing four-bit data. The memory cell array may include a plurality of memory blocks. Each memory block may include a plurality of memory cells. Each memory block may include a plurality of pages. In an embodiment, a page may be a unit of storing data in the memory device100or reading stored data from the memory device100. A memory block may be a unit of erasing data. In an embodiment, the memory device100may be a double data rate synchronous dynamic random access memory (DDR SDRAM), a low power double data rate4 (LPDDR4) SDRAM, a graphics double data rate (GDDR) SDRAM, a low power DDR (LPDDR), a rambus dynamic random access memory (RDRAM), a NAND flash memory, a vertical NAND flash memory, a NOR flash memory device, a resistive random access memory (RRAM), a phase-change random access memory (PRAM), a magnetoresistive random access memory (MRAM), a ferroelectric random access memory (FRAM), a spin transfer torque random access memory (STT-RAM), or the like. 
In this specification, for the sake of explanation, it is assumed that the memory device100is a NAND flash memory. The memory device100may receive a command and an address from the memory controller200and access an area of the memory cell array that is selected by the address. In other words, the memory device100may perform an operation instructed by the command on the area selected by the address. For example, the memory device100may perform a write (or program) operation, a read operation, and an erase operation. During the program operation, the memory device100may program data to the area selected by the address. During the read operation, the memory device100may read data from the area selected by the address. During the erase operation, the memory device100may erase data from the area selected by the address. The memory controller200may control overall operations of the storage device50. When power is applied to the storage device50, the memory controller200may execute firmware (FW). In the case where the memory device100is a flash memory device, the memory controller200may execute firmware such as a flash translation layer (FTL) for controlling communication between the host300and the memory device100. In an embodiment, the memory controller200may receive data and a logical block address (LBA) from the host300, and translate the LBA into a physical block address (PBA) indicating addresses of memory cells to which the data is to be stored, the memory cells being included in the memory device100. The memory controller200may control the memory device100to perform a program operation, a read operation, or an erase operation in response to a request from the host300. During the program operation, the memory controller200may provide a write command, a PBA, and data to the memory device100. During the read operation, the memory controller200may provide a read command and a PBA to the memory device100. During the erase operation, the memory controller200may provide an erase command and a PBA to the memory device100. In an embodiment, the memory controller200may autonomously generate a command, an address, and data regardless of a request from the host300, and transmit them to the memory device100. For example, the memory controller200may provide a command, an address, and data to the memory device100to perform background operations such as a program operation for wear leveling, and a program operation for garbage collection. In an embodiment, the memory controller200may control at least two or more memory devices100. In this case, the memory controller200may control the memory devices100according to an interleaving scheme so as to enhance the operating performance. The interleaving scheme may be an operating scheme of overlapping operating periods of at least two or more memory devices100. In an embodiment, the memory controller200may include a processor210and a buffer memory220. The processor210may include a plurality of cores. Each core may store a firmware image for an operation of the storage device50. Each core may execute the stored firmware and thus control overall operations of the storage device50. The memory controller200may load a boot loader image from the buffer memory220in a memory of an arbitrarily selected core of the plurality of cores. The memory controller200may dynamically allocate, in the memory of the selected core, an address of a target memory area in which the boot loader image is to be loaded. 
The memory controller200may execute the boot loader image loaded in the target memory area. The memory controller200may receive a new firmware image from the host300in response to the executed boot loader image. The memory controller200may update a firmware image stored in a memory of each of the plurality of cores with the new firmware image. In an embodiment, the memory controller200may update the firmware image stored in the memory of each of the plurality of cores with the new firmware image, in parallel with processing a request received from the host300. The memory controller200may control the memory device100so that the memory device100stores therein the updated firmware image stored in the memory of each of the plurality of cores before power-off. The buffer memory220may store the boot loader image for firmware update running. In an embodiment, the buffer memory220may be formed of a volatile memory device. In this case, after power-on, the boot loader image stored in the memory device100may be loaded in the buffer memory220. In another embodiment, the buffer memory220may be formed of a nonvolatile memory device. In this case, loading the boot loader image from the memory device100may be omitted. The host300may communicate with the storage device50using at least one of various communication methods such as universal serial bus (USB), serial AT attachment (SATA), serial attached SCSI (SAS), high speed interchip (HSIC), small computer system interface (SCSI), peripheral component interconnection (PCI), PCI express (PCIe), nonvolatile memory express (NVMe), universal flash storage (UFS), secure digital (SD), multimedia card (MMC), embedded MMC (eMMC), dual in-line memory module (DIMM), registered DIMM (RDIMM), and load reduced DIMM (LRDIMM) communication methods. FIG.2is a diagram for describing a configuration and operation of the processor210ofFIG.1in accordance with an embodiment. Referring toFIG.2, the processor210may include a plurality of cores Core 1 to Core 4. The number of cores included in the processor210is not limited to that of the present embodiment. The first core Core 1 may be a processor dedicated for the firmware update running. A firmware image to be loaded in a memory of the first core Core1 may include a firmware update code for the firmware update running. The firmware image including the firmware update code, e.g., a main firmware image, may be loaded in a memory area corresponding to a static address in the memory of the first core Core 1. The first core Core 1 may update, in response to an executed firmware update code, a firmware image stored in a memory of the other cores Core 2 to Core 4 with a new firmware image received from the host300. During the firmware update running, the main firmware image stored in the memory of the first core Core 1 may not be updated because the firmware update code included in the main firmware image is executed. In other words, since the first core Core 1 communicates with the host300in response to the executed firmware update code, the firmware image update cannot be performed on the first core Core 1 while the communication with the host300is performed. Before power-off, the new firmware image updated in the memory of the other cores Core 2 to Core 4 may be stored in the memory device100. Thereafter, the new firmware image stored in the memory device100is loaded in the memory of the first core Core 1 after power-on, so that the main firmware image corresponding to the first core Core 1 may be updated. 
In other words, the main firmware image stored in the memory of the first core Core 1 may be updated through a power-off or power-on process after the communication with the host 300 has been completed. FIG. 3 is a diagram for describing the firmware image of FIG. 2. Referring to FIG. 3, a memory of each core of the processor 210 may store a corresponding firmware image. Here, the firmware update code for the firmware update running may be included in the main firmware image corresponding to the first core Core 1 that is dedicated for the firmware update. The firmware update code may be stored in a memory area corresponding to a static address in the memory of the first core Core 1. FIG. 4 is a diagram for describing a configuration and operation of the processor 210 of FIG. 1 in accordance with an embodiment. Referring to FIG. 4, the processor 210 may include a plurality of cores Core 1 to Core 4. The number of cores included in the processor 210 is not limited to that of the present embodiment. In FIG. 4, a separate processor dedicated for the firmware update running may not be present. Therefore, any one core of the plurality of cores Core 1 to Core 4 may be selected for the firmware update running. The firmware update code may be generated as a boot loader image. The generated boot loader image may be stored in the buffer memory 220. The boot loader image for the firmware update running may be loaded from the buffer memory 220 into a memory of an arbitrarily selected core of the plurality of cores Core 1 to Core 4. In an embodiment, the boot loader image may be generated as a binary code. The selected core may dynamically allocate a target memory area in which the boot loader image is to be loaded among memory areas of the selected core. The target memory area may be an empty area in which data is not stored. Therefore, an address of the target memory area may be variable. Referring to FIG. 4, the selected core may be the second core Core 2. The second core Core 2 may execute the boot loader image loaded in the target memory area therein. The second core Core 2 may receive a new firmware image from the host 300 in response to the executed boot loader image. The second core Core 2 may update, in response to the executed boot loader image, a firmware image stored in a memory of each of the other cores Core 1, Core 3, and Core 4 with the new firmware image. The firmware image stored in the memory of the second core Core 2 may also be updated with the new firmware image. This is because the boot loader image has been loaded in the target memory area, which is an empty area, regardless of the area of the memory of the second core Core 2 in which the firmware image is loaded. Therefore, even while the boot loader image is executed, the firmware image stored in an area other than the target memory area in the memory of the second core Core 2 may be updated with the new firmware image. In other words, the second core Core 2 may perform communication with the host 300 in response to the executed boot loader image loaded in the target memory area, but this operation is performed independently of the firmware image stored in the other areas of the second core Core 2, so that the second core Core 2 may update the firmware image stored therein while performing the communication with the host 300. 
In other words, the second core Core 2 may perform an operation of updating the firmware image stored in the memory of each of the plurality of cores Core 1 to Core 4, in parallel with processing a request received from the host300. Before power-off, the new firmware image updated in the memory of each core may be stored in the memory device100. FIG.5is a diagram for describing the firmware image ofFIG.4. Referring toFIG.5, a memory of each core may store a corresponding firmware image. Here, the boot loader image for the firmware update running may be separately generated rather than being included in a specific firmware image. The boot loader image may be loaded in a memory of an arbitrarily selected core among the plurality of cores Core 1 to Core 4. In an embodiment ofFIG.5, in the memory of the arbitrarily selected core, e.g., the second core Core 2, the boot load image may be loaded in a target memory area that is an empty memory area, in response to a dynamically allocated address, i.e., a dynamic address. FIG.6is a flowchart for describing an operation of the memory controller200ofFIG.1in accordance with an embodiment of the present disclosure. Referring toFIG.6, at S601, the memory controller200may load, in a memory of a selected core of a plurality of cores in the processor210, a boot loader image from the buffer memory220, the boot loader image being provided for firmware update running and stored in the buffer memory220. The boot loader image loaded in the memory of the selected core is stored in an empty target memory area in the memory of the selected core, rather than being included in a firmware image of the selected core. At S603, the memory controller200may receive a new firmware image from the host300in response to the boot loader image that is executed in the selected core. At S605, the memory controller200may update a firmware image stored in a memory of each of the plurality of cores with the new firmware image. FIG.7is a flowchart for describing in detail the method ofFIG.6. Referring toFIG.7, at S701, the memory controller200may dynamically allocate, in the memory of the selected core, an address of the target memory area in which the boot loader image is to be loaded. At S703, the memory controller200may load the boot loader image in the target memory area and then execute the boot loader image. At S705, the memory controller200may receive a new firmware image from the host300in response to the boot loader image that is executed. At S707, the memory controller200may update the firmware image stored in the memory of each of the plurality of cores with the new firmware image, in parallel with processing the request received from the host300. FIG.8is a diagram illustrating a memory controller1000in accordance with an embodiment. The memory controller1000ofFIG.8may correspond to the memory controller200ofFIG.1. Referring toFIG.8, the memory controller1000is coupled to a host, e.g., the host300ofFIG.1, and a memory device, e.g., the memory device100ofFIG.1. In response to a request from the host300, the memory controller1000may access the memory device100. For example, the memory controller1000may control a write operation, a read operation, an erase operation, and a background operation of the memory device100. The memory controller1000may provide an interface between the memory device100and the host300. The memory controller1000may drive firmware for controlling the memory device100. 
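As a rough illustration of the flow of FIGS. 6 and 7 (S601-S605 and S701-S707) described above, the sketch below loads a boot loader image into a dynamically allocated, empty region of a selected core's memory and then rewrites every core's firmware image, including the selected core's own, while a separate thread stands in for continued handling of host requests. The Core class, its method names, and the image strings are assumptions made for this sketch only; they are not taken from the disclosure or from any real controller firmware.

import threading
import time

class Core:
    def __init__(self, core_id, firmware):
        self.core_id = core_id
        self.memory = {"firmware": firmware}   # firmware image at its usual address

    def allocate_empty_region(self):
        # S701: choose an unused (dynamically addressed) area for the boot loader image.
        region = f"dynamic_region_{self.core_id}"
        self.memory[region] = None
        return region

def update_all_cores(cores, selected, boot_loader_image, receive_from_host):
    # S703: load the boot loader image into the empty target area and "execute" it.
    region = selected.allocate_empty_region()
    selected.memory[region] = boot_loader_image
    # S705: obtain the new firmware image from the host.
    new_firmware = receive_from_host()
    # S707: rewrite every core's firmware image, including the selected core's own,
    # which is possible because the boot loader lives outside the firmware area.
    for core in cores:
        core.memory["firmware"] = new_firmware

def serve_host_requests(stop_event):
    # Stand-in for normal request handling that continues during the update.
    while not stop_event.is_set():
        time.sleep(0.001)   # process read/write requests here

cores = [Core(i, "fw-v1") for i in range(1, 5)]
stop = threading.Event()
io_thread = threading.Thread(target=serve_host_requests, args=(stop,))
io_thread.start()
update_all_cores(cores, cores[1], "boot-loader.bin", lambda: "fw-v2")
stop.set()
io_thread.join()
assert all(core.memory["firmware"] == "fw-v2" for core in cores)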
The memory controller1000may include a processor1010, a memory buffer1020, an error correction code (ECC) circuit1030, a host Interface1040, a buffer controller1050, a memory interface1060, and a bus1070. The bus1070may provide a channel between the components of the memory controller1000. The processor1010may control the overall operation of the memory controller1000and perform a logical operation. The processor1010may communicate with the host300through the host interface1040, and communicate with the memory device100through the memory interface1060. In addition, the processor1010may communicate with the memory buffer1020through the buffer controller1050. The processor1010may control an operation of a storage device, e.g., the storage device50ofFIG.1, by using the memory buffer1020as an operating memory, a cache memory, or a buffer memory. The processor1010may perform the function of a flash translation layer (FTL). The processor1010may translate a logical block address (LBA), provided by the host300, into a physical block address (PBA) through the FTL. The FTL may receive the LBA and translate the LBA into the PBA using a mapping table. An address mapping method using the FTL may be modified in various ways depending on a unit of mapping. Representative address mapping methods may include a page mapping method, a block mapping method, and a hybrid mapping method. The processor1010may randomize data received from the host300. For example, the processor1010may use a randomizing seed to randomize the data received from the host300. Randomized data may be provided to the memory device100as data to be stored, and may be programmed to a memory cell array of the memory device100. During a read operation, the processor1010may derandomize data received from the memory device100. For example, the processor1010may use a derandomizing seed to derandomize the data received from the memory device100. Derandomized data may be output to the host300. In an embodiment, the processor1010may drive software or firmware to perform the randomizing operation or the derandomizing operation. The memory buffer1020may be used as an operating memory, a cache memory, or a buffer memory of the processor1010. The memory buffer1020may store codes and commands to be executed by the processor1010. The memory buffer1020may store data to be processed by the processor1010. The memory buffer1020may include a static RAM (SRAM) or a dynamic RAM (DRAM). The ECC circuit1030may perform error correction. The ECC circuit1030may perform an ECC encoding operation based on data to be written to the memory device100through the memory interface1060. ECC encoded data may be transmitted to the memory device100through the memory interface1060. The ECC circuit1030may perform an ECC decoding operation on data received from the memory device100through the memory interface1060. For example, the ECC circuit1030may be included in the memory interface1060as a component of the memory interface1060. The host interface1040may communicate with the external host300under the control of the processor1010. 
The host interface1040may perform communication using at least one of various communication methods such as a universal serial bus (USB), a serial AT attachment (SATA), a serial attached SCSI (SAS), a high speed interchip (HSIC), a small computer system interface (SCSI), a peripheral component interconnection (PCI), a PCI express (PCIe), a nonvolatile memory express (NVMe), a universal flash storage (UFS), a secure digital (SD), multiMedia card (MMC), an embedded MMC (eMMC), a dual in-line memory module (DIMM), a registered DIMM (RDIMM), and a load reduced DIMM (LRDIMM) communication methods. The buffer controller1050may control the memory buffer1020under the control of the processor1010. The memory interface1060may communicate with the memory device100under the control of the processor1010. The memory interface1060may communicate a command, an address, and data with the memory device100through a channel. In another embodiment, the memory controller1000may include neither the memory buffer1020nor the buffer controller1050therein. For example, the processor1010may use codes to control the operation of the memory controller1000. The processor1010may load codes from a nonvolatile memory device (e.g., a read only memory) provided in the memory controller1000. Alternatively, the processor1010may load codes from the memory device100through the memory interface1060. For example, the bus1070of the memory controller1000may be divided into a control bus and a data bus. The data bus may transmit data in the memory controller1000. The control bus may transmit control information such as a command and an address in the memory controller1000. The data bus and the control bus may be separated from each other and may neither interfere with each other nor affect each other. The data bus may be coupled to the host interface1040, the buffer controller1050, the ECC circuit1030, and the memory interface1060. The control bus may be coupled to the host interface1040, the processor1010, the buffer controller1050, the memory buffer1020, and the memory interface1060. In an embodiment, the processor210ofFIG.1may be included in the processor1010. The buffer memory220ofFIG.1may be included in the memory buffer1020. FIG.9is a block diagram illustrating a memory card system2000to which the storage device in accordance with the embodiment of the present disclosure is applied. ReferringFIG.9, the memory card system2000may include a memory controller2100, a memory device2200, and a connector2300. The memory controller2100is coupled to the memory device2200. The memory controller2100may access the memory device2200. For example, the memory controller2100may control a read operation, a write operation, an erase operation, and a background operation of the memory device2200. The memory controller2100may provide an interface between the memory device2200and a host. The memory controller2100may drive firmware for controlling the memory device2200. The memory controller2100may be embodied in the same manner as that of the memory controller200described with reference toFIG.1. In an embodiment, the memory controller2100may include components such as one or more of a random access memory (RAM), a processing unit, a host interface, a memory interface, and an ECC circuit. The memory controller2100may communicate with an external device (e.g., the host) through the connector2300. The memory controller2100may communicate with the external device based on a specific communication protocol. 
In an embodiment, the memory controller2100may communicate with the external device through at least one of various communication protocols such as universal serial bus (USB), multimedia card (MMC), embedded MMC (eMMC), peripheral component interconnection (PCI), PCI-express (PCI-E), advanced technology attachment (ATA), serial-ATA (SATA), parallel-ATA (PATA), small computer system interface (SCSI), enhanced small disk interface (ESDI), integrated drive electronics (IDE), Firewire, universal flash storage (UFS), Wi-Fi, Bluetooth, and nonvolatile memory express (NVMe) protocols. In an embodiment, the connector2300may be defined by at least one of the above-described various communication protocols. In an embodiment, the memory device2200may be implemented as any of various nonvolatile memory devices, such as an electrically erasable and programmable ROM (EEPROM), a NAND flash memory, a NOR flash memory, a phase-change RAM (PRAM), a resistive RAM (ReRAM), a ferroelectric RAM (FRAM), and a spin transfer torque magnetic RAM (STT-MRAM). In an embodiment, the memory controller2100and the memory device2200may be integrated into a single semiconductor device to form a memory card such as a personal computer memory card international association (PCMCIA), a compact flash (CF) card, a smart media card (SM or SMC), a memory stick, a multimedia card (MMC, RS-MMC, or MMCmicro), a SD card (SD, miniSD, microSD, or SDHC), or a universal flash storage (UFS). FIG.10is a block diagram illustrating a solid state drive (SSD) system3000to which the storage device in accordance with the embodiment of the present disclosure is applied. Referring toFIG.10, the SSD system3000may include a host3100and an SSD3200. The SSD3200may exchange signals SIG with the host3100through a signal connector3001and may receive power PWR through a power connector3002. The SSD3200may include an SSD controller3210, a plurality of nonvolatile memories (NVMs)3221to322n, an auxiliary power supply3230, and a buffer memory3240. In an embodiment, the SSD controller3210may perform the function of the memory controller200described above with reference toFIG.1. The SSD controller3210may control the plurality of NVMs3221to322nin response to the signals SIG received from the host3100. In an embodiment, the signals SIG may be signals based on an interface between the host3100and the SSD3200. For example, the signals SIG may be signals defined by at least one of various interfaces such as universal serial bus (USB), multimedia card (MMC), embedded MMC (eMMC), peripheral component interconnection (PCI), PCI-express (PCI-E), advanced technology attachment (ATA), serial-ATA (SATA), parallel-ATA (PATA), small computer system interface (SCSI), enhanced small disk interface (ESDI), integrated drive electronics (IDE), Firewire, universal flash storage (UFS), Wi-Fi, Bluetooth, and nonvolatile memory express (NVMe) interfaces. The auxiliary power supply3230may be coupled to the host3100through the power connector3002. The auxiliary power supply3230may be supplied with power PWR from the host3100, and may be charged by the power PWR. The auxiliary power supply3230may supply the power to the SSD3200when the supply of power from the host3100is not smoothly performed. In an embodiment, the auxiliary power supply3230may be positioned inside the SSD3200or positioned outside the SSD3200. For example, the auxiliary power supply3230may be disposed in a main board and may supply auxiliary power to the SSD3200. 
The buffer memory 3240 functions as a buffer memory of the SSD 3200. For example, the buffer memory 3240 may temporarily store data received from the host 3100 or data received from the plurality of NVMs 3221 to 322n, or may temporarily store metadata (e.g., a mapping table) of the plurality of NVMs 3221 to 322n. The buffer memory 3240 may include any of volatile memories such as a DRAM, an SDRAM, a DDR SDRAM, an LPDDR SDRAM, and a GRAM or nonvolatile memories such as an FRAM, a ReRAM, an STT-MRAM, and a PRAM. FIG. 11 is a block diagram illustrating a user system 4000 to which the storage device in accordance with the embodiment of the present disclosure is applied. Referring to FIG. 11, the user system 4000 may include an application processor 4100, a memory module 4200, a network module 4300, a storage module 4400, and a user interface 4500. The application processor 4100 may run the components included in the user system 4000, an operating system (OS), or a user program. In an embodiment, the application processor 4100 may include one or more of controllers, interfaces, graphic engines, etc. for controlling the components included in the user system 4000. The application processor 4100 may be provided as a system-on-chip (SoC). The memory module 4200 may function as a main memory, a working memory, a buffer memory, or a cache memory of the user system 4000. The memory module 4200 may include a volatile memory such as a DRAM, an SDRAM, a DDR SDRAM, a DDR2 SDRAM, a DDR3 SDRAM, an LPDDR SDRAM, an LPDDR2 SDRAM, or an LPDDR3 SDRAM, or a nonvolatile memory such as a PRAM, a ReRAM, an MRAM, or an FRAM. In an embodiment, the application processor 4100 and the memory module 4200 may be packaged based on package-on-package (POP), and then provided as a single semiconductor package. The network module 4300 may communicate with external devices. For example, the network module 4300 may support wireless communication, such as code division multiple access (CDMA), global system for mobile communication (GSM), wideband CDMA (WCDMA), CDMA-2000, time division multiple access (TDMA), long term evolution (LTE), WiMAX, WLAN, UWB, Bluetooth, or Wi-Fi communication. In an embodiment, the network module 4300 may be included in the application processor 4100. The storage module 4400 may store data therein. For example, the storage module 4400 may store data received from the application processor 4100. Alternatively, the storage module 4400 may transmit the data stored in the storage module 4400 to the application processor 4100. In an embodiment, the storage module 4400 may be implemented as a nonvolatile memory such as a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), a NAND flash memory, a NOR flash memory, a NAND flash memory having a three-dimensional (3D) structure, or the like. In an embodiment, the storage module 4400 may be provided as a removable storage medium (i.e., a removable drive) such as a memory card or an external drive of the user system 4000. In an embodiment, the storage module 4400 may include a plurality of nonvolatile memory devices, and each of the plurality of nonvolatile memory devices may operate in the same manner as that of the memory device 100 described above with reference to FIG. 1. The storage module 4400 may operate in the same manner as that of the storage device 50 described above with reference to FIG. 1. The user interface 4500 may include one or more interfaces for inputting data or instructions to the application processor 4100 or outputting data to an external device. 
In an embodiment, the user interface 4500 may include one or more of user input interfaces such as a keyboard, a keypad, a button, a touch panel, a touch screen, a touch pad, a touch ball, a camera, a microphone, a gyroscope sensor, a vibration sensor, a piezoelectric device, and so on. The user interface 4500 may further include one or more of user output interfaces such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display device, an active matrix OLED (AMOLED) display device, an LED, a speaker, a monitor, and so on. As described above, various embodiments of the present disclosure may provide a storage device having improved firmware update performance, and a method of operating the storage device. Examples of embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present disclosure as set forth in the following claims.
11861360
DETAILED DESCRIPTION OF THE EMBODIMENTS (Underlying Knowledge Forming Basis of the Present Disclosure) The present inventor has found that the following problem occurs in the development of software described in “BACKGROUND”. The software is developed by agile development in which not only software developing companies but also many unspecified software developers participate. Such a form of development may generate a variety of version series as the software is improved by a large number of software developers. In a management system which manages software, the version information indicating versions of software developed by the software developers is managed along with the identification information of the software developers (for example, see Japanese Unexamined Patent Application Publication No. 2014-203352). Here, the version information serves to uniquely specify the version of the software. The identification information of the software developer may be used to provide a reward to the software developer for the development of a new version of the software. Unfortunately, traditional management systems do not ensure safe transactions of software between the software developers and the users. The present disclosure provides a management method in which the transaction safety of software is improved. The management method according to one aspect of the present disclosure is a management method for software versions, the management method to be executed by a version management system, the management method including: receiving request information by a first management apparatus among management apparatuses which are included in the version management system and have distributed ledgers, the request information indicating a requested version requested by a user; and storing first transaction data in the distributed ledgers through execution of a consensus algorithm by the management apparatuses, the first transaction data indicating that the user provides a predetermined number of tokens to a software developer who has developed the requested version. According to the aspect above, the distributed ledger manages provision of the tokens from the user to the software developer of the new version of the software. The distributed ledger is advantageous in that it obstructs falsification of the information it holds and reduces the influence of system failures. For this reason, managing the transmission and reception of the tokens in the distributed ledger prevents the history of token transfers from being falsified or lost. Thus, the management method for software versions can improve the transaction safety of software. For example, the version management system may further possess identification information identifying software developers of one or more versions of the software. 
The management method may further include: transmitting second transaction data to the first management apparatus by a user apparatus used by the user, the second transaction data being data including the request information and further including identification information identifying the user as a sender of the predetermined number of tokens; obtaining, by the first management apparatus, identification information of a software developer of the requested version which is indicated by the request information included in the second transaction data; in the storing of the first transaction data in the distributed ledgers, generating the first transaction data which includes the identification information identifying the user as the sender included in the second transaction data as the sender of the predetermined number of tokens and includes the identification information, which has been obtained, as a destination of the predetermined number of tokens; and storing the first transaction data in the distributed ledgers by the management apparatuses. According to the aspect above, the identification information of the software developer of the requested version received from the user apparatus using the second transaction data is obtained from a version management apparatus, and the identification information of the software developer is used as a destination of the tokens. Thereby, the distributed ledger manages provision of the tokens from the user to the software developer even when the user does not know the software developer of the requested version, that is, when the user apparatus does not possess the identification information of the software developer. Thus, the transaction safety of the software can be improved even when the user does not know the software developer of the requested version. For example, the management method may further include: transmitting the first transaction data to the first management apparatus by a user apparatus, the first transaction data including identification information identifying the user as a sender of the predetermined number of tokens and identification information of the software developer of the requested version as a destination of the predetermined number of tokens; and in the storing of the first transaction data in the distributed ledgers, storing the first transaction data received from the user apparatus in the distributed ledgers. According to the aspect above, the transaction data indicating that the tokens are provided from the user apparatus to the software developer of the requested version is transmitted from the user apparatus to the management apparatus (in other words, the token management apparatus). Thus, the distributed ledger manages the provision of the tokens from the user to the software developer even when the token management apparatus does not obtain the information on the software developer of the requested version from another apparatus. Accordingly, the transaction safety of the software can be improved even when the token management apparatus does not obtain the information on the software developer of the requested version from another apparatus. 
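As a purely illustrative sketch of the data flow described above, the snippet below shows one way the first management apparatus could assemble the first transaction data: the sender is taken from the user's second transaction data, and the destination is looked up from the version management records. All field names, the token amount, and the lookup table are assumptions made for this example; they are not prescribed by the disclosure.

from dataclasses import dataclass

@dataclass
class SecondTransactionData:
    requested_version: str     # request information
    user_id: str               # identification information of the sender (the user)

@dataclass
class FirstTransactionData:
    sender: str                # the user providing the tokens
    destination: str           # the software developer of the requested version
    tokens: int

# Stand-in for the records held by the version management apparatus.
DEVELOPER_OF_VERSION = {"1.A2": "developer-A", "1.C1.D1": "developer-D"}

def build_first_transaction(second: SecondTransactionData, tokens: int = 10) -> FirstTransactionData:
    # Obtain the developer's identification information for the requested version
    # and use it as the destination of the predetermined number of tokens.
    developer_id = DEVELOPER_OF_VERSION[second.requested_version]
    return FirstTransactionData(sender=second.user_id, destination=developer_id, tokens=tokens)

tx = build_first_transaction(SecondTransactionData("1.A2", "user-42"))
# tx would then be stored in the distributed ledgers once the management
# apparatuses reach agreement on it through the consensus algorithm.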
For example, the version management system may further possess identification information identifying software developers of one or more versions of software, and the version management system may further include: obtaining first identification information which is the identification information of the software developer of the requested version indicated by the request information included in the first transaction data; and storing the first transaction data in the distributed ledgers only when the first identification information matches second identification information which is the identification information included in the first transaction data. According to the aspect above, the identification information of the software developer of the requested version received from the user apparatus using the first transaction data is stored and managed in the distributed ledger only when this identification information matches the identification information of the software developer managed in the version management apparatus. Accordingly, if the identification information of the software developer possessed by the user apparatus is not correct (i.e., is an error, or is invalid), the tokens are not provided from the user to the software developer. This prevents the tokens from being provided to a false software developer. Thus, the transaction safety of the software can be improved by preventing the provision of the tokens to a dishonest software developer when the information possessed in the user apparatus is false. For example, the version management system may further possess location information indicating locations where one or more versions of the software are stored. The management method may further include: providing the location information to the user apparatus used by the user after the management apparatuses store the first transaction data in the distributed ledgers, the location information indicating where the requested version of the software is stored; and obtaining, by the user apparatus, the requested version of the software using the location information provided. According to the aspect above, after the provision of the tokens from the user to the software developer is managed by the distributed ledger, the software of the requested version is provided to the user. Thus, a transaction is performed more safely in which the software is provided in exchange for the tokens provided from the user apparatus. Accordingly, the transaction safety of the software can be further improved. For example, in the storing of the first transaction data in the distributed ledgers, when the software developer who has developed the requested version includes two or more software developers of the requested version, first transaction data may be stored in the distributed ledgers, the first transaction data indicating that the predetermined number of tokens are provided from the user to the two or more software developers in a predetermined distribution ratio. According to the aspect above, in the case where two or more software developers are responsible for the development of the requested version, the distribution and provision of the tokens from the user to the two or more software developers is managed by the distributed ledger. Accordingly, the transaction safety of the software can be improved even when two or more software developers are responsible for the development of the requested version. 
For example, when the two or more software developers include one or more software developers of versions older than the requested version, the predetermined distribution ratio may be a distribution ratio between the software developer of the requested version and the one or more software developers, the distribution ratio being controlled such that a smaller number of tokens are distributed to a software developer of an older version. According to the aspect above, when two or more software developers are responsible for the development of the requested version, tokens can be provided in a distribution ratio such that a smaller number of tokens are provided to a software developer of an older version, or in other words, a larger number of tokens are provided to a software developer of a newer version. In general, it is considered that a software developer of a version closer to the requested version, i.e., a newer version, has a greater contribution to the development of the requested version. The management method according to the present embodiment can implement such a distribution ratio of tokens according to the degree of contribution. Accordingly, the transaction safety of the software can be further improved through the distribution of the tokens according to the degree of contribution of the two or more software developers responsible for the development of the requested version. For example, processing according to the management method may be partially or entirely performed by executing smart contract codes stored in the distributed ledgers of the management apparatuses. According to the aspect above, a series of processing such as the provision of the tokens from the user to the software developer is automatically executed based on the smart contract codes stored in the distributed ledger without intervention by any other person or system. Thus, the series of processing can be implemented with higher safety by the smart contract. Accordingly, the transaction safety of the software can be further improved. For example, the version management system may further include a version management apparatus, and the version management apparatus may possess the identification information of the software developers of the one or more versions of the software. According to the aspect above, the identification information of the software developers of the versions of the software is managed using the version management apparatus. Thus, the management method for software versions can further facilitate an improvement in transaction safety of the software using the version management apparatus. For example, the version management apparatus may include version management apparatuses having distributed ledgers, and the distributed ledgers possessed by the version management apparatuses may store transaction data including the identification information of the software developers of the one or more versions of the software. According to the aspect above, the version management apparatuses manage the software developers of the versions of the software using the distributed ledgers, and the information of each software developer is used as a destination of the tokens. The distributed ledger is advantageous in that it obstructs falsification of the information it holds and reduces the influence of system failures. Thus, falsification of the information of the software developer of a version can be prevented, further improving the transaction safety of the software. 
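The following is one possible, purely illustrative distribution rule consistent with the idea above: developers of older versions in the lineage of the requested version receive geometrically smaller shares of the tokens. The decay factor, the rounding behavior, and the developer identifiers are arbitrary choices made for this sketch; the disclosure only requires that fewer tokens go to developers of older versions.

def distribute_tokens(total_tokens, lineage, decay=0.5):
    # lineage: developer identifiers ordered from the oldest contributing version
    # to the developer of the requested version (last element).
    # Newer versions get larger weights: decay**0 for the requested version,
    # decay**1 for its parent version, and so on.
    weights = [decay ** distance for distance in range(len(lineage))][::-1]
    scale = total_tokens / sum(weights)
    shares = {developer: round(weight * scale) for developer, weight in zip(lineage, weights)}
    # Give any rounding remainder to the developer of the requested version.
    shares[lineage[-1]] += total_tokens - sum(shares.values())
    return shares

# Example: version 1.C1.D1 was developed based on 1.C1, which was based on version 1.
print(distribute_tokens(100, ["Z-Company", "developer-C", "developer-D"]))
# {'Z-Company': 14, 'developer-C': 29, 'developer-D': 57}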
For example, the distributed ledgers may be blockchains, and the first transaction data indicating that the user provides the predetermined number of tokens to the software developer who has developed the requested version may be stored in the blockchains through execution of the consensus algorithm by the management apparatuses. According to the aspect above, using the blockchains as the distributed ledgers, the management apparatuses can more readily prevent the falsification of the information under management. The management apparatus according to one aspect of the present disclosure is a management apparatus which is one management apparatus among management apparatuses which are included in a version management system for managing software versions and have distributed ledgers, the management apparatus including: an obtainer which obtains request information indicating a requested version requested by a user; and a ledger manager which stores first transaction data in the distributed ledgers through execution of a consensus algorithm by the management apparatuses, the first transaction data indicating that predetermined number of tokens are provided from the user to a software developer who has developed the requested version. Such a configuration achieves the same effect as that in the management method. The program according to one aspect of the present disclosure is a program for operating a computer as one management apparatus among management apparatuses which are included in a version management system for managing software versions and have distributed ledgers, the program including: obtaining request information indicating a requested version requested by a user; and storing first transaction data in the distributed ledgers through execution of a consensus algorithm by the management apparatuses, the first transaction data indicating that predetermined number of tokens are provided from the user to a software developer who has developed the requested version. Such a program achieves the same effect as that in the management method. These comprehensive or specific aspects may be implemented with systems, methods, integrated circuits, computer programs, or recording media such as computer-readable CD-ROMs, or may be implemented with any combination of systems, methods, integrated circuits, computer programs, or recording media. Embodiments will now be specifically described with reference to the drawings. The embodiments described below all are comprehensively or specifically illustrative. Numeric values, shapes, materials, components, arrangements, positions, and connection forms thereof, steps, order of steps, and the like described in the following embodiments are exemplary, and should not be construed as limitative to the present disclosure. Among the components of the embodiments below, the components not described in an independent claim representing the most superordinate concept of the present disclosure are described as arbitrary components. Embodiment 1 In the present embodiment, a management method for software versions will be described. Here, the software is, for example, software which is installed in a home appliance (such as a laundry machine, an air conditioner, a refrigerator, or a television set) to control the operation of the home appliance and demonstrate the function of the home appliance. FIG.1is a diagram illustrating a version series of software in agile development. 
As illustrated inFIG.1, in the agile development, a software development company (Z Company) develops a first version, i.e., version 1 (represented as “Ver1” in the drawing, the same applies below), and provides Ver1 to the community of software developers. Based on the software of version 1 provided, the software developers belonging to the community then perform development to generate a variety of version series. Different software programs having different functions are developed in the version series, for example. The version series is represented as series 1A inFIG.1. The version series includes one or more versions. As illustrated inFIG.1, based on the software of version 1, version 1.A1 is generated through the development by software developer A, version 1.B1 is generated through the development by software developer B, and version 1.C1 is generated through the development by software developer C. Further development may be performed based on these versions. For example, version 1.A2 is developed based on version 1.A1, and version 1.A3 is developed based on version 1.A2. Version 1.B2 is developed based on version 1.B1. Based on version 1.C1, software developers D and E develop versions 1.C1.D1 and 1.C1.E1 as version series. Here, the versions of and after version 1.A1 (i.e., version 1.A1 and versions 1.A2 and 1.A3 which are versions developed based on version 1.A1) are referred to as series 1A. Similarly, the versions of and after version 1.B1 are referred to as series 1B. Version 1.C1 is referred to as series 1C, version 1.C1.D1 is referred to as series 1D, and version 1.C1.E1 is referred to as series 1E. The series including version 1 and all the versions of series 1A to 1E is referred to as series 1 in some cases. As described above, in the agile development, software developers different from the software development company develop software based on the software provided by the software development company (Z Company), generating several version series. Among these versions, a version which a user desires is provided to the user. For example, the latest version of the version series having the functions which the user desires is provided to the user. FIG.2is a diagram illustrating transmission and reception of a token in the agile development. Here, the token represents a concept corresponding to a profit or value, and may be possessed and transferred by a person (natural person) or a manufacturer (legal person). In the agile development, the development of the software is advanced by appropriately transferring tokens between a software developer, a general user, and a manufacturer. For example, the general user receives the software provided by the software developer. The user operates the home appliance by operating the software on the home appliance possessed by the user. The general user provides tokens to the software developer in exchange for the software provided. The general user provides the data of the product, which is obtained when the home appliance having the software installed therein is operated, to the manufacturer, and receives tokens in exchange for the data. Here, the tokens are directly transferred between the general user and the software developer without the manufacturer interposed therebetween. When such transfer of the tokens occurs, the identification information of the software developer under management may be falsified for the purpose of dishonestly obtaining a profit or impairing profits of others in some cases. 
The falsified identification information enables the following behaviors: A malicious person may spoof the software developer to receive the tokens, or may spoof another person to provide malicious software and damage the reputation of the software developer. The management system according to the present embodiment aims at preventing the falsification of information under management. FIG.3is a diagram illustrating a configuration of management system1according to Embodiment 1. As illustrated inFIG.3, management system1includes management apparatuses10A,10B, and10C, development apparatuses20A,20B, and20C, and storage server30. These apparatuses are communicably connected to each other through network N. Management apparatuses10A,10B, and10C (also referred to as management apparatuses10A and others) are management apparatuses which manage the information on the versions of software by computers. Although an example of three management apparatuses10A and others will be described, the number of management apparatuses may be two or more. Management apparatuses10A and others are communicably connected to each other. Although management apparatus10A is used as a representative of management apparatuses10A and others in the following description in some cases, the same also applies to other management apparatuses10B and10C. Management apparatuses10A and others can also communicate through network N. Management apparatuses10A and others each have a distributed ledger for managing the information on the version of software. Management apparatuses10A and others update the distributed ledgers of their own while synchronizing with each other through communication. When one of management apparatuses10A and others obtains the information on a new version from one of development apparatuses20A and others, management apparatuses10A and others each have a copy of the obtained information. In general, the distributed ledger is advantageous in obstructing the falsification of the possessed information and in reducing influences by the system failure. Development apparatuses20A,20B, and20C (also referred to as development apparatuses20A and others) are computers used by a software developer of the software, and each independently operate. Although an example of three development apparatuses20A and others will be described, the number of development apparatuses may be one or more. Although development apparatus20A is used as a representative of development apparatuses20A and others in the following description, the same also applies to other development apparatus20B and20C. The software developer develops a new version of the software using development apparatus20A, and transmits the developed software of the new version to storage server30to store the software in storage server30. Development apparatus20A also transmits the information on the new version developed by the software developer through network N to one of management apparatuses10A and others. Storage server30is a computer which stores the software. Storage server30stores one or more versions of the software in a memory device. Network N is a communication line which communicably connects management apparatuses10A and others, development apparatus20A, and storage server30to each other. Any communication line can be used. Any combination of wired networks with wireless networks may be used. Network N may partially include the Internet. 
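The tamper resistance attributed to the distributed ledgers here comes from the fact that each stored entry is bound to the entries before it, so altering an earlier entry becomes detectable. The following minimal hash-chained ledger, assuming SHA-256 and a JSON payload, is only meant to illustrate that property; it does not reproduce the distributed ledgers or the consensus processing used by management apparatuses10A and others.

# Minimal hash-chained ledger sketch (assumptions: SHA-256, JSON payloads).
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

class MiniLedger:
    def __init__(self):
        self.blocks = []                     # each block: {"prev", "payload", "hash"}

    def append(self, payload: dict) -> None:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        self.blocks.append({"prev": prev, "payload": payload,
                            "hash": block_hash(prev, payload)})

    def verify(self) -> bool:
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev or b["hash"] != block_hash(prev, b["payload"]):
                return False                 # an altered entry breaks the chain here
            prev = b["hash"]
        return True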
Storage server30, development apparatuses20A and others, and management apparatuses10A and others will now be described in more detail. FIG.4is a block diagram of a configuration of storage server30according to the present embodiment. As illustrated inFIG.4, storage server30includes communicator31, storage32, publisher33, and memory device34. The functions of storage server30can be implemented by a processor which executes a predetermined program using a memory. Communicator31is a communication interface device connected to network N. Storage server30can communicate with development apparatus20A through communicator31. Storage32is a processor which stores the software using memory device34. Storage32obtains the software of the new version from development apparatus20A through communicator31, and stores the obtained software in memory device34. Storage32also reads the software stored in memory device34in response to a request from a user. Publisher33is a processor which publishes location information indicating the location where the software is stored. In the case where storage32stores the software in memory device34, publisher33obtains the information indicating the location where the software is stored, and generates and publishes the location information indicating the location. Publisher33notifies development apparatus20A of the generated location information. The location information is, for example, a uniform resource locator (URL) indicating a position on the Internet of an electronic file related to the software in memory device34. This case will be described below as an example. The URL includes, for example, the host name of storage server30, the path indicating the location in memory device34, and the file name. Memory device34is a memory device in which the software is stored. Memory device34stores one or more versions of the software. The software is stored in memory device34by storage32, and is read therefrom by storage32. FIG.5is a block diagram illustrating a configuration of development apparatus20A according to the present embodiment. Development apparatuses20B and20C also have the same configuration, and each independently operate. As illustrated inFIG.5, development apparatus20A includes communicator21, developer22, transaction generator23, and memory device24. The functions of development apparatus20A can be implemented by a processor which executes a predetermined program using a memory. Communicator21is a communication interface device connected to network N. Development apparatus20A can communicate with storage server30and management apparatus10A through communicator21. Developer22is a processor which generates a new version of the software developed by the software developer based on an operation by a user or the function of a tool for developing software. Developer22specifically possesses the software (i.e., the program or program codes) of a version (corresponding to a first version) underlying the development, and generates a new version (corresponding to a second version) of the software based on the possessed software. Thus, the software developer develops the new version of the software using development apparatus20A (specifically, developer22). The development of the new version is also referred to as version upgrade. Developer22transmits the developed software of the new version through communicator21to storage server30to store the software in storage server30.
At this time, storage server30(specifically, publisher33) notifies developer22of the URL indicating the location of the software stored in storage server30. Transaction generator23is a processor which generates transaction data including the information on the version of the software. The transaction data includes at least information on a first version of the software (corresponding to first information), information on a second version obtained through version upgrade of the first version by the software developer (corresponding to second information), a software developer ID as the identification information of the software developer, and the electronic signature of the software developer. The electronic signature of the software developer is generated from the information included in the transaction data through encryption with the private key of the software developer. Transaction generator23obtains the identification information of the software developer and the private key thereof by reading them from memory device24. Transaction generator23transmits the generated transaction data through communicator21to management apparatus10A. Transaction generator23also generates a request to issue a new version number, and transmits the request to management apparatus10A. Transaction generator23receives the notification of the new version number in reply. Memory device24is a memory device which stores the information on the software developer and the information on the software. The information on the software developer includes a software developer ID as the identification information of the software developer, and key information of the software developer (including the private key). The software developer ID is information which enables unique identification of the software developer. The information on the software includes a body of software, and the URL indicating the location in storage server30where the software is stored. Here, the body of software indicates a software program, and is simply represented as "software" inFIG.5. The body of software stored in memory device24is read by developer22. The software developer ID, the key information, and the URL stored in memory device24are read by transaction generator23. FIG.6is a block diagram illustrating a configuration of management apparatus10A according to the present embodiment. As illustrated inFIG.6, management apparatus10A includes communicator11, number manager12, transaction validator13, ledger manager14, and token manager16. The functions included in management apparatus10A can be implemented by a processor which executes a predetermined program using a memory. Communicator11is a communication interface device connected to network N. Management apparatus10A can communicate with development apparatus20A and other management apparatuses10B and10C through communicator11. Number manager12is a processor which manages the version number of the version of the software. When receiving a request to issue a new version number of the software from development apparatus20A, number manager12issues the new version number according to the request, and notifies development apparatus20A of the issued version number. Among the versions currently possessed, number manager12issues a version number advanced from the version number of the latest version. In the case where the version has several series, number manager12receives a request to issue a new version number for each series, and issues a version number for each series.
Here, the version number is set according to predetermined rules. For example, the version number is set using numeric values such that a more advanced version (that is, a version more repeatedly subjected to version upgrade) has a greater numeric value. At this time, letters may also be used in combination with numeric values. Here, an example where the version series is represented with letters will be illustrated. In other words, the versions included in series 1A developed based on the first version, i.e., version 1, are referred to as version 1.A1, version 1.A2, version 1.A3, and the like. The versions included in series 1B developed based on version 1 separately from series 1A are referred to as version 1.B1, version 1.B2, and the like. Transaction validator13is a processor which validates the legitimacy of the transaction data. Transaction validator13receives the transaction data through communicator11from development apparatus20A. The transaction data to be received includes first information on the first version of the software, second information on the second version of the software obtained through version upgrade of the first version by the software developer, the identification information of the software developer, and the electronic signature of the software developer. When receiving the transaction data, transaction validator13validates the legitimacy of the transaction data using the electronic signature included in the received transaction data. The legitimacy of the transaction data is determined using the information included in the transaction data and the public key of the software developer. More specifically, it is confirmed that the transaction data was surely generated by development apparatus20A and has not been falsified since its generation. The validation of the legitimacy of the transaction data is also simply referred to as validation of the transaction data. The transaction data received by transaction validator13may include a new version number notified by number manager12. The transaction data received by transaction validator13may further include the URL or location information of the software of the new version. Ledger manager14is a processor which manages the distributed ledger for managing the versions of software. Although an example where the distributed ledger is blockchain15will be described here, another type of distributed ledger (such as IOTA or a hashgraph) may also be used. In the case where transaction validator13validates the transaction data, ledger manager14synchronizes the transaction data through the transmission of the transaction data to other management apparatuses10B and10C. Ledger manager14then executes a consensus algorithm between management apparatus10A and other management apparatuses10B and10C. In the case where an agreement is reached by the consensus algorithm, a block including the transaction data is generated, and the generated block is stored in blockchain15. Although one example of consensus algorithms is Practical Byzantine Fault Tolerance (PBFT), any other consensus algorithm such as Proof of Work (PoW) or Proof of Stake (PoS) may also be used. Token manager16is a processor which manages tokens possessed by the user and the software developer. Token manager16provides a token to the software developer with reference to the transaction data stored in blockchain15. Token manager16may use blockchains for management of tokens.
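The electronic signature generated by transaction generator23and checked by transaction validator13can be sketched as follows. The disclosure only states that a hash of the transaction fields is encrypted with the software developer's private key, so the concrete scheme below (ECDSA over SHA-256 via the Python cryptography package, with JSON serialization and illustrative field values) is an assumption made purely for demonstration.

# Hedged sketch of signature generation and validation; scheme, library, and values are assumptions.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def serialize(fields: dict) -> bytes:
    # Deterministic byte representation of the transaction fields.
    return json.dumps(fields, sort_keys=True).encode()

# Development apparatus side (transaction generator 23): sign the transaction fields.
developer_private_key = ec.generate_private_key(ec.SECP256R1())
transaction_fields = {
    "software_developer_id": "developer-A",            # illustrative values only
    "url": "https://storage.example/software/1.A2",
    "new_version_number": "1.A2",
    "base_version_number": "1.A1",
    "hash_of_new_version": "placeholder-hash",
}
signature = developer_private_key.sign(serialize(transaction_fields),
                                       ec.ECDSA(hashes.SHA256()))

# Management apparatus side (transaction validator 13): verify with the public key.
developer_public_key = developer_private_key.public_key()
try:
    developer_public_key.verify(signature, serialize(transaction_fields),
                                ec.ECDSA(hashes.SHA256()))
    legitimate = True    # the data was generated by the key holder and has not been altered
except InvalidSignature:
    legitimate = False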
Three examples of a configuration of transaction data which allows management apparatuses10A and others to manage the new version of the software will now be illustrated. FIG.7is a diagram illustrating transaction data40as a first example of the transaction data according to the present embodiment. Transaction data40is an example where the first information on the first version of the software includes the hash value of the first version of the software and the version number of the first version, and the second information on the second version of the software includes the version number of the second version. As illustrated inFIG.7, transaction data40includes software developer ID41, URL42, new version number43, base version number44, hash value45of the new version, and signature46. Software developer ID41is the identification information of the software developer who has developed the new version to be newly managed according to transaction data40. URL42is an URL indicating the location where the new version to be newly managed according to transaction data40is stored. URL42indicates the location in memory device34of storage server30where the software of the new version is stored. New version number43is a version number of the new version to be newly managed according to transaction data40. Base version number44is a version number of the version (also referred to as base version) underlying the new version to be newly managed according to transaction data40. Hash value45of the new version is a hash value obtained through a hash operation performed on all the programs of the new version to be newly managed according to transaction data40or predetermined part of the programs. Signature46is an electronic signature generated from the information included in transaction data40through encryption with the private key of the software developer. Specifically, signature46is a value obtained as follows: A hash value is obtained by performing a hash operation on the information including software developer ID41, URL42, new version number43, base version number44, and hash value45of the new version, and is encrypted with the private key of the software developer. FIG.8is a diagram illustrating transaction data50as a second example of the transaction data according to the present embodiment. Transaction data50is an example where the first information on the first version of software includes the hash value of the first version of the software and the second information on the second version of the software includes the hash value of the second version of the software. As illustrated inFIG.8, transaction data50includes software developer ID51, URL52, hash value53of the new version, hash value54of the base version, and signature55. Software developer ID51and URL52are the same as those in transaction data40. Hash value53of the new version is a hash value obtained by the hash operation performed on all the programs of the new version of the software to be newly managed according to transaction data50or predetermined part of the programs. Hash value54of the base version is a hash value obtained by the hash operation performed on all the programs of the base version of the software underlying the new version of the software to be newly managed according to transaction data50or predetermined part of the programs. Signature55is an electronic signature generated from the information included in transaction data50through encryption with the private key of the software developer. 
Specifically, signature55is a value obtained as follows: A hash value is obtained by performing a hash operation on the information including software developer ID51, URL52, hash value53of the new version, and hash value54of the base version, and is encrypted with the private key of the software developer. FIG.9is a diagram illustrating transaction data60as a third example of the transaction data according to the present embodiment. Transaction data60is an example where the first information on the first version of software includes the hash value of the first version of the software, and the second information on the second version of the software includes the hash value of the difference between the first version of the software and the second version thereof. As illustrated inFIG.9, transaction data60includes software developer ID61, URL62, hash value63of the difference, hash value64of the base version, and signature65. Software developer ID61and URL62are the same as those in transaction data40. Hash value63of the difference is a hash value of the difference between a new version of the program to be newly managed according to transaction data60and a base version of the program underlying the development of the new version. Hash value64of the base version is a hash value obtained through a hash operation performed on all the programs of the base version of the software underlying the new version to be newly managed according to transaction data60, or a predetermined part of the programs. Signature65is an electronic signature generated from the information included in transaction data60through encryption with the private key of the software developer. Specifically, signature65is a value obtained as follows: A hash value is obtained by performing a hash operation on the information including software developer ID61, URL62, hash value63of the difference, and hash value64of the base version, and is encrypted with the private key of the software developer. The transaction data stored in blockchain15will now be described. FIG.10is a diagram illustrating an example of the transaction data stored in blockchain15according to the present embodiment.FIG.10specifically illustrates the transaction data managed with blockchain15by management apparatuses10A and others. One entry (one row) shown inFIG.10corresponds to one piece of transaction data. The data located in a lower portion ofFIG.10is newer transaction data. As illustrated inFIG.10, each piece of transaction data includes the URL, the new version number, the base version number, and the software developer ID of each version of the software. The information in the transaction data illustrated inFIG.10corresponds to the information included in transaction data40illustrated inFIG.7. As illustrated inFIG.10, blockchain15stores the information on the versions of the software developed up to the current point of time. Specifically, blockchain15stores the information indicating that versions 1.A1, 1.A2, and 1.A3 are developed from version 1 and that versions 1.B1 and 1.B2 are developed from version 1. The information on the versions of the software developed up to the current point of time is managed by management apparatus10A so as to prevent falsification, because the blockchain is difficult to falsify. Processing of management system1will now be described.
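Before turning to that processing, the hash values appearing in transaction data60can be illustrated briefly. The disclosure does not specify the hash function or how the difference between versions is formed, so SHA-256 and a unified text diff are assumed here purely for demonstration; the program contents are placeholders.

# Hedged sketch assuming SHA-256 and a unified diff of program sources.
import difflib
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def hash_of_difference(base_program: str, new_program: str) -> str:
    # Corresponds to hash value 63: hash of the difference between the base and new versions.
    diff = "".join(difflib.unified_diff(base_program.splitlines(keepends=True),
                                        new_program.splitlines(keepends=True)))
    return sha256_hex(diff.encode())

base_program = "print('version 1.C1')\n"        # placeholder program contents
new_program = "print('version 1.C1.D1')\n"

hash_value_64 = sha256_hex(base_program.encode())              # hash of the base version
hash_value_63 = hash_of_difference(base_program, new_program)  # hash of the difference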
FIGS.11and12are sequence diagrams illustrating first and second processings in management system1according to the present embodiment, respectively.FIGS.11and12illustrate a series of processing from the development of the new version of the software by development apparatus20A to the management of the developed version of the software by management apparatuses10A and others. As illustrated inFIG.11, in step S121, a new version of the software is completed by development apparatus20A. In step S122, development apparatus20A transmits the new version of the software developed in step S121to storage server30to store the new version of the software in storage server30. In step S131, storage server30receives the new version of the software transmitted from development apparatus20A, and stores it in memory device34. In step S132, storage server30publishes an URL indicating the location of the new version of the software stored in step S131. Storage server30then transmits the published URL to development apparatus20A. The URL can be transmitted as a reply to the software received in step S122. In step S123, development apparatus20A generates a request to issue a new version number (also referred to as new number), and transmits it to management apparatus10A. Here, the request to issue a new version number is communication data for requesting the issuing of a new number to be assigned to the new version of the software (i.e., the new version number) to management apparatus10A. The request includes at least the base version number. In step S111, management apparatus10A receives the request transmitted in step S123, and determines whether the base version included in the request is stored in blockchain15managed by management apparatus10A. In the case where management apparatus10A determines that the base version is stored in blockchain15(Yes in step S111), the processing goes to step S112. In the case where management apparatus10A determines that the base version is not stored in blockchain15(not illustrated), management apparatus10A executes predetermined error processing (such as processing to transmit a notification indicating the failure of the issuing of the new number to development apparatus20A), and terminates the processing. In this case, management apparatus10A may terminate the processing without performing any processing. Management apparatus10A determines that the base version is not stored in blockchain15, for example, when management apparatuses10A and others are caused to manage a version of software not managed by management apparatuses10A and others. In step S112, management apparatus10A issues the version number of the new version. Referring toFIG.12, in step S113, management apparatus10A notifies development apparatus20A of the version number of the new version issued in step S112. The notification of the version number of the new version can be transmitted as a reply to the request to issue the new version number in step S123. In step S124, transaction data for writing the new version in blockchain15is generated, and is transmitted to management apparatus10A. This transaction data includes the new version number transmitted in step S113or the information calculated using this new version number. In step S114, management apparatus10A validates the transaction data transmitted by development apparatus20A in step S124. Here, assume that it is determined as a result of validation of the transaction data that the transaction data is legitimate. 
In step S115, management apparatus10A transmits the transaction data to management apparatuses10B and10C. The block including the transaction data is stored in blockchain15through execution of the consensus algorithm by management apparatuses10A and others. Thus, the information on the new version of the software developed by the software developer, more specifically, the software developer ID and the version number, are stored in blockchain15, obstructing the falsification of the information after the storage thereof. In the case where the validation of the transaction data fails in step S114, that is, where it is determined that the transaction data is not legitimate, development apparatus20A may be notified of this failure. By this notification, the software developer can recognize and address the failure. This notification does not need to be performed. Management apparatus10A may store the software itself in blockchain15, and manage the software. Such an operation is more useful because not only the information on the version but also the software can be managed while the falsification of the software itself is also prevented. To do so, development apparatus20A may generate the transaction data including the software itself (i.e., the program codes of the software), and transmit the transaction data to management apparatus10A. Management apparatus10A may store the received transaction data in blockchain15. As above, in the management method according to the present embodiment, the information on the software developer who has updated the version of the software is managed by the distributed ledger. The distributed ledger is advantageous in obstructing the falsification of the possessed information and in reducing influences by system failures. Accordingly, the management method can prevent the falsification of the information under management. Moreover, the version number of the new version is issued, and the information on the software developer of the new version is managed in correspondence with the issued version number. Failures such as duplication of the version number may occur when the version number is assigned by an apparatus different from the version management system. The management method according to the present disclosure can prevent such failures of the version number and prevent the falsification of the information under management. The prevention of falsification of the information under management can be further facilitated using the hash value of the first version, the version number of the first version, and the version number of the second version. The prevention of falsification of the information under management can be further facilitated using the hash value of the first version and the hash value of the second version. The prevention of falsification of the information under management can be further facilitated using the hash value of the first version and the hash value of the difference between the first version and the second version. Moreover, the information indicating the location where the software of the second version is stored is stored in the distributed ledger together with the information on the software developer. Accordingly, the falsification of the information under management can be prevented while the falsification of the information on the location where the second version is stored is also prevented. Moreover, the tokens are provided to the software developer of the new version based on the transaction data stored so far.
Because the falsification of the transaction data stored in the distributed ledger is difficult, provision of the tokens to an inappropriate person who spoofs the software developer can be prevented. Thus, the falsification of the information under management can be prevented, preventing inappropriate provision of the tokens. Embodiment 2 In the present embodiment, a management method for software versions which provides improved transaction safety of the software will be described. Here, a management method of improving the transaction safety when the software managed by the management method for versions according to Embodiment 1 is provided to a user will be described. FIG.13is a diagram illustrating a configuration of management system2according to the present embodiment. As illustrated inFIG.13, management system2according to the present embodiment includes management apparatuses10A and others, development apparatuses20A and others, storage server30, token management apparatuses70A,70B, and70C (also referred to as70A and others), and user apparatus80. Management apparatuses10A and others according to the present embodiment are also referred to as version management apparatuses. Token management apparatuses70A and others are also simply referred to as management apparatuses. The token management apparatus may also be configured to additionally have the function of the version management apparatus. Management apparatuses10A and others, development apparatuses20A and others, and storage server30are the same as those in Embodiment 1, and their descriptions will be omitted. Token management apparatuses70A and others are management apparatuses which manage transmission and reception of tokens between software developers and a user by computers. Although an example of three token management apparatuses70A and others will be described, the number thereof may be any number of two or more. Token management apparatuses70A and others are communicably connected to each other. Although token management apparatus70A is used as a representative of token management apparatuses70A and others in the following description in some cases, the same also applies to other token management apparatuses70B and70C. Token management apparatuses70A and others can also communicate through network N. Token management apparatuses70A and others each possess a distributed ledger for managing the information on the transmission and reception of tokens. Token management apparatuses70A and others update the possessed distributed ledgers while synchronizing with each other through communication. When one of token management apparatuses70A and others obtains the information on the transmission and reception of tokens from user apparatus80, token management apparatuses70A and others each possess a copy of the obtained information. In general, the distributed ledger is advantageous in obstructing the falsification of the possessed information and in reducing influences by the system failure. User apparatus80is used by a user of the software. User apparatus80is associated with the user. User apparatus80has a communication function accessible to network N. While an example where user apparatus80is an information terminal such as a computer, a smartphone, or a tablet will be described, user apparatus80may be a home appliance provided with a communication interface accessible to network N. The software stored in storage server30is provided to user apparatus80in response to an operation of the user. 
The software provided to user apparatus80is provided to a home appliance, and the home appliance then operates using the software. In the case where user apparatus80is a home appliance, user apparatus80operates using the software provided after the software is provided. When the software is provided, user apparatus80transmits transaction data for providing a token to the software developer to token management apparatus70A. The tokens are provided to the software developer in exchange for the software provided, for example. User apparatus80and token management apparatuses70A and others will now be described in more detail. FIG.14is a diagram illustrating a configuration of user apparatus80according to the present embodiment. As illustrated inFIG.14, user apparatus80includes communicator81, transaction generator82, obtainer83, display84, inputter85, and memory device86. Communicator81is a communication interface apparatus connected to network N. User apparatus80is communicable with storage server30through communicator81. Transaction generator82is a processor which generates transaction data including information on the transmission and reception of tokens. The transaction data includes at least a user ID, a version number, a token price, and the electronic signature of the user. The electronic signature of the user is generated from the information included in the transaction data through encryption with the private key of the user. The user ID and the private key can be obtained by reading these from memory device86by transaction generator82. Transaction generator82transmits the generated transaction data through communicator81to token management apparatus70A. The token price represents the numerical quantity of token provided according to the transaction data. Obtainer83is a processor which obtains software. For example, obtainer83transmits a request to obtain software, and obtains the software transmitted in response to the request. More specifically, obtainer83determines one version to be obtained (also referred to as requested version) among one or more versions stored in storage server30, and transmits a request to obtain the software of the requested version through communicator81to storage server30. Obtainer83receives the software of the requested version, which has been transmitted by storage server30in response to the request to obtain software, through communicator81. Obtainer83may obtain the software, based on the reception in inputter85of an instruction to obtain the software from the user or based on another trigger (such as release of a new version). The requested version may be determined by the specification by the user, which is accepted by inputter85, or may be determined to be a predetermined version of the software (such as the latest version, or a stable version whose stable operation is verified), for example. Display84is a display screen on which images are displayed. Display84displays an image indicating a variety of pieces of information on user apparatus80. Display84displays an image presenting to the user a question whether to obtain the software or not, for example. Inputter85is an input interface which accepts an input from the user. Inputter85is, for example, a touch screen which is arranged so as to be superimposed on display84and accepts an input from touch operation of the user. Inputter85may also be a keyboard, a mouse, or a touch pad. Inputter85accepts an input of an instruction to obtain the software, for example. 
Memory device86stores the information on the user. The information on the user specifically includes a user ID, i.e., the identification information identifying the user, and the key information of the user (including the private key). The user ID is information which can uniquely identify the user. The user ID and the key information stored in memory device86are read by transaction generator82. FIG.15is a diagram illustrating a configuration of token management apparatus70A according to the present embodiment. As illustrated inFIG.15, token management apparatus70A includes communicator71, transaction validator72, ID obtainer73, and ledger manager74. Communicator71is a communication interface apparatus connected to network N. Token management apparatus70A is communicable with user apparatus80, storage server30, and other token management apparatuses70B and70C through communicator71. Transaction validator72is a processor which validates the legitimacy of the transaction data. Transaction validator72receives the transaction data through communicator71from user apparatus80. The transaction data to be received includes a user ID, a version number, a token price, and the electronic signature of the user. When receiving the transaction data, transaction validator72validates the legitimacy of the transaction data using the electronic signature included in the received transaction data. The validation of the legitimacy of the transaction data is performed using the information included in the transaction data and the public key of the user, and it is determined whether the transaction data is legitimate or not. More specifically, it is confirmed that the transaction data was surely generated by user apparatus80and that the transaction data has not been falsified since its generation. The validation of the legitimacy of the transaction data is also simply referred to as validation of the transaction data. In the case where the transaction data obtained by transaction validator72does not include the software developer ID of the software developer of the requested version, ID obtainer73obtains the software developer ID. ID obtainer73is a processor which obtains the software developer ID of the software developer of the requested version of the software. ID obtainer73determines whether or not the transaction data obtained by transaction validator72includes the software developer ID of the software developer of the requested version. In the case where the transaction data does not include the software developer ID, ID obtainer73asks management apparatus10A for the software developer ID of the requested version, using the version number of the requested version included in the transaction data. In response to this inquiry, management apparatus10A transmits the software developer ID of the software developer of the requested version to token management apparatus70A (ID obtainer73). When receiving the software developer ID, ID obtainer73generates transaction data including the user ID, the version number, and the token price included in the transaction data received by transaction validator72, as well as the software developer ID obtained from management apparatus10A. Ledger manager74is a processor which manages the distributed ledgers for managing the transmission and reception of tokens. Although an example where the distributed ledger is blockchain75will be described here, another type of distributed ledger (such as IOTA or a hashgraph) may also be used.
When transaction validator72validates the transaction data and the transaction data does not include the software developer ID of the software developer of the requested version, ledger manager74stores the transaction data generated by ID obtainer73in blockchain75. In contrast, when the transaction data includes the software developer ID of the software developer of the requested version, ledger manager74stores the transaction data validated by transaction validator72in blockchain75. When storing the transaction data in blockchain75, ledger manager74synchronizes the transaction data by transmitting the transaction data to other token management apparatuses70B and70C. Ledger manager74then executes a consensus algorithm between token management apparatus70A and other token management apparatuses70B and70C. In the case where an agreement is reached by the consensus algorithm, ledger manager74generates a block including the transaction data, and stores the generated block in blockchain75. The transaction data stored in blockchain75by ledger manager74is also referred to as first transaction data. Although one example of consensus algorithms is Practical Byzantine Fault Tolerance (PBFT), any other consensus algorithm such as Proof of Work (PoW) or Proof of Stake (PoS) may also be used. FIG.16is a diagram illustrating transaction data90as a first example of the transaction data according to the present embodiment. Transaction data90is an example of the transaction data not including the software developer ID of the software developer of the version requested by the user. Transaction data90may be used in the case where user apparatus80does not possess the software developer ID. As illustrated inFIG.16, transaction data90includes user ID91, requested version number92, token price93, and signature94. User ID91is the user ID of the user corresponding to user apparatus80, which is the transmission source of transaction data90. Requested version number92is the version number of the requested version, i.e., the version to be newly obtained by user apparatus80according to transaction data90. Token price93is the price of the tokens provided to the software developer by the user according to transaction data90. Signature94is an electronic signature generated from the information included in transaction data90through encryption with the private key of the user. Specifically, signature94is a value obtained as follows: A hash value is obtained by performing a hash operation on the information including user ID91, requested version number92, and token price93, and is encrypted with the private key of the user. When receiving transaction data90, with reference to requested version number92included in transaction data90, token management apparatus70A obtains the identification information of the software developer of the version according to requested version number92from management apparatuses10A and others, and controls the provision of the tokens from the user to the software developer. FIG.17is a diagram illustrating transaction data90A as a second example of the transaction data according to the present embodiment. Transaction data90A is an example of the transaction data including the software developer ID of the software developer of the version requested by the user. Transaction data90A may be used in the case where user apparatus80possesses the software developer ID. As illustrated inFIG.17, transaction data90A includes user ID91, requested version number92, token price93, software developer ID93A, and signature94.
Transaction data90A corresponds to transaction data90including software developer ID93A. User ID91, requested version number92, and token price93are the same as those in transaction data90. Software developer ID93A is the software developer ID of the software developer of the requested version or the version to be newly obtained by the user according to transaction data90A. Signature94is an electronic signature generated from the information included in transaction data90A through encryption with the private key of the user. Specifically, signature94is a value obtained as follows: A hash value is obtained by performing a hash operation on the information including user ID91, requested version number92, token price93, and software developer ID93A, and is encrypted with the private key of the user. When receiving transaction data90A, with reference to requested version number92and software developer ID93A included in transaction data90A, token management apparatus70A determines that the software developer of requested version number92is surely the software developer shown in software developer ID93A, and controls the transmission and reception of tokens. Token management apparatus70A may control the transmission and reception of tokens without performing the determination above. Such control is also enabled in the case where the legitimacy of software developer ID93A included in transaction data90A is guaranteed. FIG.18is a diagram illustrating the transaction data stored in blockchain75according to the present embodiment.FIG.18specifically illustrates the transaction data managed by token management apparatuses70A and others according to blockchain75. One entry (one row) shown inFIG.18corresponds to one piece of transaction data. The data located in a lower portion ofFIG.18is newer transaction data. As illustrated inFIG.18, each piece of transaction data includes the information indicating the token price, the sender of the tokens, and the destination of the tokens as the information on the transmission and reception of the tokens. The sender and the token price in the transaction data illustrated inFIG.18correspond to user ID91and token price93included in transaction data90illustrated inFIG.16, respectively. The sender, the destination, and the token price in the transaction data illustrated inFIG.18correspond to user ID91, software developer ID93A and token price93included in transaction data90A illustrated inFIG.17, respectively. As inFIG.18, blockchain75stores the information on the transmission and reception of the tokens when the software is provided in the past from the current time. Specifically, blockchain75stores the information indicating that user A provided 50 tokens (i.e., tokens equivalent to a token price of 50) to software developer X, and the like. Token management apparatus70A manages the information on the transmission and reception of the tokens when the software is provided in the past from the current time such that the information is not falsified, because the blockchain is difficult to falsify. Processing of management system2will now be described. The processing of management system2will be described as the following two cases, that is, (1) the case where user apparatus80transmits the transaction data not including the software developer ID, and (2) the case where user apparatus80transmits the transaction data including the software developer ID. 
(1) An example where user apparatus80transmits the transaction data not including the software developer ID of the software developer of the version requested by the user (seeFIG.16) will be described. In this case, user apparatus80transmits the transaction data to token management apparatus70A, the transaction data including request information (also referred to as second transaction data) and further including the identification information identifying the user as a sender of the predetermined number of tokens. From management apparatus10A, token management apparatus70A obtains the identification information of the software developer of the requested version indicated by the request information included in the second transaction data. When storing the first transaction data in blockchain75, token management apparatus70A generates first transaction data which includes, as the sender of the predetermined number of tokens, the identification information identifying the user which is included in the second transaction data, and, as the destination of the predetermined number of tokens, the identification information obtained from management apparatus10A. Token management apparatuses70A and others each store the generated first transaction data in blockchain75. FIGS.19and20are sequence diagrams illustrating the processing of management system2according to the present embodiment. As illustrated inFIG.19, in step S281, user apparatus80accepts a user's input of an instruction to obtain a new version of the software. In step S282, user apparatus80generates transaction data for obtaining the new version of the software, and transmits it to token management apparatus70A. This transaction data does not include the software developer ID of the software developer of the version requested by the user. In step S271, token management apparatus70A receives the transaction data transmitted from user apparatus80in step S282, and validates the received transaction data. Here, assume that, as a result of the validation of the transaction data, token management apparatus70A determines that the transaction data is legitimate. Token management apparatus70A also determines that the transaction data does not include the software developer ID. In step S272, token management apparatus70A transmits, to management apparatus10A, a communication packet inquiring about the software developer ID of the software developer of the version requested by the user. The communication packet includes at least the requested version number. In step S211, management apparatus10A receives the communication packet for inquiry transmitted in step S272, and identifies the software developer ID of the software developer of the version requested by the user with reference to blockchain15managed by ledger manager14. Management apparatus10A transmits the communication packet including the identified software developer ID to token management apparatus70A. In step S273, token management apparatus70A receives the communication packet transmitted in step S211, and obtains the software developer ID included in the received packet. Token management apparatus70A generates the transaction data including user ID91, requested version number92, and token price93of the transaction data (seeFIG.16) received in step S271, and the software developer ID obtained above. The generated transaction data has the form of transaction data90A inFIG.17.
In step S274, token management apparatus70A transmits the transaction data generated in step S273to token management apparatuses70B and70C. The block including the transaction data is then stored in blockchain75through execution of the consensus algorithm by token management apparatuses70A and others. As a result, the information on the version requested by the user, more specifically, the user ID, the requested version number, the token price, and the software developer ID are stored in blockchain75, thereby obstructing the falsification of the information after the storage thereof. With reference toFIG.20, in step S275, token management apparatus70A transmits a communication packet to storage server30, the communication packet including a notification indicating a permission to provide the software to the user (also referred to as notification of permission). In step S231, storage server30issues access information, and transmits the communication packet including the access information to user apparatus80. Here, the access information refers to the information needed for user apparatus80to download the software stored in storage server30, and includes at least location information indicating the location where the software is stored (specifically, the URL) and authentication information. The authentication information indicates the user ID and the password, for example. In step S283, user apparatus80receives the communication packet transmitted in step S231, and downloads the software based on the access information included in the communication packet. Specifically, using the authentication information, user apparatus80accesses to the location indicated in the location information included in the access information, and downloads the software. (2) An example where user apparatus80transmits the transaction data including the software developer ID of the software developer of the version requested by the user (seeFIG.17) will be described. In this case, user apparatus80transmits first transaction data to token management apparatus70A, the first transaction data including the identification information identifying the user as the sender of the predetermined number of tokens and the identification information of the software developer of the requested version as the destination of the predetermined number of tokens. When storing the first transaction data in blockchain75, token management apparatus70A stores the first transaction data received from user apparatus80in blockchain75. FIG.21is a sequence diagram illustrating third processing in management system2according to the present embodiment. Steps S281and S282illustrated inFIG.21are the same as those in the processing inFIG.19. In step S271A, token management apparatus70A receives the transaction data transmitted from user apparatus80in step S282, and validates the received transaction data. Here, assume that from the result of the validation of the transaction data, token management apparatus70A determines that the transaction data is legitimate. Token management apparatus70A determines that the transaction data includes the software developer ID. In step S274, token management apparatus70A transmits the transaction data received in step S271A to token management apparatuses70B and70C. The block including the transaction data is then stored in blockchain75through execution of the consensus algorithm by token management apparatuses70A and others. 
As a result, the information on the version requested by the user, more specifically, the user ID, the requested version number, the token price, and the software developer ID are stored in blockchain75, thereby obstructing the falsification of the information after the storage thereof. After step S274, a series of processing illustrated inFIG.20is executed. In the series of processing illustrated inFIG.21, token management apparatus70A may further determine whether the software developer ID included in the transaction data obtained from user apparatus80is surely the software developer ID of the software developer of the requested version included in the transaction data, and then may store the transaction data in the blockchain. In other words, token management apparatus70A may obtain first identification information, which is the identification information of the software developer of the requested version indicated by the request information included in the first transaction data, from management apparatus10A, and may store the first transaction data in blockchain75only when the first identification information matches the second identification information, which is the identification information included in the first transaction data. The sequence of the processing in this case will now be described. FIG.22is a sequence diagram illustrating fourth processing in management system2according to the present embodiment. Among the steps illustrated inFIG.22, those other than steps S271A and S272A are the same as those in the processing inFIG.19or21. In the processing illustrated inFIG.22, in the case where token management apparatus70A determines in step S271A that the transaction data includes the software developer ID, token management apparatus70A transmits a communication packet for inquiry of the software developer ID to management apparatus10A (step S272), and obtains the software developer ID. In step S272A, token management apparatus70A determines whether the software developer ID obtained from management apparatus10A (corresponding to the first identification information) matches the software developer ID included in the transaction data received from user apparatus80in step S271A (corresponding to the second identification information). Token management apparatus70A then stores the block including the transaction data received in step S271A in blockchain75only when there is a match between these two software developer IDs. Such a configuration can prevent the storage of the transaction data in blockchain75when the software developer ID obtained from user apparatus80does not match the software developer ID managed in management apparatus10A, that is, when the software developer ID has a problem. The processing of token management apparatuses70A and others may be implemented by a smart contract. In other words, ledger manager74in token management apparatus70A stores a blockchain including the transaction data including smart contract codes, which are program codes for the smart contract. When user apparatus80transmits the transaction data in step S282, user apparatus80transmits the transaction data which requires the execution of the smart contract. As a result, the program codes are executed by the smart contract, thereby executing the processing of token management apparatuses70A and others illustrated inFIG.19or21, andFIG.20. 
In such a configuration, the program is executed in response to the transmission of the transaction data by user apparatus80, executing the series of processing. As a result, the result obtained through the execution of the program is stored in the blockchain, advantageously obstructing the falsification of the result of the series of processing. As described above, in the management method according to the present embodiment, the distributed ledger manages provision of the tokens from the user to the software developer of the new version of the software. The distributed ledger is advantageous in obstructing the falsification of the possessed information and in reducing influences by the system failure. For this reason, the management of the transmission and reception of the tokens by the distributed ledger prevents the falsification of the history of the transmission and reception of the tokens and the missing of the history. Thus, the management method for software versions can improve the transaction safety of the software. Moreover, the identification information of the software developer of the requested version received from a user apparatus using the second transaction data is obtained from the version management apparatus, and the identification information of the software developer is used as the destination of the tokens. Thereby, the distributed ledger manages provision of the tokens from the user to the software developer of the new version of the software even when the user does not know the software developer of the requested version, that is, when the user apparatus does not possess the identification information of the software developer. Thus, the transaction safety of the software can be improved even when the user does not know the software developer of the requested version. Moreover, the transaction data indicating the provision of the tokens from the user apparatus to the software developer of the requested version is transmitted from the user apparatus to the token management apparatus. Thus, the distributed ledger manages the provision of the tokens from the user to the software developer of the new version of the software even when the token management apparatus does not obtain the information on the software developer of the requested version from another apparatus. Accordingly, the transaction safety of the software can be improved even when the token management apparatus does not obtain the information on the software developer of the requested version from another apparatus. Moreover, the identification information of the software developer of the requested version received from the user apparatus using the first transaction data is stored and managed in the distributed ledger only when this identification information matches the identification information of the software developer managed in the version management apparatus. Accordingly, if the identification information of the software developer possessed by the user apparatus is not correct (i.e., is an error, or is invalid), the tokens are not provided from the user to the software developer. This prevents the provision of the tokens to a false software developer. Thus, the transaction safety of the software can be improved by preventing the provision of the tokens to a dishonest software developer when the information possessed in the user apparatus is false. 
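As a hedged sketch of the match check just summarized (the first identification information obtained from the version management apparatus must equal the second identification information supplied by the user apparatus before the transaction data is stored), the snippet below uses illustrative names and omits the consensus algorithm; it is not the patented implementation.

```python
# Illustrative only: store the first transaction data in the ledger solely when the
# developer ID supplied by the user matches the ID managed by the management apparatus.

def verify_and_store(first_tx: dict, ledger: list, lookup_developer_id) -> bool:
    managed_id = lookup_developer_id(first_tx["requested_version"])  # first identification information
    supplied_id = first_tx["destination"]                            # second identification information
    if managed_id != supplied_id:
        return False         # mismatch: nothing is stored and no tokens are provided
    ledger.append(first_tx)  # consensus among the token management apparatuses omitted here
    return True


ledger = []
lookup = {"1.A3": "developer-Z"}.get
print(verify_and_store({"sender": "user-A", "destination": "developer-Z",
                        "requested_version": "1.A3", "token_price": 1000},
                       ledger, lookup))  # True: IDs match, transaction data stored
print(verify_and_store({"sender": "user-A", "destination": "developer-Q",
                        "requested_version": "1.A3", "token_price": 1000},
                       ledger, lookup))  # False: IDs differ, transaction data rejected
```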
Moreover, after the provision of the tokens from the user to the software developer is managed by the distributed ledger, the software of the requested version is provided to the user. Thus, a transaction is performed more safely in which the software is provided in exchange for the tokens provided from the user apparatus. Accordingly, the transaction safety of the software can be further improved. Moreover, the series of processing such as the provision of the tokens from the user to the software developer is automatically executed based on the smart contract codes stored in the distributed ledger without intervention by any other person or system. Thus, the series of processing can be implemented with higher safety by the smart contract. Accordingly, the transaction safety of the software can be further improved. Moreover, the version management apparatuses manage the software developers of the versions of the software according to the distributed ledgers, and the information of the software developer is used as the destination of the tokens. The distributed ledger is advantageous in obstructing the falsification of the possessed information and in reducing influences by the system failure. Thus, the falsification of the information of the software developer of the version can be prevented, further improving the transaction safety of the software. Embodiment 3 In the present embodiment, a management method for software versions will be described in which the transaction safety of software is improved. In particular, an improvement in transaction safety when tokens are distributed and provided to several software developers of the software will be described. FIG.23is a diagram illustrating a method of providing tokens in management system2according to the present embodiment. As illustrated inFIG.23, assume that versions 1.A1, 1.A2, and 1.A3 are developed in this order based on version 1. Here, the software developers of versions 1.A1, 1.A2, and 1.A3 are software developers X, Y, and Z, respectively. In this case, although software developer Z is directly responsible for the development of version 1.A3, it is considered that the software developers X and Y of versions 1.A1 and 1.A2 underlying the development also contribute to the development of version 1.A3. In this consideration, it is appropriate that not only software developer Z but also software developers X and Y receive tokens when version 1.A3 of the software is provided to user A. For example, as illustrated inFIG.23, in the case where user A provides 1000 tokens to software developer Z in exchange for the obtained version 1.A3 of the software, it is appropriate that 200 tokens of the 1000 tokens are provided to software developer Y and 50 tokens of the 200 tokens are provided to software developer X. Such a method of providing tokens will be described. The entire configuration of the management system according to the present embodiment is the same as that in management system2according to Embodiment 2. The management system according to the present embodiment includes management apparatuses10D,10E, and10F (also referred to as management apparatuses10D and others) rather than management apparatuses10A and others according to Embodiment 2, and includes token management apparatuses70D,70E, and70F (also referred to as token management apparatuses70D and others) rather than token management apparatuses70A and others according to Embodiment 2. These apparatuses will now be described. 
FIG.24is a diagram illustrating a configuration of management apparatus10D according to the present embodiment. As illustrated inFIG.24, management apparatus10D includes branch information generator17in addition to the components of management apparatus10A according to Embodiment 2. Branch information generator17is a processor which generates branch information indicating the history of development of versions. Branch information generator17receives a version number of a requested version from token management apparatus70D through communicator11, and then generates the branch information indicating the history of the requested version with reference to blockchain15managed by ledger manager14. Branch information generator17then transmits the generated branch information to token management apparatus70D. FIG.25is a diagram illustrating a configuration of token management apparatus70D according to the present embodiment. As illustrated inFIG.25, token management apparatus70D includes token distributor76in addition to the components of token management apparatus70A according to Embodiment 2. Token distributor76is a processor which performs processing to distribute the tokens provided from a user to two or more software developers. Token distributor76receives transaction data from user apparatus80, the transaction data indicating that tokens are provided from a user to a software developer. Token distributor76then obtains branch information from management apparatus10D, and generates two or more pieces of transaction data based on the obtained branch information such that the tokens provided from the user are distributed to the two or more software developers. Token distributor76transmits a communication packet including at least a requested version to management apparatus10D to obtain the branch information from management apparatus10D. The two or more pieces of transaction data generated by token distributor76are subjected to the consensus algorithm by token management apparatuses70D and others, and are stored in blockchain75by ledger manager74. FIG.26is a diagram illustrating one example of the branch information according to the present embodiment.FIG.26illustrates branch information table T1shown as one example of a table of the branch information. Although branch information table T1shows the branch information of two previous versions of a requested version as an example, it may be the branch information of one previous version of the requested version or may be the branch information of three or more previous versions of the requested version. Branch information table T1shows the version numbers and the software developers for the requested version, the first-previous version of the requested version as a version underlying the development thereof, and the second-previous version of the requested version. Specifically, branch information table T1shows that the software developer of version 1.A3, i.e., the requested version, is software developer Z. It also shows that the software developers of the first- and second-previous versions of the requested version (i.e., versions 1.A2 and 1.A1) are software developers Y and X, respectively. 
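A minimal sketch of branch information of the kind shown in table T1 follows; the record layout and helper name are assumptions made for illustration and are not the disclosed format.

```python
# Illustrative branch information: the requested version plus the versions it was
# developed from, each paired with its software developer (newest first).

BRANCH_INFORMATION = [
    {"version": "1.A3", "developer": "developer-Z"},  # requested version
    {"version": "1.A2", "developer": "developer-Y"},  # first-previous version
    {"version": "1.A1", "developer": "developer-X"},  # second-previous version
]


def developers_in_history(branch_info):
    """Return the developers from the newest to the oldest version in the history."""
    return [entry["developer"] for entry in branch_info]


print(developers_in_history(BRANCH_INFORMATION))
# ['developer-Z', 'developer-Y', 'developer-X']
```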
FIG.27is a diagram illustrating an example of two or more pieces of transaction data for distributing tokens according to the present embodiment.FIG.27specifically illustrates transaction data managed by token management apparatuses70D and others according to blockchain75, the transaction data indicating that the tokens provided by user A are distributed to software developers X, Y, and Z. The format of the transaction data illustrated inFIG.27is the same as that inFIG.18, where one entry (one row) corresponds to one piece of transaction data. Specifically, transaction data101illustrated inFIG.27indicates that 1000 tokens are provided from user A to software developer Z. Transaction data102indicates that 200 tokens are provided from software developer Z to software developer Y. Transaction data103indicates that 50 tokens are provided from software developer Y to software developer X. According to the three pieces of transaction data above, token management apparatuses70D and others perform the management as follows: 1000 tokens are provided from user A to software developer Z in exchange for the software of the requested version obtained by user A, 200 tokens are provided from software developer Z to software developer Y, and 50 tokens are provided from software developer Y to software developer X. Here, the ratio of the tokens distributed and provided to the software developers is also referred to as distribution ratio. In this example, the distribution ratio is represented as Z:Y:X=1000:200:50. The distribution ratio may be represented as Z:Y:X=800:150:50 where the token price provided from each software developer is subtracted from the token price received by the software developer. In such a case where the two or more software developers include software developers of versions older than the requested version, the distribution ratio among the software developers of the requested version and its older versions may be controlled such that a smaller number of tokens are distributed to the software developer of an older version. This is because an appropriate distribution ratio of the tokens should be set according to the degree of contribution to the development of the version. The tokens are distributed and provided from user A to software developers X, Y, and Z by storing the three pieces of transaction data shown inFIG.27in blockchain75, thereby preventing the falsification of the transaction data thereafter. FIG.28is a sequence diagram illustrating processing in the management system according to the present embodiment. As illustrated inFIG.28, steps S381, S382, and S371are the same as steps S281, S282, and S271inFIG.19. In step S372, token management apparatus70D transmits a communication packet including an inquiry about the branch information to management apparatus10D. This communication packet includes at least the version number of the requested version. In step S311, management apparatus10D obtains the version number of the requested version from the communication packet transmitted in step S372, creates the branch information indicating the version history to the requested version, and transmits the branch information to token management apparatus70D. For example, in the case where the requested version is version 1.A3 illustrated inFIG.23, the branch information including versions 1.A1 and 1.A2 is created. In step S373, token management apparatus70D determines the distribution ratio of the tokens based on the branch information. 
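As a rough sketch of how the chained transfers of the FIG.27 example might be assembled once a distribution ratio is known, the snippet below uses the numbers from the example (1000, 200, 50) and illustrative names; the disclosure leaves the actual ratio to the degree of contribution of each developer.

```python
# Illustrative only: chain token transfers user -> newest developer -> ... -> oldest developer.

def build_distribution_transactions(user_id, developers, amounts):
    """developers and amounts are ordered from the newest to the oldest version."""
    senders = [user_id] + developers[:-1]
    return [
        {"sender": s, "destination": d, "tokens": t}
        for s, d, t in zip(senders, developers, amounts)
    ]


txs = build_distribution_transactions(
    "user-A", ["developer-Z", "developer-Y", "developer-X"], [1000, 200, 50])
for tx in txs:
    print(tx)
# {'sender': 'user-A', 'destination': 'developer-Z', 'tokens': 1000}
# {'sender': 'developer-Z', 'destination': 'developer-Y', 'tokens': 200}
# {'sender': 'developer-Y', 'destination': 'developer-X', 'tokens': 50}
```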
In step S374, according to the distribution ratio of the tokens determined in step S373, token management apparatus70D generates two or more pieces of transaction data indicating that the tokens are provided from the user to the software developers. In step S375, token management apparatus70D transmits the transaction data generated in step S374to token management apparatuses70E and70F. The block including the transaction data is then stored in blockchain75through execution of the consensus algorithm by token management apparatuses70D and others. Thereby, the information on the tokens provided from the user to several software developers is stored in blockchain75, obstructing the falsification of the information thereafter. The blockchain in the embodiments above will now be supplementally described. FIG.29is a diagram illustrating a data structure of the blockchain. The blockchain is composed of blocks (recording units) connected into a chain. One block has pieces of transaction data and the hash value of the block immediately before it. Specifically, block B2includes the hash value of block B1immediately before block B2. The hash value obtained from an arithmetic operation performed on the pieces of transaction data included in block B2and the hash value of block B1is included in block B3as the hash value of block B2. Thus, the blocks are connected into a chain while the contents of the previous blocks are included as hash values, thereby effectively preventing the falsification of the recorded transaction data. Any change in past transaction data will result in a hash value of the block different from that before the change. To make the falsified block look legitimate, all the subsequent blocks would have to be regenerated, which is very difficult in reality. Such properties ensure the difficulty of falsifying the blockchain. FIG.30is a diagram illustrating a data structure of the transaction data. The transaction data illustrated inFIG.30includes transaction body P1and electronic signature P2. Transaction body P1is a body of data included in the transaction data. Electronic signature P2is generated by signing the hash value of transaction body P1with the signature key of the creator of the transaction data, more specifically, encrypting the hash value with the private key of the creator. Because the transaction data has electronic signature P2, falsification of the transaction data is substantially impossible. Thus, electronic signature P2prevents the falsification of the transaction body. As described above, in the management method according to the present embodiment, in the case where two or more software developers are responsible for the development of the requested version, the distribution and provision of the tokens from the user to the two or more software developers is managed by the distributed ledger. Accordingly, the transaction safety of the software can be improved even when two or more software developers are responsible for the development of the requested version. Moreover, when two or more software developers are responsible for the development of the requested version, tokens can be provided in the distribution ratio such that a smaller number of tokens are provided to a software developer of an older version, in other words, a larger number of tokens are provided to a software developer of a newer version. 
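The snippet below is a toy illustration of the block chaining and electronic signature described above forFIG.29andFIG.30; SHA-256 and an HMAC stand-in are simplifications chosen for brevity, not the primitives prescribed by the disclosure.

```python
# Illustrative only: blocks chained by hash values, and a transaction body "signed"
# with the creator's key. Changing any past transaction changes every later hash.

import hashlib
import hmac
import json


def block_hash(transactions, previous_hash):
    payload = json.dumps({"tx": transactions, "prev": previous_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def sign_transaction_body(body, private_key: bytes):
    """Stand-in for signing the hash of the transaction body with the creator's key."""
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).digest()
    return hmac.new(private_key, digest, hashlib.sha256).hexdigest()


prev = "0" * 64
for txs in (["tx-1"], ["tx-2"], ["tx-3"]):
    prev = block_hash(txs, prev)   # each block carries the hash of the one before it
    print(prev[:16], "...")

print(sign_transaction_body({"sender": "user-A", "tokens": 1000}, b"creator-private-key")[:16], "...")
```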
In general, it is considered that a software developer of a version closer to the requested version or a newer version has greater contribution to the development of the requested version. The management method according to the present embodiment can implement such a distribution ratio of tokens according to the degree of contribution. Accordingly, the transaction safety of the software can be further improved through the distribution of the tokens according to the degree of contribution of each of the two or more software developers responsible to the development of the requested version. In the embodiments above, the components may be implemented as dedicated hardware, or may be implemented by executing software programs suitable for the components. The components each may be implemented by a program executer, such as a CPU or a processor, which reads and executes the software program recorded on a recording medium, such as a hard disk or a semiconductor memory. Here, the management apparatus and the like in the embodiments are implemented with the following software program. Namely, the program is a program causing a computer to execute a management method for software versions, the management method to be executed by a version management system, the management method including: receiving request information by a first management apparatus among management apparatuses which are included in the version management system and have distributed ledgers, the request information indicating a requested version requested by a user; and storing first transaction data in the distributed ledgers through execution of a consensus algorithm by the management apparatuses, the first transaction data indicating that the user provides a predetermined number of tokens to a software developer who has developed the requested version. Although the management methods according to one or more aspects have been described based on the embodiments, these embodiments should not be construed as limitation to the present disclosure. A variety of modifications of the present embodiments conceived by persons skilled in the art and embodiments in combination with components in different embodiments may also be included in the scope of one or more aspects without departing from the gist of the present disclosure. Although only some exemplary embodiments of the present disclosure have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. INDUSTRIAL APPLICABILITY The present disclosure is a management method for software versions, and can be used in a management system which prevents falsification of information under management.
95,796
11861361
DETAILED DESCRIPTION At least some embodiments described herein identify repetitive differences between the content of different versions of a document, based on identifying patterns in those differences. The embodiments described herein then hide or give a different visual treatment to these repetitive or “pattern” differences versus non-repetitive or “non-pattern” differences when presenting a set of differences between these versions of the document. As a result, non-repetitive changes stand out and can be easily reviewed. As used herein, a “pattern difference” is a difference between two versions of a document, and which matches an identified pattern that explains a transformation from a first string in one version of the document to a second string in another version of the document. Pattern differences are repetitive—occurring two or more times—such that a single pattern matches a plurality of pattern differences. As an example, a refactoring change to source code (e.g., renaming a variable, renaming a function) may be a pattern difference (e.g., if it occurs repeatedly). As used herein, a “non-pattern difference” is any difference for which there is no identified pattern. This may be because the difference only occurs once, or because no pattern has been identified for the difference (even if one could exist) even though it is repetitive. As an example, a structural change to source code (e.g., altering the logic of a function, changing a mathematical operation, changing a condition) may be a non-pattern difference. In some embodiments, differences are specified as part of a change proposal. As used herein, a “change proposal” comprises a set of differences between the content of files in two branches of a version control system repository (e.g., git, subversion, mercurial, and the like), and are a set of proposed changes to be merged from one branch to the other. One example of a change proposal is a pull request in GitHub. FIG.1illustrates an example computer architecture100that facilitates distinguishing pattern differences between document versions from non-pattern differences between the document versions. As shown, computer architecture100includes a computer system101comprising a processor102(or a plurality of processors), a memory103, and a storage media104, all interconnected by a bus106. As shown, computer system101may also interconnect, via a network107, to one or more computer system(s)108(i.e., using a network interface105). The storage media104is illustrated as storing computer-executable instructions implementing at least a differencing component109and a user interface component110. In general, the differencing component109operates to identify a set of differences between contents of two (or more) different versions of a document, and to interoperate with the user interface component110to present those differences at a user interface. In embodiments, a set of differences between versions of a document describes one or more insertions, replacements, and/or deletions needed to transform one version of the document into another version of the document. Differences may be line-oriented or character-oriented. Either way, a difference identifies a string (i.e., one or more characters) in one version of a document that is different than a corresponding string in another version of the document (e.g., a replacement), or a string that exists in one version of the document but not another version of the document (e.g., an insertion or deletion). 
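A minimal, assumed representation of one difference of the kind just described (a replacement, insertion, or deletion at some location) is sketched below; the field names are illustrative and not taken from the disclosure.

```python
# Illustrative record for a single difference between two versions of a document.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Difference:
    line: int
    old: Optional[str]  # None for an insertion
    new: Optional[str]  # None for a deletion


example = [
    Difference(line=7, old="oldName", new="newName"),   # replacement
    Difference(line=9, old=None, new="int added = 1;"),  # insertion
]
print(example[0])
```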
The storage media104is also illustrated as storing different versions of a document111on which the differencing component109operates. As shown, these versions include at least a document111a(i.e., a first version of document111) and a document111n(i.e., a second version of document111). As indicated by ellipses, there can be any number of versions of the document111. In some embodiments, one or more versions of the document111are obtained from at least one of computer system(s)108. However, one or more versions of the document111may originate at computer system101(e.g., via use of a document editor). The document111can comprise a variety of data but, in embodiments, document111comprises human-readable textual data, such as one or more passages of written human language, one or more portions of computer source code, computer-generated log data (e.g., relating to execution of a software component), and the like. In some embodiments, document111comprises computer source code. In these embodiments, the differencing component109and/or the user interface component110may be part of an application that is usable as part of code editing and/or management. In various examples, this application is a dedicated diffing program, a generic text editor, an integrated development environment (IDE), a version control system front-end, and the like. In some embodiments, the computer system101is an end-user computer system, such that the differencing component109operates with the user interface component110to present diffing results directly at the computer system101(e.g., at a display device connected thereto). In other embodiments, the computer system101is a server computer system, such that the differencing component109operates with the user interface component110to present diffing results to another computer system (e.g., one of computer system(s)108) via the network107. In embodiments, the differencing component109operates to identify, from a set of a plurality of differences between two versions of a document, a subset of pattern differences, and to visually distinguish those pattern differences from other differences in the set when presenting the set of differences with the user interface component110. This may include hiding or giving a different visual treatment to pattern differences versus non-pattern differences. For example, the differencing component109may visually highlight non-pattern differences, while presenting pattern differences without highlighting—effectively deemphasizing or hiding the pattern differences. As a result, non-pattern differences (e.g., non-repetitive changes) stand out and can be easily reviewed. FIG.2illustrates an example200of details of the differencing component109ofFIG.1. The differencing component109is shown as including a document identification component201and a difference identification component202. In embodiments, the document identification component201identifies at least two versions of a document on which to perform a differencing analysis. In embodiments, the difference identification component202identifies a set of a plurality of differences (i.e., changes) between these document versions. In some embodiments, the document identification component201identifies each of a first and a second version of the document (e.g., document111aand document111n), and then the difference identification component202performs an analysis (e.g., based on edit distance) to determine the set of differences. 
In other embodiments, the document identification component201identifies a first version of the document (e.g., document111a) and constructs the second version of the document based on a specification of the set of differences (e.g., as part of a change proposal, such as a GitHub pull request). In these embodiments, the difference identification component202identifies the set of differences from that specification. In embodiments, the differencing component109presents this set of differences at a user interface, such as via an interaction between the presentation component205and the user interface component110. For example,FIG.3Aillustrates an example of a diffing user interface300athat highlights all differences between two versions of a document. In diffing user interface300athere is a left pane301presenting a first version of a document (e.g., document111a), and a right pane302presenting a second version of the document (e.g., document111n). Additionally, when presenting these versions of the document, the diffing user interface300ahighlights each line that differs across the document versions. For example, when presenting lines 11 to 48 of the document, the diffing user interface300ahighlights lines 13, 24, 40, 42, and 43 due to changes renaming a list from 'fTemps' to 'tempsInF'; highlights lines 14, 34, 47, and 48 due to changes renaming a list from 'cTemps' to 'tempsInC'; highlights line 18 due to changing a '!' character to an '=' character (i.e., a condition of inequality to a condition of equality); and highlights line 26 due to a change that calls a function (i.e., 'FtoC(tempF)') rather than expressly computing a Fahrenheit to Celsius conversion (i.e., '(tempF−32)*(5.0/9.0)'). It is noted that there could be other differences between the first and second versions of the document that are not shown in the diffing user interface300a, since the line(s) corresponding to those differences are not presently being presented at the diffing user interface300a. Additionally, it is noted that, while the diffing user interface300ahighlights entire lines, the diffing user interface300acould additionally, or alternatively, highlight particular characters that have changed on those lines. Notably, many of the changes shown inFIG.3Aare refactoring changes (e.g., renaming 'fTemps' to 'tempsInF'; renaming 'cTemps' to 'tempsInC'; and moving a temperature conversion into a function called FtoC) and are unlikely to affect program function. By highlighting all changes with the same visual treatment, a potentially important (and detrimental) structural change (i.e., changing the condition on line 18 from inequality to equality) may be easy to miss when reviewing the changes made in the second version of the document. To improve on the presentation of differences, the differencing component109also includes a pattern identification component203and a group identification component204. In embodiments, the pattern identification component203identifies, for each of one or more differences identified by the difference identification component202, a pattern that explains a transformation from a string in one version of a document to a corresponding string in a second version of the document. 
In some embodiments, the pattern identification component203identifies patterns for each difference identified by the difference identification component202, while in other embodiments the pattern identification component203only identifies patterns that would match two or more of the differences identified by the difference identification component202. In some embodiments, a pattern is a simple substitution. For example, referring toFIG.3A, a first pattern explaining the difference at line 13 may be a substitution of the string 'fTemps' in the version of the document presented in the left pane301with the string 'tempsInF' in the version of the document presented in the right pane302, and a second pattern explaining the difference at line 14 may be a substitution of the string 'cTemps' in the version of the document presented in the left pane301with the string 'tempsInC' in the version of the document presented in the right pane302. In other embodiments, a pattern is a search pattern, such as a regular expression or other domain-specific language, that includes wildcards, variables, and the like, and that is used by a string-searching algorithm for "find" or "find and replace" operations on strings. For example, a third pattern explaining the difference at line 26 may be a regular expression matching the expression '\((\w+)\s*-\s*32\)\s*\*\s*\(5\.0\s*/\s*9\.0\)' in the version of the document presented in the left pane301and matching the expression 'FtoC($1)' in the version of the document presented in the right pane302. Notably, by using search patterns, embodiments can match a single pattern to differences that are not identical. For example, the foregoing regular expression would also match a difference that operates on a variable labeled 'tempC' rather than 'tempF'. In embodiments, for a given pattern, the group identification component204identifies a subset of the differences identified by the difference identification component202that match the pattern. In embodiments, this subset is a group of two or more pattern differences that match that pattern. For example, in the context of the diffing user interface300a, the group identification component204may identify a first subset of pattern differences (e.g., including at least lines 13, 24, 40, 42, and 43) for the first pattern substituting 'fTemps' with 'tempsInF'; identify a second subset of pattern differences (e.g., including at least lines 14, 34, 47, and 48) for the second pattern substituting 'cTemps' with 'tempsInC'; and identify a third subset of pattern differences (e.g., including at least line 26, along with at least one other line not shown inFIG.3A) for the third pattern comprising the regular expression substituting the expression '\((\w+)\s*-\s*32\)\s*\*\s*\(5\.0\s*/\s*9\.0\)' with the expression 'FtoC($1)'. In some embodiments, the pattern identification component203generates one or more patterns based on receipt of user input identifying each side of a transformation (e.g., receipt of user input at diffing user interface300aselecting 'fTemps' at line 13 in left pane301and selecting 'tempsInF' at line 13 in right pane302). Additionally, or alternatively, in some embodiments the pattern identification component203automatically generates one or more patterns based on performing an analysis of the differences identified by the difference identification component202. 
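A minimal sketch of the two pattern kinds described above, and of grouping differences under one pattern, follows; it uses Python's re module with \1 in place of the $1 back-reference shown in the text, the example source lines are invented, and none of it is the patented implementation.

```python
# Illustrative only: a substitution pattern, a regular-expression pattern, and a
# helper that collects the subset of differences explained by a given pattern.

import re

SUBSTITUTION = (re.escape("fTemps"), "tempsInF")
REGEX_PATTERN = (r"\((\w+)\s*-\s*32\)\s*\*\s*\(5\.0\s*/\s*9\.0\)", r"FtoC(\1)")


def explains(pattern, old_line: str, new_line: str) -> bool:
    """True if applying the pattern to the old line yields the new line."""
    left, right = pattern
    return re.search(left, old_line) is not None and re.sub(left, right, old_line) == new_line


def group(pattern, differences):
    """Return the subset of differences (line, old, new) explained by the pattern."""
    return [d for d in differences if explains(pattern, d[1], d[2])]


diffs = [
    (13, "List<double> fTemps = new();", "List<double> tempsInF = new();"),
    (26, "cTemps.Add((tempF - 32) * (5.0 / 9.0));", "cTemps.Add(FtoC(tempF));"),
    (18, "while (line != null)", "while (line == null)"),
]
print([d[0] for d in group(SUBSTITUTION, diffs)])   # [13]
print([d[0] for d in group(REGEX_PATTERN, diffs)])  # [26]
```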
Additionally, or alternatively, in some embodiments the pattern identification component203identifies one or more patterns from a log112that specifies those patterns. In some embodiments, log112is generated based on prior operation of the differencing component109. In other embodiments, log112is received as part of a change proposal. In other embodiments, log112is generated by a language service of an IDE (e.g., as a user makes refactoring changes in the IDE). In any of these embodiments, the log112may be generated at the computer system101or be received from at least one of computer system(s)108. In embodiments, based on operation of the pattern identification component203and the group identification component204, the presentation component205interacts with the user interface component110to apply a different visual treatment to repetitive or pattern differences that are identified by the group identification component204than a visual treatment applied to other differences identified by the difference identification component202(e.g., non-repetitive or non-pattern differences). In embodiments, this different visual treatment operates to hide or otherwise deemphasize the pattern differences, as compared to non-pattern differences. For example,FIG.3Billustrates an example of a diffing user interface300bthat hides pattern differences between two versions of a document. Diffing user interface300bis identical to the diffing user interface300aofFIG.3A, except that only the difference at line 18 has been highlighted. Here, the presentation component205has applied one visual treatment (i.e., no visual highlight) to pattern differences that are being displayed—including lines 13, 24, 40, 42, and 43 from the first subset of pattern differences; lines 14, 34, 47, and 48 from the second subset of pattern differences; and line 26 from the third subset of pattern differences. The presentation component205has also applied another visual treatment (i.e., a visual highlight) to the non-pattern difference that is being displayed at line 18. Here, the pattern differences have been deemphasized (hidden, in this case, by not highlighting them), such that the non-pattern difference at line 18 stands out. This provides a cleaner, clearer, and/or more focused visual presentation of information than the more conventional presentation of diffing user interface300a. Notably, applying a different visual treatment to pattern differences than non-pattern differences can take any form. For example,FIG.3Cillustrates an example of a diffing user interface300cthat flags pattern differences. In diffing user interface300c, in addition to presenting pattern differences without a visual highlight, the presentation component205has also presented a flag in connection with each pattern difference. For example, diffing user interface300cincludes a flag303in connection with line 26, as well as similar flags in connection with other lines that correspond to a pattern difference. In some embodiments, the differencing component109operates to use partial pattern matching to identify potentially missed opportunities for making a repetitive (e.g., refactoring) change. For instance, previously described was a refactoring change (i.e., line 26) calling the function 'FtoC' rather than expressly computing a Fahrenheit to Celsius conversion (i.e., '(tempF−32)*(5.0/9.0)'). 
Suppose that the pattern identification component203had also identified a similar refactoring change calling the function 'CtoF' rather than expressly computing a Celsius to Fahrenheit conversion (e.g., 'tempC*(9.0/5.0)+32'). In embodiments, this may be based on differences not presently shown in diffing user interface300c(e.g., lines 1 to 10, or a line beyond line 48), based on log112, etc. In these embodiments, the pattern identification component203may identify a fourth pattern comprising a regular expression replacing the expression '(\w+)\s*\*\s*\(9\.0\s*/\s*5\.0\)\s*\+\s*32' with the expression 'CtoF($1)'. Here, the differencing component109can determine that there is a partial match to line 36; that is, the expression '(\w+)\s*\*\s*\(9\.0\s*/\s*5\.0\)\s*\+\s*32' matches to 'tempC*(9.0/5.0)+32' in line 36 of the left pane301, but the expression 'CtoF($1)' does not match to line 36 in the right pane302. As such, this may be a missed opportunity to have used a new 'CtoF' function. For example, it is possible that line 36 in the second version of the document could have been changed to 'Console.WriteLine(CtoF(tempC));'. In embodiments, based on an identification of potentially missed opportunities for making a repetitive (e.g., refactoring) change, the presentation component205provides an indication of those opportunities. For example,FIG.3Dillustrates an example of a diffing user interface300dthat flags partial matches. In diffing user interface300d, line 36 has been associated with a flag304(shown in the example as being a black flag, as opposed to the white flags used to indicate the presence of a pattern change) indicating the potentially missed refactoring opportunity. In some embodiments, diffing user interface300dprovides an option for a user to elect to make the change. In embodiments, the presentation component205provides one or more user interface controls relating to creating, removing, displaying, and interacting with pattern differences. These user interface controls can take a variety of forms, such as context menus, popups, panes, etc. To illustrate one example,FIG.3Eillustrates an example of a diffing user interface300ethat includes a pattern difference management control. Here, diffing user interface300eincludes a control305comprising a popup. Control305may be invoked in a variety of ways, such as by interacting with a flag (e.g., flag303or flag304), via a menu item, via a context menu, via a status bar item, and the like. In the example, the control305lists a rule created for each pattern. For example, control305includes a rule called 'fTemps' (e.g., corresponding to the first pattern discussed supra), a rule called 'tempC*(9.0/5.0)+32' (e.g., corresponding to the fourth pattern discussed supra), a rule called '(tempF−32)*(5.0/9.0)' (e.g., corresponding to the third pattern discussed supra), and a rule called 'cTemps' (e.g., corresponding to the second pattern discussed supra). In the example, the control305also lists how many differences are covered by that rule (i.e., how many differences are in a subset of differences matching the rule), and options for interacting with those rules. For example, each rule is associated with a filter button; in embodiments, selection of the filter button enables or disables using a different visual treatment for differences covered by that rule (e.g., to enable/disable the rule). 
Additionally, each rule is associated with a check mark button; in embodiments, selection of the checkmark button accepts all differences covered by that rule in a version control system, such as git, subversion, mercurial, and the like, and then hides those differences in the diffing user interface300e(e.g., by enabling the filter button). Although not shown, other functionality is also possible, such as a button that initiates creation of a comment that is associated with the rule (e.g., for inclusion in a change proposal), a button that deletes the rule, etc. The differencing component109is now further described in connection withFIG.4, which illustrates a flow chart of an example method400for distinguishing pattern differences between different versions of a document from non-pattern-differences. In embodiments, instructions for implementing method400are encoded as computer-executable instructions (e.g., differencing component109, user interface component110) stored on a computer program product (e.g., storage media104) that are executable by a processor (e.g., processor102) to cause a computer system (e.g., computer system101) to perform method400. The following discussion now refers to a number of methods and method acts. Although the method acts may be discussed in certain orders, or may be illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed. Referring toFIG.4, method400comprises an act401of identifying different versions of a document. As shown, act401includes and act401aof identifying a first version of a document, and an act401bof identifying a second version of the document. In an example, the document identification component201identifies the document111a(e.g., as shown in the left pane301ofFIGS.3A-3E) as a first version of document111, and identifies the document111n(e.g., as shown in the right pane302ofFIGS.3A-3E) as a second version of document111(e.g., based on accessing document111ndirectly, or by reconstructing document111nfrom a specification of differences). As shown, there is no ordering requirement between act401aand act401b; thus, these acts may be performed serially (in either order), or in parallel. Method400also comprises an act402of identifying a set of differences between the first and second versions. In some embodiments, act402comprises identifying a set of differences, the set of differences comprising a plurality of differences between the first version of the document and the second version of the document. In an example, the difference identification component202identifies a set of a plurality of differences between document111aand document111n, such as those differences discussed in connection withFIGS.3A-3E. In embodiments, the difference identification component202performs an analysis (e.g., based on edit distance) on document111aand document111nto determine the set of differences. In other embodiments, the difference identification component202determine the set of differences based on a specification of the set of differences (e.g., as part of a change proposal, as part of log112). Method400also comprises an act403of identifying a pattern explaining a difference. In some embodiments, act403comprises identifying a pattern explaining a transformation from a first string in the first version of the document to a second string in the second version of the document. 
In an example, the pattern identification component203identifies a first pattern explaining the difference at line 13 (e.g., a substitution of the string ‘fTemps’ with the string ‘tempsInF’), identifies a second pattern explaining the difference at line 14 (e.g., substitution of the string ‘cTemps’ with the string ‘tempsInC’), identifies a third pattern explaining the difference at line 26 (e.g., a regular expression substituting the expression ‘\((\w+)\s*−\s*32\)\s*\*\s*\(5\.0\s*/\s*9\.0\)’ with the expression ‘FtoC($1)’), and the like. As described, a pattern may comprise a substitution (e.g., the first and second patterns), or a search pattern such as a regular expression (e.g., the third pattern). Thus, in embodiments of act403, the pattern comprises at least one of a substitution or a search pattern. As described, the pattern identification component203may identify a pattern based on receipt of user input identifying each side of a transformation, based on analysis of the differences identified by the difference identification component202, and/or based on a log112that specifies those patterns. Thus, in embodiments of act403, the pattern is identified based on at least one of (i) receiving a user input identifying the first string and the second string, (ii) an automated pattern analysis between the first version of the document and the second version of the document, or (iii) reading one or more patterns from a log. As described, a log112may be received a part of a change proposal request or may be generated by a language service of an IDE (e.g., as a user makes refactoring changes in the IDE). Thus, in embodiments of act403, the pattern is identified based on the log, and the log is included in a change proposal, or is generated by a language service of an IDE based on one or more code refactoring changes. As shown, there is no ordering requirement between act402and act403; thus, these acts may be performed serially (in either order), or in parallel. Method400also comprises an act404of identifying a subset of differences that match the pattern. In some embodiments, act404comprises identifying a subset of differences, the subset of differences comprising a plurality of differences, from among the set of differences, which match the pattern. In an example, for a given pattern identified in act403, the group identification component204identifies—from among the set of differences identified in act402—a subset of differences as a plurality of pattern differences matching the pattern. For instance, the group identification component204may identify a first subset of pattern differences (e.g., including at least lines 13, 24, 40, 42, and 43) for the first pattern substituting ‘fTemps’ with ‘tempsInF’; may identify a second subset of pattern differences (e.g., including at least lines 14, 34, 47, and 48) for the second pattern substituting ‘cTemps’ with ‘tempsInC’; and may identify a third subset of pattern differences (e.g., including at least line 26, along with at least one other line not shown inFIGS.3A-3E) for the third pattern comprising the regular expression substituting the expression ‘\((\w+)\s*-\s*32\)\s*\*\s*\(5\.0\s*/\s*9\.0\)’ with the expression ‘FtoC($1)’. Method400also comprises an act405of visually distinguishing a pattern-change difference. In some embodiments, act405comprises, while presenting a user interface that visually highlights differences between the first version of the document and the second version of the document, applying visual treatments to differences. 
As shown, act405includes an act405aof applying a first visual treatment to a difference from the subset. In some embodiments, act405acomprises, based at least on a first difference of the set of differences being included in the subset of differences, applying a first visual treatment to the first difference. In an example, diffing user interface300bpresents each displayed pattern change difference without a highlighting, while diffing user interface300cand diffing user interface300dpresent each displayed pattern change difference with a flag. Thus, in some embodiments of act405a, the first visual treatment is one or more of a non-highlight or a flag. However, it will be appreciated by one of ordinary skill in the art that a variety of visualization techniques could be used for the first visual treatment. As shown, act405also includes an act405bof applying a second visual treatment to a difference outside of the subset. In some embodiments, act405bcomprises, based at least on a second difference of the set of differences being excluded from the subset of differences, applying a second visual treatment to the second difference, the second visual treatment being different than the first visual treatment. In embodiments, the first visual treatment is visually deemphasized compared to the second visual treatment. In an example, diffing user interfaces300b-300cpresent each displayed difference that is not a pattern change difference with highlighting. Thus, in some embodiments of act405bthe second visual treatment is a highlight. However, it will be appreciated by one of ordinary skill in the art that a variety of visualization techniques could be used for the second visual treatment. As shown, there is no ordering requirement between act405aand act405b; thus, these acts may be performed serially (in either order), or in parallel. Technical improvements and technical effects of method400include providing a cleaner, clearer, and/or more focused visual presentation of information than conventional document difference visualization techniques. For example, conventional document difference visualization techniques merely displayed all differences with the same type of visual treatment (e.g., highlighting) regardless of whether they were repetitive or non-repetitive. As described, in some embodiments the differencing component109operates to use partial pattern matching to identify potentially missed opportunities for making a repetitive (e.g., refactoring) change. For instance, as discussed in connection withFIG.3D, the differencing component109may determine that there is a partial match to line 36; that is, the expression ‘(\w+)\s*\*\s*\(9\.0\s*/\s*5 \.0\)\s*\+\s*32’ matches to ‘tempC*(9.0/5.0)+32’ in line 36 of the left pane301, but the expression ‘CtoF ($1)’ does not match to line 36 in the right pane302. As such, this may be a missed opportunity to use a new ‘CtoF’ function. Thus, in diffing user interface300d, the presentation component205associates line 36 with a flag304that indicates the potentially missed refactoring opportunity. Thus, in some embodiments, method400also comprises identifying a pairing of identical strings, between the first version of the document and the second version of the document, which partially match the pattern; and while presenting the user interface, applying a third visual treatment to at least one of the identical strings, the third visual treatment being different from the first visual treatment and the second visual treatment. 
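A compact, hedged sketch of acts 404-405 together with the partial-match case just described follows; the helper names, example patterns, and example source lines are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative only: pick a visual treatment per difference. A match to a pattern gets
# the first treatment, any other change gets the second, and an unchanged line that
# matches only the left-hand side of a pattern gets a third treatment as a possible
# missed (refactoring) change.

import re

PATTERNS = [
    (re.escape("fTemps"), "tempsInF"),
    (r"(\w+)\s*\*\s*\(9\.0\s*/\s*5\.0\)\s*\+\s*32", r"CtoF(\1)"),
]


def treatment(old_line: str, new_line: str) -> str:
    for left, right in PATTERNS:
        if re.search(left, old_line):
            if re.sub(left, right, old_line) == new_line:
                return "deemphasize"         # first visual treatment: pattern difference
            if old_line == new_line:
                return "flag-partial-match"  # third visual treatment: possible missed change
    if old_line != new_line:
        return "highlight"                   # second visual treatment: non-pattern difference
    return "none"                            # unchanged line, no treatment


print(treatment("List<double> fTemps;", "List<double> tempsInF;"))          # deemphasize
print(treatment("if (line != null)", "if (line == null)"))                  # highlight
print(treatment("Console.WriteLine(tempC * (9.0 / 5.0) + 32);",
                "Console.WriteLine(tempC * (9.0 / 5.0) + 32);"))            # flag-partial-match
```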
Here, technical improvements and technical effects of method400include providing automated assistance in identifying missed refactoring changes. As described, in some embodiments a log112describing a set of patterns is generated based on prior operation of the differencing component109. Thus, in some embodiments, method400also comprises storing the pattern to a log. As described, some embodiments include a user interface control (e.g., control305) that lists a rule created for each pattern and enables those rules to be interacted with (e.g., a filter that enables or disables using a different visual treatment for differences covered by that rule, a button that deletes the rule, etc.). Thus, some embodiments of method400also comprise presenting a user interface element that includes one or more rules that are each associated with a corresponding pattern, presence of each rule causing the user interface to apply the first visual treatment to one or more differences matching the corresponding pattern, and wherein the user interface element enables receipt of a user input to remove each rule. As described, some embodiments include a user interface control (e.g., a checkmark button within control305) that, when selected, accepts all differences covered by that rule in a version control system, such as git, subversion, mercurial, and the like. Thus, some embodiments of method400also comprise presenting a selectable user interface control that, when selected, initiates an approval of all differences in the subset of differences. Notably, this provides an improved user interface interaction, by enabling all differences in a group to be accepted with a single user input, rather than a different user input for each difference in the group. As described, some embodiments include a user interface control that initiates creation of a comment that is associated with the rule (e.g., for inclusion in a change proposal). Thus, some embodiments of method400also comprise associating a comment with the subset of differences. In some embodiments, the pattern identification component203generates a pattern (and/or rule) based on a user interaction accepting a difference, and then the presentation component205automatically hides all differences matching the pattern. Thus, some embodiments of method400also comprise receiving a user input accepting the first difference; and based on the user input, generating a rule for the pattern, the rule causing all differences in the subset of differences to be hidden while presenting the user interface. In some embodiments, this user input also initiates an approval of all differences in the subset of differences. Accordingly, the embodiments described herein identify repetitive differences between the content of different versions of a document, based on identifying patterns in those differences. These embodiments then hide or give a different visual treatment to these repetitive or "pattern" differences versus non-repetitive or "non-pattern" differences when presenting a set of differences between these versions of the document. As a result, non-repetitive changes stand out and can be easily reviewed. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above, or the order of the acts described above. 
Rather, the described features and acts are disclosed as example forms of implementing the claims. Embodiments of the present invention may comprise or utilize a special-purpose or general-purpose computer system (e.g., computer system101) that includes computer hardware, such as, for example, one or more processors (e.g., processor102) and system memory (e.g., memory103), as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions and/or data structures are computer storage media (e.g., storage media104). Computer-readable media that carry computer-executable instructions and/or data structures are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media. Computer storage media are physical storage media that store computer-executable instructions and/or data structures. Physical storage media include computer hardware, such as RAM, ROM, EEPROM, solid state drives (“SSDs”), flash memory, phase-change memory (“PCM”), optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage device(s) which can be used to store program code in the form of computer-executable instructions or data structures, which can be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. Transmission media can include a network and/or data links which can be used to carry program code in the form of computer-executable instructions or data structures, and which can be accessed by a general-purpose or special-purpose computer system. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer system, the computer system may view the connection as transmission media. Combinations of the above should also be included within the scope of computer-readable media. Further, upon reaching various computer system components, program code in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., network interface105), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media. 
Computer-executable instructions comprise, for example, instructions and data which, when executed at one or more processors, cause a general-purpose computer system, special-purpose computer system, or special-purpose processing device to perform a certain function or group of functions. Computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. As such, in a distributed system environment, a computer system may include a plurality of constituent computer systems. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Those skilled in the art will also appreciate that the invention may be practiced in a cloud computing environment. Cloud computing environments may be distributed, although this is not required. When distributed, cloud computing environments may be distributed internationally within an organization and/or have components possessed across multiple organizations. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). The definition of “cloud computing” is not limited to any of the other numerous advantages that can be obtained from such a model when properly deployed. A cloud computing model can be composed of various characteristics, such as on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud computing model may also come in the form of various service models such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). The cloud computing model may also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. Some embodiments, such as a cloud computing environment, may comprise a system that includes one or more hosts that are each capable of running one or more virtual machines. During operation, virtual machines emulate an operational computing system, supporting an operating system and perhaps one or more other applications as well. In some embodiments, each host includes a hypervisor that emulates virtual resources for the virtual machines using physical resources that are abstracted from view of the virtual machines. The hypervisor also provides proper isolation between the virtual machines. 
Thus, from the perspective of any given virtual machine, the hypervisor provides the illusion that the virtual machine is interfacing with a physical resource, even though the virtual machine only interfaces with the appearance (e.g., a virtual resource) of a physical resource. Examples of physical resources include processing capacity, memory, disk space, network bandwidth, media drives, and so forth. The present invention may be embodied in other specific forms without departing from its essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. When introducing elements in the appended claims, the articles “a,” “an,” “the,” and “said” are intended to mean there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Unless otherwise specified, the terms “set,” “superset,” and “subset” are intended to exclude an empty set, and thus “set” is defined as a non-empty set, “superset” is defined as a non-empty superset, and “subset” is defined as a non-empty subset. Unless otherwise specified, the term “subset” excludes the entirety of its superset (i.e., the superset contains at least one item not included in the subset). Unless otherwise specified, a “superset” can include at least one additional element, and a “subset” can exclude at least one element.
42,957
11861362
DETAILED DESCRIPTION The present disclosure relates to methods, apparatus, systems, and non-transitory computer-readable storage media for enabling software migration and modernization services of a cloud provider network to provide completion time forecasts for various types of migration and modernization actions performed by the services relative to users' software applications and systems. In some embodiments, a cloud provider network provides a software migration and modernization orchestration service that helps users orchestrate the use of various tools and services used to migrate and modernize software applications and associated systems (e.g., servers, databases, networks, etc.) running in users' on-premises environments to infrastructure provided by a cloud provider network. Depending on the technical characteristics of users' software applications and the selection of migration and modernization actions and action workflows to be performed, an amount of time needed to complete such actions can vary widely. According to embodiments described herein, migration and modernization services of the cloud provider network train and use machine learning (ML) models to forecast an amount of time needed to complete such actions based on historical action execution data collected by the services, thereby providing useful insights into complex migration and modernization actions and action workflows. Many cloud providers provide services that help users to migrate and modernize software applications located in their on-premises data centers or other computing environments to infrastructure provided by a cloud provider network. The migration and modernization of a given user's application might typically involve several related actions including, for example, collecting application profile data, analyzing the profile data to obtain migration and modernization recommendations, obtaining replication data associated with the applications, converting virtual machine (VM) images, creating containers, refactoring source code, and the like. The amount of time needed to perform these actions for an application depends on several factors such as, for example, a type of software application, a type of software application architecture used to implement the application, a size of collected snapshot data, network bandwidth available at the users' data center, an operating system and server versions and types, a number of storage volumes, a type of filesystem, a number of cloud services to be used, and so forth. While users may typically be provided with information indicating a duration of time used to perform such migration and modernization actions after the actions are completed, users generally lack information about how much time such actions are expected to take in advance. Among other challenges, this lack of information makes it difficult for users and migration and modernization systems to select and plan optimal migration and modernization action workflows and to obtain completion time information about in-progress actions and action workflows. These challenges, among others, are addressed by enabling software migration and modernization services to forecast an amount of time needed to complete various software migration and modernization actions. 
In some embodiments, a software migration or modernization service collects historical action metrics from data sources including users' computing environments, migration or modernization agents, and migration or modernization services and tools used to implement the migration and modernization actions. The collected historical action metrics are used to train ML-based models (e.g., linear regression models) or other types of models that can forecast an estimated action completion time based on the technical characteristics of a given application and the actions to be performed. In some embodiments, the action completion time forecasts can be used to identify optimal migration and modernization orchestration plans where several different orchestration plans are possible, to gain insight into estimated completion percentages for in-progress actions, action workflows, or orchestration plans, among other uses. The ability to obtain migration and modernization completion time forecasts for a given software application and to identify optimal actions, action workflows, and orchestration plans enables the development of more robust and resilient software applications, improves users' ability to modernize and migrate software applications to cloud-based execution environments, and enables software applications to use more efficient cloud-based computing resources to support their execution. FIG.1is a diagram illustrating an environment that enables software migration and modernization services of a cloud provider network to provide action completion time forecasts for various migration and modernization actions performed relative to users' software applications and systems according to some embodiments. A provider network100(or “cloud” provider network) provides users with the ability to utilize one or more of a variety of types of computing-related resources such as compute resources (e.g., executing virtual machine (VM) instances and/or containers, executing batch jobs, executing code without provisioning servers), data/storage resources (e.g., object storage, block-level storage, data archival storage, databases and database tables, etc.), network-related resources (e.g., configuring virtual networks including groups of compute resources, content delivery networks (CDNs), Domain Name Service (DNS)), application resources (e.g., databases, application build/deployment services), access policies or roles, identity policies or roles, machine images, routers and other data processing resources, etc. These and other computing resources may be provided as services, such as a hardware virtualization service102that can execute compute instances (e.g., VM instances104), a container service106that can execute containers (e.g., containers108), a database service110that provides databases (e.g., database112), and storage service(s)114that can store data objects, etc. The users (or “customers”) (e.g., a user116) of provider networks100may utilize one or more user accounts that are associated with a customer account, though these terms may be used somewhat interchangeably depending upon the context of use. Users may interact with a provider network100via an electronic device (e.g., electronic device(s)118) across one or more intermediate networks120(e.g., the internet) via one or more interface(s)128, such as through use of application programming interface (API) calls, via a console implemented as a website or application, etc. 
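As one concrete illustration of the linear-regression completion time models mentioned above, the following sketch trains a simple regression on a handful of hypothetical historical action records and then forecasts a completion time for a new migration action. The feature names, example values, and the use of scikit-learn are assumptions for illustration only and are not prescribed by the embodiments.

```python
# Minimal sketch, assuming scikit-learn is available and that historical
# action metrics have already been reduced to numeric features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical records: [snapshot_size_gb, network_mbps, num_volumes]
X_train = np.array([
    [100.0, 500.0, 2],
    [250.0, 200.0, 4],
    [50.0, 1000.0, 1],
    [400.0, 100.0, 6],
])
# Observed completion times, in minutes, for those past actions.
y_train = np.array([40.0, 180.0, 12.0, 420.0])

model = LinearRegression().fit(X_train, y_train)

# Forecast for a new server migration action described by its features.
new_action = np.array([[300.0, 300.0, 3]])
forecast_minutes = model.predict(new_action)[0]
print(f"forecasted completion time: {forecast_minutes:.0f} minutes")
```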
An API refers to an interface and/or communication protocol between a client and a server, such that if the client makes a request in a predefined format, the client should receive a response in a specific format or initiate a defined action. In the cloud provider network context, APIs provide a gateway for customers to access cloud infrastructure by allowing customers to obtain data from or cause actions within the cloud provider network, enabling the development of applications that interact with resources and services hosted in the cloud provider network. APIs can also enable different services of the cloud provider network to exchange data with one another. The interface(s) may be part of, or serve as a front-end to, a control plane130of the provider network100that includes “backend” services supporting and enabling the services that may be more directly offered to customers. For example, a cloud provider network (or just “cloud”) typically refers to a large pool of accessible virtualized computing resources (such as compute, storage, and networking resources, applications, and services). A cloud can provide convenient, on-demand network access to a shared pool of configurable computing resources that can be programmatically provisioned and released in response to customer commands. These resources can be dynamically provisioned and reconfigured to adjust to variable load. Cloud computing can thus be considered as both the applications delivered as services over a publicly accessible network (e.g., the Internet, a cellular communication network) and the hardware and software in cloud provider data centers that provide those services. A cloud provider network can be formed as a number of regions, where a region is a geographical area in which the cloud provider clusters data centers. Each region includes multiple (e.g., two or more) availability zones (AZs) connected to one another via a private high-speed network, for example a fiber communication connection. An AZ (also known as an availability domain, or simply a “zone”) provides an isolated failure domain including one or more data center facilities with separate power, separate networking, and separate cooling from those in another AZ. A data center refers to a physical building or enclosure that houses and provides power and cooling to servers of the cloud provider network. Preferably, AZs within a region are positioned far enough away from one another so that a natural disaster (or other failure-inducing event) should not affect or take more than one AZ offline at the same time. Customers can connect to AZ of the cloud provider network via a publicly accessible network (e.g., the Internet, a cellular communication network), e.g., by way of a transit center (TC). TCs are the primary backbone locations linking customers to the cloud provider network and may be collocated at other network provider facilities (e.g., Internet service providers (ISPs), telecommunications providers) and securely connected (e.g., via a VPN or direct connection) to the AZs. Each region can operate two or more TCs for redundancy. Regions are connected to a global network which includes private networking infrastructure (e.g., fiber connections controlled by the cloud provider) connecting each region to at least one other region. The cloud provider network may deliver content from points of presence (or “POPs”) outside of, but networked with, these regions by way of edge locations and regional edge cache servers. 
This compartmentalization and geographic distribution of computing hardware enables the cloud provider network to provide low-latency resource access to customers on a global scale with a high degree of fault tolerance and stability. Generally, the traffic and operations of a provider network may broadly be subdivided into two categories: control plane operations carried over a logical control plane and data plane operations carried over a logical data plane. While the data plane represents the movement of user data through the distributed computing system, the control plane represents the movement of control signals through the distributed computing system. The control plane generally includes one or more control plane components distributed across and implemented by one or more control servers. Control plane traffic generally includes administrative operations, such as system configuration and management (e.g., resource placement, hardware capacity management, diagnostic monitoring, system state information). The data plane includes customer resources that are implemented on the provider network (e.g., computing instances, containers, block storage volumes, databases, file storage). Data plane traffic generally includes non-administrative operations such as transferring customer data to and from the customer resources. The control plane components are typically implemented on a separate set of servers from the data plane servers, and control plane traffic and data plane traffic may be sent over separate/distinct networks. To provide these and other computing resource services, provider networks100often rely upon virtualization techniques. For example, virtualization technologies may be used to provide users the ability to control or utilize compute resources (e.g., a “compute instance” such as a VM using a guest operating system (O/S) that operates using a hypervisor that may or may not further operate on top of an underlying host O/S, a container that may or may not operate in a VM, a compute instance that can execute on “bare metal” hardware without an underlying hypervisor), where one or multiple compute resources can be implemented using a single electronic device. Thus, a user may directly utilize a compute resource (e.g., provided by a hardware virtualization service) hosted by the provider network to perform a variety of computing tasks. Additionally, or alternatively, a user may indirectly utilize a compute resource by submitting code to be executed by the provider network (e.g., via an on-demand code execution service), which in turn utilizes one or more compute resources to execute the code—typically without the user having any control of or knowledge of the underlying compute instance(s) involved. As indicated above, a cloud provider network100typically provides a wide variety of computing-related services. For example, the hardware virtualization service102(referred to in various implementations as an elastic compute service, a virtual machines service, a computing cloud service, a compute engine, or a cloud compute service) enables users of the provider network100to provision and manage compute resources such as virtual machine instances. 
Virtual machine technology can use one physical server to run the equivalent of many servers (each of which is called a virtual machine), for example using a hypervisor, which may run at least on an offload card of the server (e.g., a card connected via PCI or PCIe to the physical CPUs) and other components of the virtualization host may be used for some virtualization management components. Such an offload card of the host can include one or more CPUs that are not available to customer instances, but rather are dedicated to instance management tasks such as virtual machine management (e.g., a hypervisor), input/output virtualization to network-attached storage volumes, local migration management tasks, instance health monitoring, and the like). Virtual machines are commonly referred to as compute instances or simply “instances.” As used herein, provisioning a virtual compute instance generally includes reserving resources (e.g., computational and memory resources) of an underlying physical compute instance for the client (e.g., from a pool of available physical compute instances and other resources), installing or launching required software (e.g., an operating system), and making the virtual compute instance available to the client for performing tasks specified by the client. The container service106can be a container orchestration and management service (referred to in various implementations as a container service, cloud container service, container engine, or container cloud service) that allows users of the cloud provider network to instantiate and manage containers. In some embodiments the container service106may be a Kubernetes-based container orchestration and management service (referred to in various implementations as a container service for Kubernetes, Azure Kubernetes service, IBM cloud Kubernetes service, Kubernetes engine, or container engine for Kubernetes). A container, as referred to herein, packages up code and all its dependencies so an application (also referred to as a task, pod, or cluster in various container services) can run quickly and reliably from one computing environment to another. A container image is a standalone, executable package of software that includes everything needed to run an application process: code, runtime, system tools, system libraries and settings. Container images become containers at runtime. Containers are thus an abstraction of the application layer (meaning that each container simulates a different software application process). Though each container runs isolated processes, multiple containers can share a common operating system, for example by being launched within the same virtual machine. In contrast, virtual machines are an abstraction of the hardware layer (meaning that each virtual machine simulates a physical machine that can run software). While multiple virtual machines can run on one physical machine, each virtual machine typically has its own copy of an operating system, as well as the applications and their related files, libraries, and dependencies. Some containers can be run on instances that are running a container agent, and some containers can be run on bare-metal servers, or on an offload card of a server. 
As indicated, users responsible for the development and administration of various types of software applications or workloads may wish to migrate their enterprise applications from on-premises environments (e.g., from a user computing environment132) to a cloud provider network100to take advantage of the performance, scalability, and cost advantages of cloud provider networks. The software applications and systems that users desire to migrate typically consist of multiple individual components like compute (e.g., VM instances, containers, database servers, etc.), network, and storage components. Users currently often use many different services and tools to migrate these various types of components to cloud. For example, users might use a server migration service122to migrate compute instances, a database migration service124to migrate databases, use one or more backup services, and use one or more modernization services126to modernize their application for the cloud in various ways. Users typically coordinate the migration and modernization actions performed by each such service individually, manually configure and launch the selected migration and modernization actions, and run tests to validate the results. These manual processes are often further repeated at regular intervals during application's migration life cycle until the user is satisfied with the migration. Migration actions refer to steps that are taken to move the code and data of these applications from the customer's computing devices (e.g., on-premise servers) to cloud infrastructure. A successful migration can involve not only moving the application and data to the cloud infrastructure, but also modernizing the application. Modernization actions refer to the actions taken for existing legacy applications to modernize their platform infrastructure, internal architecture, and/or features. Examples include decomposing monolithic applications into microservices, restructuring an application to use cloud services, and bringing applications into cloud architecture and release patterns such as DevOps and CI/CD. Accordingly, some orchestration plans may include both migration actions and modernization actions, sometimes referred to herein collectively as cloud migration actions. In some embodiments, to alleviate users from managing many of the processes described above and others, a software migration and modernization orchestration service134is provided to help orchestrate and automate migration and modernization actions and action workflows. At a high level, users can use a software migration and modernization orchestration service134to identify a software application of interest (e.g., a software application136in the user's computing environment132), provide input indicating how the user desires for the application to be migrated and modernized, validated, launched, etc., and have provided one or more possible migration and modernization orchestration plans to accomplish the user's goals. For example, based on the input provided by the user and additional information optionally collected about the user's application from the user's environment, the software migration and modernization orchestration service134can generate migration and modernization orchestration plans including a set of actions or action workflows performed by associated migration and modernization services (e.g., a server migration service122, database migration service124, modernization service(s)126, etc.) 
and implement the orchestration plans on an individual or recurring basis. As part of the orchestration services described above or separately, in some embodiments, the migration and modernization services of a cloud provider network include various discovery services, assessment services, and transformation services, collectively aimed at helping users to discover and use recommended software migration and modernization workflows for their software applications. The discovery services, for example, may provide downloadable migration and modernization agent(s)138and other tools that can be used to generate an inventory of software applications in a user's computing environment132and to collect application profile data (e.g., application profile data142) for software applications undergoing migration and modernization processes. In some embodiments, an assessment services enable users and applications to obtain various types of application migration and modernization assessments and recommendations, e.g., based on application profile data collected by the discovery services. The recommendations provided by an assessment service can include, for example, recommended migration and modernization strategies, migration and modernization tools, estimated migration and modernization costs, recommended software architectures for a software application, etc. The transformation services provide various types of migration and modernization tools to assist with performing migration modernization processes, e.g., to assist users with obtaining replication data140in their own computing environments, containerizing applications, refactoring an application based on generated source code recommendations, deploying a modernized application using one or more other services of the cloud provider network100, and the like. In some embodiments, a user may initially access the software migration and modernization orchestration service134to obtain information about various available migration and modernization services and to download one or more agents138. In some embodiments, users can obtain one or more agent(s)138by downloading the agents via a web-based console or other interface and installing the agents within a user's computing environment132to assist with migration modernization-related processes. For example, in some embodiments, an agent138collects and generates application profile data142based on application artifacts (e.g., source code or other types of application artifacts146such as bytecode, Common Intermediate Language (CIL) code, etc., used to implement a software application136and possibly stored in a version control system148) and monitors the execution of the software application136. As described in more detail herein, once such application profile data142is obtained, in some embodiments, the data can be used as input to one or more completion time models (e.g., one or more of completion time models152A-152N) to identify a forecasted amount of time to complete various migration and modernization action involving the application. In some embodiments, these completion time forecasts156can be provided as part of a migration and modernization orchestration status report154or used as part of the other migration or modernization-related actions. In some embodiments, the agents138are installed on servers or other electronic devices162within a user's on-premises computing environment132(e.g., on physical servers or VMs). 
Users (e.g., a user116) can use a computing device118to interact with an agent138via a command line interface (CLI), graphical user interface (GUI), or any other type of interface provided by an agent. Although referred to herein as an “agent,” in general, a migration or modernization agent138can include a software agent, a standalone application, a server, or any other type of software application, and may be accessed using any of a GUI, CLI, web-based API, or any other type of interface. In some embodiments, instead of using an agent138, users can cause the collection of application profile data142using other software tools or processes and can upload the data using an API provided by the software migration and modernization orchestration service134or other service. For example, some of the migration or modernization services may be “agentless” and enable users to perform actions without installing an agent locally in their on-premises environment. As part of assessing a user's computing environment using agent(s)138, in some embodiments, a user may invoke a command used to generate an inventory of applications located within the user's computing environment132(e.g., including software application136in the example ofFIG.1). In some embodiments, instead of interacting directly with an agent138, the user116interacts with a web-based console or other interface provided by the software migration and modernization orchestration service134. For example, at circle “1” inFIG.1, a user may cause one or more migration and modernization request(s)158to be sent to the software migration and modernization orchestration service134. The requests, for example, may provide an indication of migration and modernization actions that the user desires to perform relative to a software application136in the user's computing environment132. The software migration and modernization orchestration service134may then in turn instruct an agent138or other software agents running in the user computing environment132to perform some or all the operations described in reference toFIG.1such as, for example, identifying an inventory of applications, obtaining application profile data142for one or more selected applications, and performing various application migration and modernization analyses. In other embodiments, the request(s)158may be sent directly to the agents, may be sent after an agent138has first created an inventory of applications in the user's computing environment, or in any other order. In some embodiments, once a software application of interest is identified based on the inventory processes described above or otherwise, at circle “3,” the user may then execute a command requesting to profile and analyze the identified application136, or such processes may execute automatically. In some embodiments, one or more agents138may then analyze the identified software application136and generate application profile data142containing profiling and analysis results, e.g., based on static analyses of source code or other artifacts associated with the application, dynamic analyses of the application's execution, among other possible types of analyses. 
Example application attributes or features identified by such analyses can include an operating system type associated with the application, an operating system version, a process identifier, application type, a programming language used to develop the application, a location at which source code for the application is stored (e.g., a source code repository location), application server type and version, database type and version, integrations with other systems, configuration information, architecture type (e.g., monolithic, 3-tier, microservice-based, etc.), application scale (e.g., number of servers, data storage size, source code size), application importance, identified anti-patterns and cloud anti-patterns associated with the application, application dependencies (e.g., on third party software and libraries, other libraries and files, execution environments), application relationships (e.g., network connections, inter-process communications (IPC), remote procedure calls (RPC)), data flow and network throughput, a number of storage volumes, a filesystem type, and the like. In some embodiments, the application profile data142includes identified “subunits” of the application and dependency and performance data related to the identified subunits. For example, the dependency and performance data may include data describing dependency relations among packages, classes, methods, etc., as well as information about CPU usage, memory usage, etc., for each of the identified subunits. This application profile data142can be used by an agent138or software migration and modernization orchestration service134, for example, to identify migration and modernization recommendations. According to embodiments described herein, features found in the application profile data can also be used as input to one or more completion time model(s) to obtain completion time forecasts for various candidate migration and modernization actions and action workflows to be performed. In some embodiments, either automatically by a migration or modernization agent138or with input from a user, at circle “4,” an agent138sends the application profile data142to the software migration and modernization orchestration service134via a secure communication channel. In some embodiments, the software migration and modernization orchestration service134stores the obtained data in a storage location associated with the user, e.g., in a storage bucket of a storage service114or in any other storage resource. In some embodiments, the software migration and modernization orchestration service134stores the application profile data142in a storage resource using a service-linked account configured by the user. In some embodiments, at circle “5,” the software migration and modernization orchestration service134optionally identifies one or more recommended migration and modernization orchestration plans and, in association with one or more orchestration plans (or actions or action workflows comprising an orchestration plan), generates completion time forecast(s)156using completion time models. As shown inFIG.1, each of the various migration and modernization services may individually collect historical action metrics (e.g., historical action metrics150A-150N) reflecting actions completed by each service in the past and, using the metrics data, train and use respective completion time models to generate action completion time forecast(s) as needed. 
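By way of illustration only, application profile data of the kind described above might be represented as a simple record before being converted into model features. The field names and values below are hypothetical and do not correspond to any specific schema used by the described services.

```python
from dataclasses import dataclass, field

# A hypothetical, simplified view of application profile data142.
@dataclass
class ApplicationProfile:
    os_type: str
    os_version: str
    programming_language: str
    architecture_type: str          # e.g., "monolithic", "microservice-based"
    num_servers: int
    storage_volumes: int
    filesystem_type: str
    data_size_gb: float
    dependencies: list = field(default_factory=list)

profile = ApplicationProfile(
    os_type="linux",
    os_version="ubuntu-20.04",
    programming_language="java",
    architecture_type="monolithic",
    num_servers=3,
    storage_volumes=5,
    filesystem_type="ext4",
    data_size_gb=320.0,
    dependencies=["postgresql", "redis"],
)
print(profile)
```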
For example, as users use a server migration service122over time to perform server migrations and associated actions, the service can collect information indicating, for each action, features of the server(s) migrated (e.g., a type of server, a type of operating system, etc.), features of the migration environment (e.g., an amount of resources devoted to the agent running in the user's computing environment, an amount of network bandwidth available, etc.), and a duration of time needed to complete the action. Other services collect similar information relevant to the actions provided by those services and monitored past action executions. FIG.2is a diagram illustrating the collection of migration and modernization action training data used to train ML-based models or other models to enable the generation of action completion time forecasts according to some embodiments. As shown inFIG.2, in some embodiments, the training data204collected by an example database migration service124can include data derived from application profile data206generated by migration agents212running in various users' on-premises environments214and historical action metrics generated by migration actions200, e.g., as part of migrating databases, among other possible data sources. In general, the training data includes information about users' computing resources to be migrated or modernized (including, e.g., snapshot sizes, whether a snapshot is a base snapshot or an incremental snapshot, network bandwidth available in a user's computing environment, an operating system type, an operating system version, a number of storage volumes to be migrated, a type of boot loader, a type of file system, a server workload type, an indication of when the action was performed, a region of the cloud provider network100associated with the action, a number of servers to be migrated in parallel, resource information associated with the agent212, etc.) and information indicating an amount of time to complete various cloud migration actions in association with those computing resources. In some embodiments, a service may store the training data204in one or more data stores202accessible to a model training and execution system210. In some embodiments, the training data204is collected on continuous basis and used to continuously train and refine an action completion time model as additional migration and modernization actions are performed. In some embodiments, a model training and execution system210or other component optionally performs various data pre-processing operations on the training data204. In some embodiments, pre-processing operations can also include organizing the data in various ways, cleaning or transforming the data, deduplicating data entries, or any other operations to aid in the model training processes. In some embodiments, the optionally preprocessed data is stored in a modernization training data store. In some embodiments, the modernization training data store can be any type of data storage managed either by the database migration service124or by another service or application accessible to the database migration service (e.g., by an object storage service of the provider network100). In some embodiments, users can interact with the model training and execution system210via a frontend of the model training and execution system210. 
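The following sketch illustrates one plausible way such collected records could be flattened into numeric training rows, with categorical fields one-hot encoded and the observed duration kept as the training target. The record layout, category vocabulary, and encoding choice are assumptions for illustration, not the services' actual pre-processing.

```python
# Hypothetical historical action metric records collected by a migration service.
records = [
    {"os_type": "linux", "snapshot_gb": 120, "incremental": False,
     "bandwidth_mbps": 400, "duration_min": 55},
    {"os_type": "windows", "snapshot_gb": 80, "incremental": True,
     "bandwidth_mbps": 800, "duration_min": 15},
    {"os_type": "linux", "snapshot_gb": 300, "incremental": False,
     "bandwidth_mbps": 200, "duration_min": 210},
]

OS_TYPES = ["linux", "windows"]  # assumed category vocabulary

def to_row(rec):
    """Flatten one record into (features, target) with a one-hot OS field."""
    one_hot = [1.0 if rec["os_type"] == t else 0.0 for t in OS_TYPES]
    features = one_hot + [
        float(rec["snapshot_gb"]),
        1.0 if rec["incremental"] else 0.0,
        float(rec["bandwidth_mbps"]),
    ]
    return features, float(rec["duration_min"])

X, y = zip(*(to_row(r) for r in records))
print(X[0], y[0])
```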
For example, a user device can provide a training request that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (for example, an address or location of input data), one or more hyperparameter values (for example, values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, and so forth), and/or information describing the computing machine on which to train a machine learning model (for example, a graphical processing unit (GPU) instance type, a central processing unit (CPU) instance type, an amount of memory to allocate, a type of virtual machine instance to use for training, and so forth). In some embodiments, a container image can include one or more layers, where each layer represents an executable instruction. Some or all the executable instructions together represent an algorithm that defines a ML model. The executable instructions (for example, the algorithm) can be written in any programming language (for example, Python, Ruby, C++, Java, etc.). In some embodiments, the algorithm is pre-generated and obtained by a user, via the user device, from an algorithm repository. In some embodiments, the algorithm is completely user-generated or partially user-generated (for example, user-provided code modifies or configures existing algorithmic code). In some embodiments, instead of providing a container image (or identifier thereof), the user device may provide an algorithm written in any programming language. The model training and execution system210may then package the algorithm into a container (optionally with other code, such as a “base” ML algorithm supplemented with user-provided code) that is eventually loaded into a virtual machine instance for training a machine learning model. In some embodiments, the model training and execution system210can handle the acquisition and configuration of compute capacity (for example, containers, instances, etc., which are described in greater detail below) based on the information describing the computing machine on which to train a ML model provided by the user device. The model training and execution system210can then train ML models using the compute capacity. To perform the ML model training, in some embodiments, computing resources execute instructions according to hyperparameter values included in the training request. As an illustrative example, a model training and execution system210trains a ML model by identifying values for certain parameters (for example, coefficients, weights, centroids, etc.). The identified values depend on hyperparameters that define how the training is performed. Thus, the computing resources can execute the executable instructions to initiate a ML model training process, where the training process is run using the hyperparameter values included in the training request. Execution can include applying the obtained training data as input parameters to some or all the instructions being executed. In some embodiments, the model training processes generate model data. The model data may be stored, for example, in one or more data files in a model data store and can include characteristics of the ML model being trained, such as a number of layers in the machine learning model, hyperparameters of the machine learning model, coefficients of the machine learning model, weights of the machine learning model, and/or the like. 
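A training request of the general shape described above might, purely as an illustration, be expressed as a small structure like the following. The keys, values, image reference, and data location are hypothetical placeholders and are not an actual API of any particular service.

```python
import json

# Hypothetical training request: container image, input data location,
# hyperparameter values, and the compute configuration for training.
training_request = {
    "training_image": "registry.example.com/completion-time-trainer:latest",
    "input_data": "s3://example-bucket/historical-action-metrics/",
    "hyperparameters": {
        "algorithm": "linear_regression",
        "parallel_jobs": 2,
    },
    "resource_config": {
        "instance_type": "gpu.large",
        "instance_count": 1,
        "memory_gb": 16,
    },
}
print(json.dumps(training_request, indent=2))
```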
In particular, the generated model data includes values for the characteristics that define the ML model being trained. As shown inFIG.2, one or more action completion time model(s)152B may be generated for the database migration service124that enable the service to generate forecasts of an amount of time needed to perform various actions based on input specifying various application and migration environment features, as described above. In some embodiments, the model training and execution system210further includes a model execution system (which may be part of or separate from the model training system), including a single physical computing device or multiple physical computing devices that are interconnected using one or more computing networks (not shown), where the physical computing device(s) host one or more virtual machine instances. The model training and execution system210can handle the acquisition and configuration of compute capacity (for example, containers, instances, etc.) based on requests to execute trained ML models. The model training and execution system210can then execute ML models using the compute capacity. In some embodiments, a request to execute a ML model is transmitted to the model training and execution system210, where the request includes an input to a ML model (for example, a set of input data). The model training and execution system210or another system executes the code in response to receiving the execution request. In particular, execution of the code causes the executable instructions in the code corresponding to the algorithm to read the model data file (e.g., model data obtained from a model data store), use the input included in the execution request as an input parameter, and generate a corresponding output. As an illustrative example, the algorithm can include coefficients, weights, layers, cluster centroids, and/or the like. The executable instructions in the code corresponding to the algorithm can read the model data file to determine values for the coefficients, weights, layers, cluster centroids, and/or the like. The executable instructions can include input parameters, and the input included in the execution request can be supplied as the input parameters. With the ML model characteristics and the input parameters provided, execution of the executable instructions can be completed resulting in an output. In some embodiments, the output is stored in a data store. Alternatively or in addition, the model training and execution system210transmits the output to a user device that submitted the execution request. In some embodiments, the operating environment supports many different types of machine learning models, such as classification models, multi arm bandit models, reinforcement learning models, ensemble machine learning models, deep learning models, and/or the like. As indicated above, in some embodiments, at circle “5” inFIG.1, a software migration and modernization orchestration service134generates migration and modernization orchestration plan recommendations (sometimes also referred to as migration orchestration plans or modernization orchestration plans) based on user input indicating types of migration and modernization actions to be performed, the application profile data142associated with an application to be migrated and modernized, among other possible input. 
In some embodiments, a software migration and modernization ontology model is defined and used to describe migration and modernization orchestration plans (including associated actions and action workflows), although generally other types of data structures and models can be used such as decision trees, text-based models, database models, machine learning (ML) based models, etc. In general, the software migration and modernization knowledgebase includes data indicating relevant features and constraints associated with various candidate migration and modernization orchestration workflows and associated actions and action workflows. FIG.3is a diagram illustrating the use of a software migration and modernization knowledgebase to identify suitable migration and modernization workflows for a software application according to some embodiments. In some embodiments, the knowledgebase304includes one or more data models describing the features and constraints of various migration and modernization orchestration plans, actions, and action workflows. For example, the data models can include one or more ontology models304A, where an ontology model enables the software migration and modernization domain and its associated resources to become semantic, or self-explanatory, to an assessment engine of a software migration and modernization orchestration service134and possibly other tools. The ability to use such ontologies, for example, increases integration, querying, and interoperability of the service. For example, the migration and modernization knowledgebase304is flexibly defined, where migration and modernization models stored in the knowledgebase can be modified by adding, removing, or modifying a data model, rather than hard coding static relationships and restrictions between the resources into an application. In some embodiments, a migration and modernization knowledgebase304includes one or more text-based models304B (e.g., text-based descriptions of the features and constraints associated with various types of software architectures), decision tree-based models304C, or any other type of data models. A decision tree-based model, for example, includes nodes and edges that form a flowchart-like structure, where the paths from a root node to leaf nodes represent a set of classification rules for candidate migration and modernization orchestration plans, actions, and action workflows. In this example, a decision tree-based model can be queried to test attributes of the software application being analyzed against various conditions defined by the model and representing the features and constraints of candidate migration and modernization orchestration plans, actions, and action workflows (e.g., server migration actions, containerization actions, refactoring actions, etc.). As indicated above, in some embodiments, a migration and modernization knowledgebase304is a repository of information about the software migration and modernization domain, where the information is defined using a modernization ontology and associated modernization data models (e.g., instances of the modernization ontology used to describe migration and modernization strategy, tools, or other information). In some embodiments, a modernization ontology is a single interconnected ontology, or may be a collection of related ontologies that may not be directly connected to one another. 
In some embodiments, the migration and modernization ontology is specified at least in part using the Resource Description Framework (RDF), RDF Schema (RDFS), Web Ontology Language (OWL), or any other type of metadata data model. These metadata data models generally can be used to conceptually describe and model the migration and modernization information including, for example, migration and modernization tools, tool features and constraints, development pattern and anti-pattern information (including various types of cloud anti-patterns), software architectures, and so forth. In some embodiments, a software migration and modernization knowledgebase304is stored in a database or other data repository, where the data repository may be managed by the software migration and modernization orchestration service134directly or by another service of a cloud provider network100. For example, depending on the format of the models, the models may be stored in any of a storage service114, a database service110, graph database service302, or any other type of storage resource accessible to the assessment services. In some embodiments, the determination of whether a particular migration or modernization orchestration plan, action, or action workflow is suitable for a given application (e.g., responsive to one or more migration or modernization recommendation requests308) is based at least in part on querying a data model describing migration and modernization actions with values from the application profile data142collected for an application. The orchestration plans, actions, and action workflows can include sets of actions provided by any set of migration and modernization services provided by the provider network100. For example, if the application profile data142indicates that an application136is implemented using the C# programming language, the assessment service may query the software migration and modernization data models to identify (or to rule out) software cloud migration actions provided by various services that support the use of the identified language. Similarly, the software migration and modernization orchestration service134may query the data models to determine whether the amount of memory used by an application is suitable for one or more actions, whether the stateful or stateless nature of an application is supported by one or more actions, whether an application's use of a local filesystem, database, in-memory calls, share-memory interface calls, failure handling, or any other characteristics of an application defined in the application profile data142are suitable for a given migration or modernization orchestration plan, action, or action workflow based on the features and constraints defined in the data models. In some embodiments, the results from querying the data models can be provided as migration or modernization orchestration plan recommendations310for display in status reports or used by other processes. In some embodiments, the identification of recommended or requested migration or modernization orchestration plans, actions, or action workflows includes obtaining associated completion time forecasts. For example, the software migration and modernization orchestration service134can provide at least a portion of application profile data142and identified orchestration plans, actions, or action workflows to be performed as input to relevant completion time model(s)152A-152N to obtain completion time forecasts. 
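As a simplified illustration of querying such constraints, the sketch below filters candidate migration and modernization actions by comparing application profile values against per-action rules. The candidate actions, constraint fields, and thresholds are hypothetical and stand in for the richer ontology, text-based, and decision tree-based models described above.

```python
# Hypothetical candidate actions with feature/constraint metadata.
CANDIDATE_ACTIONS = [
    {"name": "containerize", "languages": {"java", "c#", "python"}, "max_memory_gb": 8},
    {"name": "refactor_to_microservices", "languages": {"java", "c#"}, "max_memory_gb": 64},
    {"name": "rehost_vm", "languages": None, "max_memory_gb": None},  # no constraints
]

def suitable_actions(profile: dict) -> list:
    """Return the candidate actions whose constraints the profile satisfies."""
    results = []
    for action in CANDIDATE_ACTIONS:
        langs = action["languages"]
        if langs is not None and profile["programming_language"] not in langs:
            continue
        max_mem = action["max_memory_gb"]
        if max_mem is not None and profile["memory_gb"] > max_mem:
            continue
        results.append(action["name"])
    return results

profile = {"programming_language": "c#", "memory_gb": 12}
print(suitable_actions(profile))   # ['refactor_to_microservices', 'rehost_vm']
```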
In other embodiments, users can directly request such forecasts from the individual services for specified actions. For example, if it is desired to obtain a forecast of an estimated duration of time needed to migrate three servers and two databases based on profile data obtained about the servers and databases, such information can be provided as input to the completion time models of relevant server migration service122and database migration service124. As indicated above, each service can use a model training and execution system to obtain the requested forecasts using the models trained by each service. As indicated above, in some cases it may be desirable to obtain a completion time forecast for an orchestration plan comprising multiple actions or action workflows to be performed by any number of separate services. In some embodiments, the time forecasts obtained for such orchestration plans can include forecasts obtained for each action of an orchestration plan individually or such forecasts may be aggregated to obtain a total forecast for an orchestration plan as a whole. In some embodiments, an aggregate forecast for an orchestration plan may be calculated in part based on determined dependencies among the actions of the orchestration plan. For example, the software migration and modernization orchestration service134may determine, for an orchestration plan, which actions are to be performed sequentially (e.g., because the output of an action is used as input to a second action) and which actions can be performed in parallel. For example, if an orchestration plan includes actions A, B, and C, and actions B and C can be performed in parallel but only once action A completes, then an aggregated forecast for the orchestration plan can be obtained by adding a forecasted time for action A to the longer of the forecasted times for actions B and C. In some embodiments, once the migration and modernization orchestration plan recommendations and associated completion time forecasts are obtained, at circle “6,” the software migration and modernization orchestration service134provides access to a migration and modernization status report154. For example, the status report154may be provided in web-based console or other interface that displays the orchestration plan recommendation information and associated completion time forecasts156, among other possible information.FIG.4illustrates an example graphical interface displaying example migration and modernization orchestration plan recommendations, associated completion time forecasts, and an example interface element indicating an estimated percentage completion for an in-progress orchestration plan according to some embodiments. As shown, the report interface400includes a migration and modernization status report402which, for example, may have been generated responsive to a user request to perform one or more cloud migration actions relative to a software application in the user's computing environment. The status report402, for example, includes profile information about the application and one or more proposed orchestration plans. 
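The A/B/C example above can be expressed directly as a small dependency-aware aggregation: each action's earliest finish time is its own forecast plus the latest finish among its prerequisites, and the plan-level forecast is the maximum finish time across all actions. The action names, forecast values, and dependency map below are illustrative only.

```python
# Hypothetical per-action forecasts (in minutes) and dependencies.
forecasts = {"A": 30, "B": 45, "C": 20}
depends_on = {"A": [], "B": ["A"], "C": ["A"]}   # B and C both wait for A

def plan_forecast(forecasts: dict, depends_on: dict) -> float:
    """Aggregate per-action forecasts into a plan-level forecast by
    computing each action's earliest finish time (critical path)."""
    finish = {}

    def earliest_finish(action):
        if action not in finish:
            prereqs = depends_on.get(action, [])
            start = max((earliest_finish(p) for p in prereqs), default=0)
            finish[action] = start + forecasts[action]
        return finish[action]

    return max(earliest_finish(a) for a in forecasts)

print(plan_forecast(forecasts, depends_on))   # 30 + max(45, 20) = 75
```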
In some embodiments, a status report402includes the display of a completion time forecast404which, in this example, indicates an estimated duration of time needed to complete the migration and modernization orchestration plan named “Plan 2.” As indicated in the status report402, this example orchestration plan involves the use of several services including a server migration service, a database migration service, and a containerization service. The indicated completion time forecast404thus indicates an expected duration of time needed for each of the services to perform their respective actions or action workflows on the user's application. In some embodiments, a status report402may further display individual completion time forecasts for each of the actions involved in the orchestration plan. In some embodiments, the status report402further illustrates the display of a completion progress indicator406indicating an estimated completion percentage of a migration and modernization orchestration plan that is currently in-progress. For example, execution of the orchestration plan named “Plan 1” was previously initiated and is expected to need 1 hour and 50 minutes to complete. Based on an amount of time that has elapsed since execution of the orchestration plan was initiated, the completion progress indicator406can display an expected completion percentage of the orchestration plan. FIG.5is a flow diagram illustrating operations500of a method for providing action completion time forecasts for various types of software modernization and migration-related actions according to some embodiments. Some or all the operations500(or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations500are performed by the software migration and modernization services of the other figures. The operations500include, at block502, receiving, by a migration or modernization service of a cloud provider network, a request to perform a cloud migration action involving a software application in a user's on-premises computing environment; The operations500further include, at block504, identifying a plurality of features describing characteristics of the software application. The operations500further include, at block506, using the plurality of features as input to a machine learning (ML) model to obtain a result indicating a forecasted duration of time to be used to perform the cloud migration action. The operations500further include, at block508, causing display of a report including the result indicating the forecasted duration of time. 
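As one possible illustration of the progress indicator described above, the estimated completion percentage can be derived from the forecasted duration and the time elapsed since the orchestration plan was started. The function below is a sketch for illustration, not the service's implementation.

from datetime import datetime, timedelta

def estimated_completion_pct(started_at, forecast, now):
    elapsed = now - started_at
    return min(100.0, 100.0 * (elapsed / forecast))

# "Plan 1" is forecast at 1 hour 50 minutes; 55 minutes after it started, roughly 50% is shown.
start = datetime(2023, 1, 1, 9, 0)
pct = estimated_completion_pct(start, timedelta(hours=1, minutes=50), start + timedelta(minutes=55))
print(round(pct))  # 50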
In some embodiments, the operations500further include obtaining, by the migration or modernization service, historical data indicating, for each action execution of a plurality of past process executions performed by the migration or modernization service, features associated with a software application upon which the action execution was performed, and a duration of time to complete the action execution; and training the ML model using the historical data to forecast process action completion times based on software application features. In some embodiments, the operations500further include causing display of a progress indicator providing an estimate of a completion percentage of the cloud migration action, wherein the completion percentage is based on the forecasted duration of time to be used to perform the cloud migration action and a duration of time elapsed since execution of the cloud migration action was initiated. In some embodiments, the operations500further include receiving a request to migrate the software application from the user's on-premises computing environment to the cloud provider network, wherein the software application comprises a plurality of computing resources; generating an orchestration plan to be used to migrate the plurality of computing resources to the cloud provider network, wherein the orchestration plan includes a plurality of cloud migration actions including the cloud migration action; obtaining a plurality of results indicating a respective forecasted duration of time to be used to perform each of the plurality of cloud migration actions; determining an orchestration plan completion time forecast based on the plurality of results; and causing display of the orchestration plan completion time forecast. In some embodiments, the operations500further include receiving a request to migrate the software application from the user's on-premises computing environment to the cloud provider network, wherein the software application comprises a plurality of computing resources; identifying a plurality of candidate orchestration plans to be used to migrate the plurality of computing resources to the cloud provider network; determining a plurality of orchestration plan completion time forecasts, wherein each of the orchestration plan completion time forecasts is associated with a respective orchestration plan of the plurality of candidate orchestration plans; and causing display of the plurality of orchestration plan completion time forecasts. In some embodiments, the operations500further include receiving a request to migrate the software application from the user's on-premises computing environment to the cloud provider network, wherein the software application comprises a plurality of computing resources; identifying a plurality of candidate orchestration plans to be used to migrate the plurality of computing resources to the cloud provider network; determining a plurality of orchestration plan completion time forecasts, wherein each of the orchestration plan completion time forecasts is associated with a respective orchestration plan of the plurality of candidate orchestration plans; selecting an orchestration plan of the plurality of candidate orchestration plans based on the plurality of orchestration plan completion time forecasts; and executing the orchestration plan to migrate the software application to the cloud provider network.
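One way (among many) to train such a completion time model from historical action executions is sketched below using a generic gradient-boosted regressor; the feature layout (number of servers, number of databases, total data size) and the numeric values are hypothetical and not taken from any actual service.

from sklearn.ensemble import GradientBoostingRegressor

# Each row: [num_servers, num_databases, total_data_gb]; target: observed minutes to complete.
X = [[1, 0, 50], [3, 2, 400], [5, 1, 900], [2, 2, 250]]
y = [35.0, 180.0, 420.0, 150.0]

model = GradientBoostingRegressor().fit(X, y)
print(model.predict([[3, 2, 400]]))  # forecasted duration (in minutes) for a similar migration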
In some embodiments, the operations500further include sending, to a migration and modernization recommendation service, a request for one or more recommended orchestration plans for the software application, wherein the migration and modernization recommendation service queries a data model defining a plurality of orchestration plans to identify a recommended orchestration plan for the software application, and wherein the data model is queried using at least a portion of the plurality of features describing characteristics of the software application; and causing display of information describing the recommended orchestration plan. In some embodiments, the ML model is a first ML model and the result is a first result, wherein the migration or modernization process is part of a migration workflow including a plurality of cloud migration actions, wherein the plurality of migration and modernization actions includes a cloud migration action performed by a software agent running in the user's on-premises computing environment, and wherein the operations500further include: using the plurality of features as input to a second ML model to obtain a second result indicating a forecasted duration of time to be used to perform the cloud migration action performed by the software agent; and providing access to the second result. In some embodiments, the ML model is a first ML model and the result is a first result, wherein the cloud migration action is part of an orchestration plan including a plurality of cloud migration actions, wherein the plurality of migration and modernization actions includes a cloud migration action performed by a software agent running in the user's on-premises computing environment, and wherein the operations500further include: receiving input indicating an amount of computing resources available to a software agent running in the user's on-premises computing environment; using the plurality of features and the input indicating the amount of computing resources available to the software agent as input to a second ML model to obtain a second result indicating a forecasted duration of time to be used to perform the cloud migration action performed by the software agent; and providing access to the second result. In some embodiments, the operations500further include executing the cloud migration action; determining a duration of time used to execute the cloud migration action; and using the plurality of features and the duration of time to further train the ML model. In some embodiments, the migration or modernization service is provided in a plurality of regions of the cloud provider network, wherein the ML model is associated with a region of the plurality of regions, and wherein each region of the plurality of regions is associated with a respective ML model trained based on data obtained from the region. In some embodiments, the cloud migration action is part of an orchestration plan including a plurality of cloud migration actions, and wherein the operations500further include: obtaining a plurality of forecasted durations of time corresponding to the plurality of cloud migration actions; identifying an execution order associated with the plurality of cloud migration actions, wherein identifying the execution order includes identifying actions to be performed sequentially and actions that can be performed in parallel; and generating an orchestration plan completion time forecast based on the plurality of forecasted durations of time and the identified execution order. 
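The further-training step mentioned above, in which the measured duration of an executed action is folded back into the model's training data, might look roughly like the following; the action callable, feature layout, and fit/predict model interface are assumptions made for illustration.

import time

def execute_and_further_train(action_fn, features, history_X, history_y, model):
    started = time.monotonic()
    action_fn()                                   # perform the cloud migration action
    duration_min = (time.monotonic() - started) / 60.0
    history_X.append(features)                    # features describing the application
    history_y.append(duration_min)                # observed completion time
    model.fit(history_X, history_y)               # further train on the new observation
    return duration_min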
FIG.6illustrates an example provider network (or “service provider system”) environment according to some embodiments. A provider network600may provide resource virtualization to customers via one or more virtualization services610that allow customers to purchase, rent, or otherwise obtain instances612of virtualized resources, including but not limited to computation and storage resources, implemented on devices within the provider network or networks in one or more data centers. Local Internet Protocol (IP) addresses616may be associated with the resource instances612; the local IP addresses are the internal network addresses of the resource instances612on the provider network600. In some embodiments, the provider network600may also provide public IP addresses614and/or public IP address ranges (e.g., Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses) that customers may obtain from the provider600. Conventionally, the provider network600, via the virtualization services610, may allow a customer of the service provider (e.g., a customer that operates one or more client networks650A-650C including one or more customer device(s)652) to dynamically associate at least some public IP addresses614assigned or allocated to the customer with particular resource instances612assigned to the customer. The provider network600may also allow the customer to remap a public IP address614, previously mapped to one virtualized computing resource instance612allocated to the customer, to another virtualized computing resource instance612that is also allocated to the customer. Using the virtualized computing resource instances612and public IP addresses614provided by the service provider, a customer of the service provider such as the operator of customer network(s)650A-650C may, for example, implement customer-specific applications and present the customer's applications on an intermediate network640, such as the Internet. Other network entities620on the intermediate network640may then generate traffic to a destination public IP address614published by the customer network(s)650A-650C; the traffic is routed to the service provider data center, and at the data center is routed, via a network substrate, to the local IP address616of the virtualized computing resource instance612currently mapped to the destination public IP address614. Similarly, response traffic from the virtualized computing resource instance612may be routed via the network substrate back onto the intermediate network640to the source entity620. Local IP addresses, as used herein, refer to the internal or “private” network addresses, for example, of resource instances in a provider network. Local IP addresses can be within address blocks reserved by Internet Engineering Task Force (IETF) Request for Comments (RFC) 1918 and/or of an address format specified by IETF RFC 4193 and may be mutable within the provider network. Network traffic originating outside the provider network is not directly routed to local IP addresses; instead, the traffic uses public IP addresses that are mapped to the local IP addresses of the resource instances. The provider network may include networking devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to local IP addresses and vice versa. Public IP addresses are Internet mutable network addresses that are assigned to resource instances, either by the service provider or by the customer. 
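Purely as a toy illustration of the mapping and remapping behavior described above (and not how a provider network actually implements NAT), a public-to-local address table and a remap operation could be modeled as follows; the addresses are examples.

public_to_local = {"203.0.113.10": "10.0.1.5"}  # public IP -> local IP of the mapped instance

def route_inbound(public_ip):
    """NAT-style lookup: traffic addressed to a public IP is forwarded to the mapped local IP."""
    return public_to_local[public_ip]

def remap(public_ip, new_local_ip):
    """Customer remaps a public IP to another resource instance allocated to the customer."""
    public_to_local[public_ip] = new_local_ip

print(route_inbound("203.0.113.10"))  # 10.0.1.5
remap("203.0.113.10", "10.0.2.9")     # e.g., to mask a failed instance
print(route_inbound("203.0.113.10"))  # 10.0.2.9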
Traffic routed to a public IP address is translated, for example via1:1NAT, and forwarded to the respective local IP address of a resource instance. Some public IP addresses may be assigned by the provider network infrastructure to particular resource instances; these public IP addresses may be referred to as standard public IP addresses, or simply standard IP addresses. In some embodiments, the mapping of a standard IP address to a local IP address of a resource instance is the default launch configuration for all resource instance types. At least some public IP addresses may be allocated to or obtained by customers of the provider network600; a customer may then assign their allocated public IP addresses to particular resource instances allocated to the customer. These public IP addresses may be referred to as customer public IP addresses, or simply customer IP addresses. Instead of being assigned by the provider network600to resource instances as in the case of standard IP addresses, customer IP addresses may be assigned to resource instances by the customers, for example via an API provided by the service provider. Unlike standard IP addresses, customer IP addresses are allocated to customer accounts and can be remapped to other resource instances by the respective customers as necessary or desired. A customer IP address is associated with a customer's account, not a particular resource instance, and the customer controls that IP address until the customer chooses to release it. Unlike conventional static IP addresses, customer IP addresses allow the customer to mask resource instance or availability zone failures by remapping the customer's public IP addresses to any resource instance associated with the customer's account. The customer IP addresses, for example, enable a customer to engineer around problems with the customer's resource instances or software by remapping customer IP addresses to replacement resource instances. FIG.7is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers, according to some embodiments. Hardware virtualization service720provides multiple compute resources724(e.g., compute instances725such as VMs) to customers. The compute resources724may, for example, be rented or leased to customers of the provider network700(e.g., to a customer that implements customer network750). Each computation resource724may be provided with one or more local IP addresses. Provider network700may be configured to route packets from the local IP addresses of the compute resources724to public Internet destinations, and from public Internet sources to the local IP addresses of compute resources724. Provider network700may provide a customer network750, for example coupled to intermediate network740via local network756, the ability to implement virtual computing systems792via hardware virtualization service720coupled to intermediate network740and to provider network700. In some embodiments, hardware virtualization service720may provide one or more APIs702, for example a web services interface, via which a customer network750may access functionality provided by the hardware virtualization service720, for example via a console794(e.g., a web-based application, standalone application, mobile application, etc.). 
In some embodiments, at the provider network700, each virtual computing system792at customer network750may correspond to a computation resource724that is leased, rented, or otherwise provided to customer network750. From an instance of a virtual computing system792and/or another customer device790(e.g., via console794), the customer may access the functionality of storage service710, for example via one or more APIs702, to access data from and store data to storage resources718A-718N of a virtual data store716(e.g., a folder or “bucket”, a virtualized volume, a database, etc.) provided by the provider network700. In some embodiments, a virtualized data store gateway (not shown) may be provided at the customer network750that may locally cache at least some data, for example frequently-accessed or critical data, and that may communicate with storage service710via one or more communications channels to upload new or modified data from a local cache so that the primary store of data (virtualized data store716) is maintained. In some embodiments, a user, via a virtual computing system792and/or on another customer device790, may mount and access virtual data store716volumes via storage service710acting as a storage virtualization service, and these volumes may appear to the user as local (virtualized) storage798. While not shown inFIG.7, the virtualization service(s) may also be accessed from resource instances within the provider network700via API(s)702. For example, a customer, appliance service provider, or other entity may access a virtualization service from within a respective virtual network on the provider network700via an API702to request allocation of one or more resource instances within the virtual network or within another virtual network. In some embodiments, a system that implements a portion or all of the techniques described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media, such as computer system800illustrated inFIG.8. In the illustrated embodiment, computer system800includes one or more processors810coupled to a system memory820via an input/output (I/O) interface830. Computer system800further includes a network interface840coupled to I/O interface830. WhileFIG.8shows computer system800as a single computing device, in various embodiments a computer system800may include one computing device or any number of computing devices configured to work together as a single computer system800. In various embodiments, computer system800may be a uniprocessor system including one processor810, or a multiprocessor system including several processors810(e.g., two, four, eight, or another suitable number). Processors810may be any suitable processors capable of executing instructions. For example, in various embodiments, processors810may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, ARM, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors810may commonly, but not necessarily, implement the same ISA. System memory820may store instructions and data accessible by processor(s)810. In various embodiments, system memory820may be implemented using any suitable memory technology, such as random-access memory (RAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. 
In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above are shown stored within system memory820as software migration and modernization orchestration service code825(e.g., executable to implement, in whole or in part, the software migration and modernization orchestration service134or constituent services thereof) and data826. In one embodiment, I/O interface830may be configured to coordinate I/O traffic between processor810, system memory820, and any peripheral devices in the device, including network interface840or other peripheral interfaces. In some embodiments, I/O interface830may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory820) into a format suitable for use by another component (e.g., processor810). In some embodiments, I/O interface830may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface830may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface830, such as an interface to system memory820, may be incorporated directly into processor810. Network interface840may be configured to allow data to be exchanged between computer system800and other devices860attached to a network or networks850, such as other computer systems or devices as illustrated inFIG.1, for example. In various embodiments, network interface840may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface840may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks (SANs) such as Fibre Channel SANs, or via I/O any other suitable type of network and/or protocol. In some embodiments, a computer system800includes one or more offload cards870A or870B (including one or more processors875, and possibly including the one or more network interfaces840) that are connected using an I/O interface830(e.g., a bus implementing a version of the Peripheral Component Interconnect—Express (PCI-E) standard, or another interconnect such as a QuickPath interconnect (QPI) or UltraPath interconnect (UPI)). For example, in some embodiments the computer system800may act as a host electronic device (e.g., operating as part of a hardware virtualization service) that hosts compute resources such as compute instances, and the one or more offload cards870A or870B execute a virtualization manager that can manage compute instances that execute on the host electronic device. As an example, in some embodiments the offload card(s)870A or870B can perform compute instance management operations such as pausing and/or un-pausing compute instances, launching and/or terminating compute instances, performing memory transfer/copying operations, etc. These management operations may, in some embodiments, be performed by the offload card(s)870A or870B in coordination with a hypervisor (e.g., upon a request from a hypervisor) that is executed by the other processors810A-810N of the computer system800. 
However, in some embodiments the virtualization manager implemented by the offload card(s)870A or870B can accommodate requests from other entities (e.g., from compute instances themselves), and may not coordinate with (or service) any separate hypervisor. In some embodiments, system memory820may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system800via I/O interface830. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, double data rate (DDR) SDRAM, SRAM, etc.), read only memory (ROM), etc., that may be included in some embodiments of computer system800as system memory820or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface840. Various embodiments discussed or suggested herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general-purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and/or other devices capable of communicating via a network. Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of widely-available protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), File Transfer Protocol (FTP), Universal Plug and Play (UPnP), Network File System (NFS), Common Internet File System (CIFS), Extensible Messaging and Presence Protocol (XMPP), AppleTalk, etc. The network(s) can include, for example, a local area network (LAN), a wide-area network (WAN), a virtual private network (VPN), the Internet, an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network, and any combination thereof. In embodiments utilizing a web server, the web server can run any of a variety of server or mid-tier applications, including HTTP servers, File Transfer Protocol (FTP) servers, Common Gateway Interface (CGI) servers, data servers, Java servers, business application servers, etc. 
The server(s) also may be capable of executing programs or scripts in response requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, PHP, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, IBM®, etc. The database servers may be relational or non-relational (e.g., “NoSQL”), distributed or non-distributed, etc. Environments disclosed herein can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and/or at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random-access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc. Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed. 
Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc-Read Only Memory (CD-ROM), Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. In the preceding description, various embodiments are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) are used herein to illustrate optional operations that add additional features to some embodiments. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments. Reference numerals with suffix letters (e.g.,718A-718N) may be used to indicate that there can be one or multiple instances of the referenced entity in various embodiments, and when there are multiple instances, each does not need to be identical but may instead share some general traits or act in common ways. Further, the particular suffixes used are not meant to imply that a particular amount of the entity exists unless specifically indicated to the contrary. Thus, two entities using the same or different suffix letters may or may not have the same number of instances in various embodiments. References to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Moreover, in the various embodiments described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). 
Similarly, language such as “at least one or more of A, B, and C” (or “one or more of A, B, and C”) is intended to be understood to mean A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, and at least one of C to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or multiple described items. Accordingly, phrases such as “a device configured to” or “a computing device” are intended to include one or multiple recited devices. Such one or more recited devices can be collectively configured to carry out the stated operations. For example, “a processor configured to carry out operations A, B, and C” can include a first processor configured to carry out operation A working in conjunction with a second processor configured to carry out operations B and C. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims.
86,079
11861363
DETAILED DESCRIPTION Systems and methods described herein relate to a development landscape build system used to build new computing systems, such as a development or maintenance computing system used to do development and maintenance of a product, a test system to test to confirm the developed product works as it should, a consolidation system to ensure that the product complies to quality standards, among other types of computing systems. As explained above, there are many specific requirements for a computing system for each particular entity or business.FIG.11shows an example of some of the different requirements that can be needed for any computing system. Even when considering only these most basic attributes or characteristics of a system, there are thousands of possible permutations. InFIG.11, the characteristics reach some 24+ million permutations, and given that a system has many hundreds of other specific characteristics of a particular entity and landscape, simple multiplication yields an almost infinite number of theoretically possible permutations. Thus, to build such custom systems in a timely manner and at any reasonable scale is incredibly challenging and limited to what is manually possible. One option for building such systems for various entities is to build only standard computing systems. For example, a template that includes a number of standard components (e.g., one type of cloud, one type of operating system, one database type) can be created that can then be copied to build each new system. Since each system is the same, it is an easier and faster process to build a new system for an entity. These standard computing systems, however, would not account for any special characteristics that an entity requires. Since most entities need numerous special characteristics to run a business, a way to build a custom computing system in a consistent, accurate, and accelerated manner is needed. Accordingly, there are many technical challenges to building a new computing system for an entity as a custom computing system to meet the requirements particular to the entity. For example, there are issues with transparency, scalability, accuracy, and speed of building a custom computing system. As alluded to above, with millions of custom configuration options due to the different types of clouds, operating systems, database systems, product types, and the like, it is not possible to generate all possibilities for a custom system manually. Moreover, manual configuration of a new computing system creates inconsistencies, since each person working on a system has different styles, preferences, levels of detail for documentation, and the like. Further, in addition to limited options, building a system manually introduces many errors, and progress during a build of a new system is not easy to track manually. Example embodiments provide planning tools to specify requirements specific (custom) to an entity, modeling tools for modeling different business cases, and scripts to be used to build the new computing system. Further, example embodiments provide for a several-layer hierarchy used to select scripts to execute to install and configure the new computing system. In this way, customization options for a new computing system are unlimited, each build is consistent regardless of the customization required for the new computing system, and it is easy to detect errors and track progress during a build of the new computing system. 
Moreover, the entire process makes for a more efficient overall system, thereby conserving computing resources and accelerating the time to build (e.g., install and configure) a new computing system. For instance, embodiments described herein provide for receiving, from a computing device by a computing system, a selected system attribute to be used to build a new computing system for a given entity and generating, by the computing system, a subset of parameters relevant to the selected system attribute, from a plurality of parameters available for a variety of system attributes. The computing system causes display of the subset of parameters on the computing device, receives, from the computing device, values corresponding to each parameter of the subset of the parameters relevant to the selected system attribute, and stores as custom parameters in a database the values corresponding to each of the subset of the parameters relevant to the selected system attribute. The computing system further retrieves standard parameters from a set of tables in the database and the custom parameters from the database and inputs the standard parameters and custom parameters into a decision and execution hierarchy having a plurality of levels for execution, wherein a final level in the decision and execution hierarchy comprises a plurality of scripts for execution. The computing system executes a subset of the plurality of scripts for execution based on traversing the decision and execution hierarchy using the standard parameters and custom parameters to install and configure the new computing system for the given entity. FIG.1is a block diagram illustrating a networked system100, according to some example embodiments. The system100may include one or more client devices such as client device110. The client device110may comprise, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smart phone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronic, game console, set-top box, computer in a vehicle, or any other computing or communication device that a user may utilize to access the networked system100. In some embodiments, the client device110may comprise a display module (not shown) to display information (e.g., in the form of user interfaces). In further embodiments, the client device110may comprise one or more of touch screens, accelerometers, gyroscopes, cameras, microphones, global positioning system (GPS) devices, and so forth. The client device110may be a device of a user106that is used to access and utilize cloud services or a development landscape build system124, among other applications. One or more users106may be a person, a machine, or other means of interacting with the client device110. In example embodiments, the user106may not be part of the system100but may interact with the system100via the client device110or other means. For instance, the user106may provide input (e.g., touch screen input or alphanumeric input) to the client device110and the input may be communicated to other entities in the system100(e.g., third-party server system130, server system102) via the network104. In this instance, the other entities in the system100, in response to receiving the input from the user106, may communicate information to the client device110via the network104to be presented to the user106. In this way, the user106may interact with the various entities in the system100using the client device110.
In one example, the user is a developer of one or more applications (e.g., mobile and desktop web applications), a developer of models or scripts for a new computing system build, or a quality assurance engineer. The system100may further include a network104. One or more portions of network104may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a WiMax network, another type of network, or a combination of two or more such networks. The client device110may access the various data and applications provided by other entities in the system100via web client112(e.g., a browser, such as the Internet Explorer® browser developed by Microsoft® Corporation of Redmond, Washington State) or one or more client applications114. The client device110may include one or more client applications114(also referred to as “apps”) such as, but not limited to, a web browser, a search engine, a messaging application, an electronic mail (email) application, an e-commerce site application, a mapping or location application, an enterprise resource planning (ERP) application, a customer relationship management (CRM) application, a procurement, spend management and supply chain services application, entity matching system, a user interface for a development landscape build system, and the like. In some embodiments, one or more client applications114may be included in a given client device110, and configured to locally provide the user interface and at least some of the functionalities, with the client application(s)114configured to communicate with other entities in the system100(e.g., third-party server system130, server system102, etc.), on an as-needed basis, for data and/or processing capabilities not locally available (e.g., access location information, access software version information, access an ERP system, access a CRM system, access machine learning models, access procurement, spend management and supply chain services, entity matching system, to authenticate a user106, to verify a method of payment, access test data, access a development landscape build system and so forth), to build a new computing system, and so forth. Conversely, one or more applications114may not be included in the client device110, and then the client device110may use its web browser to access the one or more applications hosted on other entities in the system100(e.g., third-party server system130, server system102). A server system102may provide server-side functionality via the network104(e.g., the Internet or wide area network (WAN)) to one or more third-party server system130and/or one or more client devices110. The server system102may include an application program interface (API) server120, a web server122, and a development landscape build system124that may be communicatively coupled with one or more databases126. The one or more databases126may be storage devices that store data related to users of the system100, applications associated with the system100, cloud services, machine learning models, parameters, models and scripts for a new computing system build, and so forth. 
The one or more databases126may further store information related to third-party server system130, third-party applications132, client devices110, client applications114, users106, and so forth. In one example, the one or more databases126is cloud-based storage. The server system102may be a cloud computing environment, according to some example embodiments. The server system102, and any servers associated with the server system102, may be associated with a cloud-based application, in one example embodiment. The development landscape build system124may provide back-end support for third-party applications132and client applications114, which may include cloud-based applications. The development landscape build system124may provide for generating parameters, models, and scripts for a new computing system build as well as executing such scripts using the parameters and models in a hierarchical manner, as described in further detail below. The development landscape build system124may comprise one or more servers or other computing devices or systems. The system100further includes one or more third-party server system130. The one or more third-party server system130may include one or more third-party application(s). The one or more third-party application(s)132, executing on third-party server(s)130, may interact with the server system102via API server120via a programmatic interface provided by the API server120. For example, one or more of the third-party applications132may request and utilize information from the server system102via the API server120to support one or more features or functions on a website hosted by the third party or an application hosted by the third party. The third-party website or application132, for example, may provide access to functionality and data supported by third-party server system130. In one example embodiment, the third-party website or application132may provide access to functionality that is supported by relevant functionality and data in the third-party server system130. In one example, a third-party server system130is a system associated with an entity that accesses cloud services via server system102. FIG.2is a block diagram illustrating further details of development landscape build system124. The parametrization system202comprises customizing tables204and custom configuration sheet206. The customizing tables204and custom configuration sheet206are data sources used to generate both standard and customized parameters that are fed into the development landscape build system124and used to execute a script or task for installation and configuration of a new computing system210. Specifically, customizing tables204contain standard parameters that are standard across all new computing system builds. These may change with time; any changes or updates would then apply across all further new computing system builds. One example of a standard parameter is a user of the system and the role assignment for the user. For example, a developer must have specified roles or authorizations. This is a standard defined in the customizing tables204. Typically, there are hundreds of customizing tables204in the development landscape build system124. In one example, standard parameters may be specified by product type or system type or system role (e.g., system attribute). In this example, only a subset of the standard parameters may be relevant for a given product type, or system type, or system role. 
FIG.3illustrates an example customizing table204A that holds data relevant for creation of standard remote function calls (RFCs). This is just one example of hundreds of customization tables that can be used in the development landscape build system124. The example customizing table204A comprises a list of activities302in the first column. Each activity can be referred to as a key field. The development landscape build system124can call the customizing table204A with a specified key and then read the values306in the columns associated with the particular key. For example, the development landscape build system124can call the customizing table204A with the HOTPACKRUL key field304and read the values306associated with the HOTPACKRUL key field304. The development landscape build system124can then generate an input form, such as a JSON input, for the HOTPACKRUL activity and populate the input form with the values306read from the customizing table204A.FIG.4illustrates an example table entry or input form400for the HOTPACKRUL activity. Fixed values are entered into the respective fields and variables are maintained with the help of placeholders indicated by parameters with angle brackets < >, such as <SID>, <CLIENT> or $<ALEREMOTE00000020481>. The placeholders will be replaced with real values (e.g., from a custom configuration sheet206) during runtime. A configuration script or task uses the input form, such as the JSON input, to execute the creation of an RFC based on the values in the input form during runtime. For example, the data is retrieved at runtime during the build process via a webservice, such as a Customizing Web Services (CWS) in JSON format, and fed as input into the script or task to be executed. An example CWS for the example input form400may be: https://ldciadl.wdf.sap.corp:44315/sap/dlmoat/r3 rfc?activity=HOTPACKRUL&sid =OAS&client=000&sap-client=600. Calling this customized webservice will then populate the placeholders with values and generate the example output500shown inFIG.5. For instance, the values for placeholders <SID> and <CLIENT> are generated from the custom configuration sheet206and added dynamically to the URL as additional arguments. Moreover, a placeholder identifier from Passvault is used to retrieve a password from Passvault, which is decrypted and used at runtime whenever it is needed (e.g., like in this example where the password of the RFC user must be entered into the RFC connection). With data taken from the customizing table204A and the custom configuration sheet206and Passvault, the development landscape build system124generates the example output500. Returning toFIG.2, the custom configuration sheet206is where custom parameters are generated and stored that are specific to the entity for which the new computing system210is being built. These custom parameters include parameters such as a database type, a cloud type, a system type, a client type, a network release, a service level, an installation type, a system role, a product type, or the like. The custom configuration sheet206is specific to the new computing system210to be built and is different for each computing system to be built. The configuration sheet206is a tool for an individual system and client customizing data and acts as a single source of truth for customizing the new computing system210. Only content relevant for the intended purpose and scope of the new computing system210need be supplied in the custom configuration sheet206.
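The placeholder mechanism described above with reference to FIGS. 3-5 can be sketched as follows: values are read from a customizing table by key field, and placeholders such as <SID> and <CLIENT> are replaced at runtime with entries from the custom configuration sheet. The table contents, field names, and substitution helper below are illustrative and are not the actual Customizing Web Service.

import re

CUSTOMIZING_TABLE = {  # key field -> template values for the input form
    "HOTPACKRUL": {"rfc_destination": "ATC_<SID>_<CLIENT>", "client": "<CLIENT>", "user": "ALEREMOTE"},
}

config_sheet = {"SID": "OAS", "CLIENT": "000"}  # custom parameters for this particular system

def render_input_form(activity, sheet):
    template = CUSTOMIZING_TABLE[activity]
    def substitute(text):
        return re.sub(r"<(\w+)>", lambda m: sheet[m.group(1)], text)
    return {field: substitute(value) for field, value in template.items()}

print(render_input_form("HOTPACKRUL", config_sheet))
# {'rfc_destination': 'ATC_OAS_000', 'client': '000', 'user': 'ALEREMOTE'}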
In one example, this is achieved by selection of a system type, installation type, and system role, as described in further detail below with respect toFIG.7. The development landscape build system124further comprises a set of scripts208. Each configuration step in the build process for a new computing system210has one script assigned and does one action in the new computing system210, such as creating an RFC, as described above. In this way, modules can be re-used during the build workflow in another context, just by feeding them different parameter values. A build comprising installation and configuration of the new computing system210is executed via one or more scripts208as described in further detail below. The development landscape build system124further comprises a self healer212to automatically address any errors in the new computing system210, and a quality check process214and quality check framework216. The development landscape build system124further comprises output218that is used in the reporting framework220and log framework222. The reporting tool224provides a variety of data including the output from the reporting framework220and quality check framework216. The reporting tool224can further provide key performance indicators, such as delivery data and quality, as well as setup time and other data and metrics corresponding to the build of the new computing system210. The development landscape build system124can also utilize and integrate with other tools226, such as SLIM, SISM, DUCC, Zeus, IdM, Nagios, ITdirect, BCP, ServiceNow, BKPMON, Procon, SWPM, and the like. FIG.6is a flow chart illustrating aspects of a method600for installing and configuring a new computing system customized for a specific entity, according to some example embodiments. For illustrative purposes, method600is described with respect to the block diagrams ofFIG.1andFIG.2. It is to be understood that method600may be practiced with other system configurations in other embodiments. In operation602, a computing system (e.g., server system102or development landscape build system124), receives a selected system attribute (e.g., system role) to be used to build a new computing system (e.g., new computing system210) for a given entity. For example, a user, via a computing device (e.g., client device110), interacts with a user interface on the computing device to select a system attribute to be used to build (e.g., install and configure) the new computing system.FIG.7illustrates an example user interface700for selecting a system role706to create a custom configuration sheet206. In this example, there are three different categories to choose from, including a system type702, an installation type704, and a system role706. It is to be understood that this is just an example; different and more or fewer categories could be used in example embodiments, and different and more or fewer options under each category could be used in example embodiments. In one example, only a system role706is needed to generate a custom configuration sheet206. As shown in the example user interface700, a system type702can include ABAP (Advanced Business Application Programming) and JAVA as options to choose for system type702. Other examples of system type702include J2EE, combination ABAP and J2EE (double stack), TREX Server, BOE Server, Content Server, LiveCache, Web Dispatcher, Standalone DB, HANA Standalone DB, SLT Replication Server. 
The installation type704can be a new installation, a re-build, a system/database migration, or a database copy, in this example. The system role706can be development, correction, test, consolidation, or translation, in this example. Based on the selected system type702, a suitable installation type704and system role706can be selected. As shown in the example inFIG.7, the user has selected ABAP as the system type702, a new installation as the installation type704, and development as the system role706. After making selections, the user can select to create configsheet708to generate the custom configuration sheet206. The computing device then sends the selections (e.g., the selected system role or attribute) to the computing system. After receiving the selected system role or attribute from the computing device, in operation604, the computing system generates a subset of parameters relevant to the selected system attribute. For example, there may be a plurality of parameters available for a variety of system attributes. The computing system only selects those parameters (a subset) that are relevant to the selected system attribute. In this way, only the subset of parameters relevant to the selected system attribute are then provided to the user, via the computing device, so that the user only has to enter values for those relevant to build the new computing system. The computing system causes display of the subset of parameters on the computing device. The user can then enter, via the display, values for each parameter of the subset of parameters. The computing device sends these entered values to the computing system. In operation604the computing system receives, from the computing device, the values corresponding to each parameter of the subset of parameters relevant to the selected system attribute. In operation606, the computing system stores, as custom parameters in a database (e.g., database126), the values corresponding to each parameter of the subset of parameters relevant to the selected system attribute. In one example, the custom parameters are stored in the database as a custom configuration sheet206. After the custom configuration sheet206is generated with the custom parameters, a business process modeler application is used to define business processes to be executed during the build for the new computing system. Scripts are assigned to each defined business process that is executed during the build for the new computing system. Each configuration step comprises a script and is included in a final level of a hierarchy for execution during a build of a new computing system, as explained further below. To start the build process for the new computing device, the build can be accomplished by a single mouse click or other input to indicate the build should start. To start the build process, the computing system retrieves standard parameters from a set of tables in a database (e.g., database126). As explained above, these standard parameters can be stored in customizing tables204. The computing system also retrieves the custom parameters (e.g., custom configuration sheet206) from the database. In operation608, the computing system inputs the standard parameters and custom parameters into a decision and execution hierarchy. The decision and execution hierarchy comprises a plurality of levels for execution, and the final level of the plurality of the levels comprises scripts for execution. 
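Operations 604 and 606 described above can be sketched as filtering a parameter catalog down to the entries relevant to the selected system attribute and persisting the entered values as the custom configuration sheet; the parameter names, relevance tags, and storage shape below are hypothetical illustrations.

PARAMETER_CATALOG = {
    "SID":              {"system_roles": {"development", "test", "consolidation"}},
    "ATC_RUNNING":      {"system_roles": {"development"}},
    "TRANSLATION_LANG": {"system_roles": {"translation"}},
}

def relevant_parameters(system_role):
    """Only the subset of parameters tagged for the selected system role is surfaced to the user."""
    return [name for name, meta in PARAMETER_CATALOG.items() if system_role in meta["system_roles"]]

def store_config_sheet(database, system_id, values):
    database[system_id] = values  # persisted as the custom configuration sheet for this build

print(relevant_parameters("development"))  # ['SID', 'ATC_RUNNING']
db = {}
store_config_sheet(db, "new-system-210", {"SID": "OAS", "ATC_RUNNING": True})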
The scripts in this final level are those developed previously, each assigned to a configuration step that can be executed during the build process. Only a subset of these scripts is executed depending on custom parameters for the new computing system. FIG.8illustrates an example hierarchy800comprising six levels. The first five levels802-810each comprise sublevels where each sublevel is a decision point for moving to a next level of the hierarchy800. For example, the first level 802, indicated by M1, comprises sublevels W1, W2, W . . . of the second level 804. Each sublevel of the second level 804 comprises specified sublevels of the third level 806, and so forth until the final or sixth level 812, which does not contain any sublevels. Instead, the final level 812 comprises a plurality of configuration steps which are scripts for execution based on traversing the decision and execution hierarchy800. It is to be understood that the six levels in the example hierarchy800are used as an example and that any number of levels can be used in example embodiments. In one example, the example hierarchy800comprises an M-level as the highest level, which is the build wrapper calling the phases, the phases (W-level), sub-phases (A-level), configuration scenarios (S-level), configuration processes (P-level), and configuration steps (C-level). To use a specific example, we will step through the hierarchy from the build wrapper phase (M1) through a configuration step C1 to create an RFC to an ABAP Test Cockpit (ATC) reference system. In this example, the wrapper level M1(802) is always performed; there is no exception. In one example, the wrapper level M1(802) checks the operating system (OS) and database (DB) type. For example, if the development landscape build system124only supports Linux systems and not Windows for an operating system and HDB and DB2 but not MaxDB or mySQL as database types, phase installation (the second-level, W-level) is only executed for the supported OS and DB. In one example, the flow condition towards the phase installation is: ${variables:equals(OSTYPE, “L”) and variables:containsAny(DBTYPE, “HDB”, “DB6”)} Variables OSTYPE and DBTYPE and their technical values are from custom configuration sheet206in this case. If the OS and DB type in the custom configuration sheet206are not supported, then a prompt to manually configure the system is provided. After determining that the OS and DB type are supported, based on the technical values for the OS type and DB type in the custom configuration sheet206, the process continues to the next level of the hierarchy, which is the W-level 804 in this example. The W-level may comprise a main configuration phase (e.g., W1). From the main configuration phase W1, a sub-phase in the third level (A-level 806) for a standard system and landscape configuration (e.g., A1) is always performed without exception. Therefore, in this example no input is required to take a decision. In the standard system and landscape configuration sub-phase, the need for an ATC configuration is checked. In one example, the flow condition towards configuring an ATC is: ${variables:equals(ATC_RUNNING, true)} The value is taken from custom configuration sheet206in this case. If the condition is true, the ATC will be configured; otherwise it is entirely skipped and the development landscape build system124would immediately proceed to a configuration scenario to configure PQP.
In this example, assuming that the condition is true, the process continues to the next level of the hierarchy, which is the fourth level (S-level 808) in this example. In the configuration scenario to configure the ATC (e.g., S1), the development landscape build system124checks whether the system is a so-called ATC execution system or a target system. The flow condition towards the ATC execution system is: ${variables:equals(ATC_CHECKM_REQ, “01”)} The value is taken from the custom configuration sheet206in this case. In the case where the system is an ATC execution system, the process continues to a configuration process in the next level of the hierarchy, which is the P-level 810 in this example. In this example, the configuration process is “maintain basic ATC settings in ATC execution system” (e.g., P1). For this configuration process to maintain basic ATC settings in the ATC execution system, the development landscape build system124checks whether a so-called reference system exists, usually within the system landscape of a previous release. The flow condition for this is: ${variables:equals(CHECKS_PROF_COPY, “X”) and variables:isEmpty(SETUP_REFERENCE_SID)} Both these values are taken from the custom configuration sheet206in this case. The above flow condition is followed in the case where a reference system is not provided and the copy of the ATC check profile is requested. Assuming that a reference system has been provided, the process continues to a configuration step (e.g., C1) for creating an RFC to an ATC reference system in the C-level of the hierarchy. The configuration step C1 is then executed, as shown and explained below with reference toFIG.10. Returning toFIG.6, the computing system executes a subset of the plurality of scripts for execution based on traversing the decision and execution hierarchy using the standard parameters and custom parameters to install and configure the new computing system for the given entity, in operation610. For example, an input form (as explained above) is generated for each component in the hierarchy. For example, an input form is generated for M1, an input form is generated for W1, an input form is generated for W2, and an input form is generated for each of A1-A4, S1-S2, P11, P12, P21, P22, and C111-C222.FIG.9shows a table900indicating how an input form corresponds to each component in the hierarchy. In the example table900, the input form902for M1, which is the first level in the hierarchy, comprises “Ecs Key”, which is a reference to the custom configuration sheet206for the new computing system. During execution, the computing system uses parameters in the custom configuration sheet206and the parameters in the customizing tables204that correspond to the specified Ecs Key and standard parameters to determine which sublevel (in the second level 804) to branch to. The same process is used for the rest of the levels802-810(e.g., all levels but the last level 812 that comprises the configuration steps) using their corresponding input forms as shown with reference number904inFIG.9. Likewise, each configuration step has an associated input form (such as input form400shown inFIG.4and described above) and examples of these input forms for C1-C4are shown as reference number906inFIG.9. Note that these input forms also comprise parameters (e.g., par1, par2, par3, par4, par5, par6, par7, par8) that are used in execution of the configuration step (e.g., script).
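To make the traversal concrete, here is a rough Python sketch of how the decision and execution hierarchy ofFIG.8might be walked using flow conditions evaluated over the custom and standard parameters. The actual system evaluates Flowable expressions such as ${variables:equals(. . .)}, so the Node structure, condition functions, and node names below are assumptions for illustration only.

```python
# Hypothetical sketch of traversing the decision and execution hierarchy (FIG. 8).
# Each node carries a flow condition over the merged standard + custom parameters;
# only branches whose condition holds are followed, and the C-level leaves are the
# configuration-step scripts that actually get executed.
from typing import Callable, Dict, List

Params = Dict[str, str]

class Node:
    def __init__(self, name: str,
                 condition: Callable[[Params], bool] = lambda p: True,
                 children: List["Node"] = None,
                 script: Callable[[Params], None] = None):
        self.name = name
        self.condition = condition   # decision point for entering this node
        self.children = children or []
        self.script = script         # only C-level leaves carry a script

def traverse(node: Node, params: Params, executed: List[str]) -> None:
    if not node.condition(params):
        return                          # branch skipped, e.g. ATC not requested
    if node.script is not None:
        node.script(params)             # execute the configuration step (leaf)
        executed.append(node.name)
    for child in node.children:
        traverse(child, params, executed)

# Conditions mirroring the flow conditions quoted above (illustrative only).
supported_os_db = lambda p: p.get("OSTYPE") == "L" and p.get("DBTYPE") in ("HDB", "DB6")
atc_requested = lambda p: p.get("ATC_RUNNING") == "true"

c1 = Node("C1: create RFC to ATC reference system",
          script=lambda p: print("creating RFC to", p.get("SETUP_REFERENCE_SID")))
hierarchy = Node("M1 build wrapper", children=[
    Node("W1 installation phase", condition=supported_os_db, children=[
        Node("A1 standard system and landscape configuration", children=[
            Node("S1 configure ATC", condition=atc_requested, children=[
                Node("P1 maintain basic ATC settings", children=[c1]),
            ]),
        ]),
    ]),
])

executed: List[str] = []
params = {"OSTYPE": "L", "DBTYPE": "HDB", "ATC_RUNNING": "true",
          "SETUP_REFERENCE_SID": "REF"}
traverse(hierarchy, params, executed)
print(executed)   # ['C1: create RFC to ATC reference system']
```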
Returning toFIG.8for a high-level summary, in one example the example hierarchy800is traversed by first evaluating standard and custom parameters, using the input form for M1, to determine whether to branch to sublevel W1or W2at second level 804. If the computing system determines, based on the evaluation of the standard and custom parameters, that it should branch to W1, the computing system then evaluates standard and custom parameters using the input form for W1to determine whether it should branch to sublevel A1or A2at third level 806. The computing system uses the same process to then branch to S1 in the fourth level 808, P2in the fifth level 810, and then executes the configuration steps from P2that comprise C3and C4using the input forms for each of C3and C4. In one example, indications of which processes can be executed in parallel, and which processes are dependent upon another process and thus must be executed in a particular order, are built into the hierarchy. For example, a parallel gateway or other indicator can be used to indicate parallel flows to execute independent activities in a parallel manner. In another example, a flow from a first process or activity to a second process or activity can indicate that the second process or activity is dependent on the first process or activity. Thus, the second process or activity will only execute when the first process or activity is completed. As explained above, a configuration step is the smallest entity and on the lowest level of the hierarchy. In one example, the hierarchy is maintained in an open-source workflow engine, Flowable. It is to be understood that different environments or tools can be used in example embodiments. In example embodiments, there may be over a thousand or thousands of individual configuration steps. As also explained above, a configuration step is in the form of a script and is a business process which consists of many elements that serve a specific purpose, such as the RFC example shown inFIG.4and described above. FIG.10illustrates an example model1000for a configuration step for creating an RFC connection or destination to other systems. At a start event1002, the computing system reads the parameter values relevant for the script. As explained above, this is done using the input form corresponding to the configuration step. For example, to create an RFC connection, certain values are needed, such as the target system, the target client, the user, the password, and so forth. The values for these parameters are all part of the corresponding input form. Using the values, the RFC is created at operation1004. The configuration can be verified by a quality check1006. The quality check1006is a service task to make sure that the configuration was done successfully. If the quality check1006returns a success (e.g., no errors were found) then the process ends at the end event1008. If the quality check1006detects at least one error, then the computing system sets the specific error text at1010based on return codes returned from the quality check1006and proceeds to the self-healing (e.g., via self-healer212ofFIG.2). For example, the respective error text triggers the required self-healing via respective flow conditions and thus, the system executes a self-healing process specific for the error type. Various self-healing tasks can be included in the configuration step. In this way, some errors can be automatically fixed.
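The configuration-step flow ofFIG.10, together with the re-check and error handling described in the next paragraph, can be sketched roughly as follows. The function names, the return code text, and the fatal/non-fatal classification below are assumptions for illustration, not the actual scripts.

```python
# Hypothetical sketch of a configuration step (FIG. 10): execute the action,
# quality-check it, self-heal once, re-check, and fall through to fatal or
# non-fatal error handling if the error persists.
class FatalBuildError(Exception):
    """Raised when the build cannot proceed past this configuration step."""

def run_configuration_step(params, action, quality_check, self_heal, fatal=False):
    action(params)                       # e.g. create the RFC connection
    error = quality_check(params)        # None means no errors were found
    if error is None:
        return "ok"

    self_heal(error, params)             # the error text selects the healing flow
    if quality_check(params) is None:
        return "healed"

    # The error persists after self-healing: avoid a loop and hand off.
    if fatal:
        # Build must stop; manual intervention is required before resuming.
        raise FatalBuildError(error)
    # Non-fatal: record a quality issue, notify, and let the build continue.
    print("quality issue created, notification sent:", error)
    return "non_fatal_error"

# Illustrative usage: the first quality check fails, self-healing fixes it.
state = {"rfc_created": False, "healed": False}
result = run_configuration_step(
    state,
    action=lambda p: p.update(rfc_created=True),
    quality_check=lambda p: None if p["healed"] else "RC8: RFC ping failed",
    self_heal=lambda err, p: p.update(healed=True),
)
print(result)   # "healed"
```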
After self-healing, a quality check1006is again performed to check if any further errors are detected. If the exact same error occurs a second time, the process flow is directed toward error handling to avoid a loop. For error handling, there are two options, depending on the severity of the error. For example, in the case where the overall build workflow can proceed despite the presence of an error, the error type is a non-fatal error and the computing system directs the flow to the non-fatal error branch1014and proceeds to creating a quality issue and sending a notification at operation1012(e.g., to a recipient such as a system builder or build lead). A non-fatal error can typically be handled at any later point in time. In the case where the build must stop because the error happens at a configuration step which is a prerequisite for the build workflow to proceed and later configuration steps to be successful, the computing system directs the flow to the fatal error branch1016for a manual intervention1018. For example, the computing system creates a quality issue and sends an error notification as described above for the non-fatal error. The build resumes only after the root cause of the error has been eliminated (e.g., by a system builder) or after a system builder decides to do the configuration manually in the system or ignore the error. In view of the above disclosure, various examples are set forth below. It should be noted that one or more features of an example, taken in isolation or combination, should be considered within the disclosure of this application. Example 1. A computer-implemented method comprising:receiving, from a computing device by a computing system, a selected system attribute to be used to build a new computing system for a given entity;generating, by the computing system, a subset of parameters relevant to the selected system attribute, from a plurality of parameters available for a variety of system attributes;causing, by the computing system, display of the subset of the parameters on the computing device;receiving, from the computing device by the computing system, values corresponding to each parameter of the subset of the parameters relevant to the selected system attribute;storing as custom parameters in a database, by the computing system, the values corresponding to each of the subset of the parameters relevant to the selected system attribute;retrieving, by the computing system, standard parameters from a set of tables in the database;retrieving, by the computing system, the custom parameters from the database;inputting, by the computing system, the standard parameters and custom parameters into a decision and execution hierarchy having a plurality of levels for execution, wherein a final level in the decision and execution hierarchy comprises a plurality of scripts for execution; and executing, by the computing system, a subset of the plurality of scripts for execution based on traversing the decision and execution hierarchy using the standard parameters and custom parameters to install and configure the new computing system for the given entity. Example 2. A computer-implemented method according to any of the previous examples, wherein the custom parameters are parameters that are used to build the new computing system as a custom computing system. Example 3. A computer-implemented method according to any of the previous examples, wherein the standard parameters are parameters that are standard for building any final computing system. Example 4.
A computer-implemented method according to any of the previous examples, wherein an input form is generated for each of the plurality of scripts, the input form comprising the parameters for execution of a respective script. Example 5. A computer-implemented method according to any of the previous examples, wherein after successful execution of each script of the subset of the plurality of scripts, the method comprises:performing a quality check to determine whether any errors are present;based on determining at least one error is present, determining an error type; and executing a self-healing process specific for the error type. Example 6. A computer-implemented method according to any of the previous examples, further comprising:determining that the error type is a non-fatal error; andgenerating a quality issue and sending a notification corresponding to the error type. Example 7. A computer-implemented method according to any of the previous examples, further comprising:determining that the error type is fatal, indicating that completion of a script is a prerequisite for a build of the new computing system to proceed; andstopping execution of the subset of the plurality of scripts. Example 8. A computer-implemented method according to any of the previous examples, wherein each level of the plurality of levels before the final level comprises sublevels, wherein each sublevel is a decision point for moving to a next level of the plurality of levels. Example 9. A system comprising:a memory that stores instructions; andone or more processors configured by the instructions to perform operations comprising:receiving, from a computing device, a selected system attribute to be used to build a new computing system for a given entity;generating a subset of parameters relevant to the selected system attribute, from a plurality of parameters available for a variety of system attributes;causing display of the subset of the parameters on the computing device;receiving, from the computing device, values corresponding to each parameter of the subset of the parameters relevant to the selected system attribute;storing, as custom parameters in a database, the values corresponding to each of the subset of the parameters relevant to the selected system attribute;retrieving standard parameters from a set of tables in the database;retrieving the custom parameters from the database;inputting the standard parameters and custom parameters into a decision and execution hierarchy having a plurality of levels for execution, wherein a final level in the decision and execution hierarchy comprises a plurality of scripts for execution; andexecuting a subset of the plurality of scripts for execution based on traversing the decision and execution hierarchy using the standard parameters and custom parameters to install and configure the new computing system for the given entity. Example 10. A system according to any of the previous examples, wherein the custom parameters are parameters that are used to build the new computing system as a custom computing system. Example 11. A system according to any of the previous examples, wherein the standard parameters are parameters that are standard for building any final computing system. Example 12. A system according to any of the previous examples, wherein an input form is generated for each of the plurality of scripts, the input form comprising the parameters for execution of a respective script. Example 13. 
A system according to any of the previous examples, wherein after successful execution of each script of the subset of the plurality of scripts, the operations comprise: performing a quality check to determine whether any errors are present; based on determining at least one error is present, determining an error type; andexecuting a self-healing process specific for the error type. Example 14. A system according to any of the previous examples, further comprising:determining that the error type is a non-fatal error; andgenerating a quality issue and sending a notification corresponding to the error type. Example 15. A system according to any of the previous examples, further comprising:determining that the error type is fatal, indicating that completion of a script is a prerequisite for a build of the new computing system to proceed; andstopping execution of the subset of the plurality of scripts. Example 16. A system according to any of the previous examples, wherein each level of the plurality of levels before the final level comprises sublevels, wherein each sublevel is a decision point for moving to a next level of the plurality of levels. Example 17. A non-transitory computer-readable medium comprising instructions stored thereon that are executable by at least one processor to cause a computing device to perform operations comprising:receiving, from a computing device, a selected system attribute to be used to build a new computing system for a given entity;generating a subset of parameters relevant to the selected system attribute, from a plurality of parameters available for a variety of system attributes;causing display of the subset of the parameters on the computing device;receiving, from the computing device, values corresponding to each parameter of the subset of the parameters relevant to the selected system attribute;storing, as custom parameters in a database, the values corresponding to each of the subset of the parameters relevant to the selected system attribute;retrieving standard parameters from a set of tables in the database;retrieving the custom parameters from the database;inputting the standard parameters and custom parameters into a decision and execution hierarchy having a plurality of levels for execution, wherein a final level in the decision and execution hierarchy comprises a plurality of scripts for execution; andexecuting a subset of the plurality of scripts for execution based on traversing the decision and execution hierarchy using the standard parameters and custom parameters to install and configure the new computing system for the given entity. Example 18. A non-transitory computer-readable medium according to any of the previous examples, wherein an input form is generated for each of the plurality of scripts, the input form comprising the parameters for execution of a respective script. Example 19. 
A non-transitory computer-readable medium according to any of the previous examples, wherein after successful execution of each script of the subset of the plurality of scripts, the operations comprise:performing a quality check to determine whether any errors are present;based on determining at least one error is present, determining an error type;executing a self-healing process specific for the error type;based on determining that the error type is a non-fatal error, generating a quality issue and sending a notification corresponding to the error type; andbased on determining that the error type is fatal, indicating that completion of a script is a prerequisite for a build of the new computing system to proceed, stopping execution of the subset of the plurality of scripts. Example 20. A non-transitory computer-readable medium according to any of the previous examples, wherein each level of the plurality of levels before the final level comprises sublevels, wherein each sublevel is a decision point for moving to a next level of the plurality of levels. FIG.12is a block diagram1200illustrating software architecture1202, which can be installed on any one or more of the devices described above. For example, in various embodiments, client devices110and servers and systems130,102,120,122, and124may be implemented using some or all of the elements of software architecture1202.FIG.12is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture1202is implemented by hardware such as machine1300ofFIG.13that includes processors1310, memory1330, and I/O components1350. In this example, the software architecture1202can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture1202includes layers such as an operating system1204, libraries1206, frameworks1208, and applications1210. Operationally, the applications1210invoke application programming interface (API) calls1212through the software stack and receive messages1214in response to the API calls1212, consistent with some embodiments. In various implementations, the operating system1204manages hardware resources and provides common services. The operating system1204includes, for example, a kernel1220, services1222, and drivers1224. The kernel1220acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel1220provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1222can provide other common services for the other software layers. The drivers1224are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers1224can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries1206provide a low-level common infrastructure utilized by the applications1210. 
The libraries1206can include system libraries1230(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1206can include API libraries1232such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and in three dimensions (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1206can also include a wide variety of other libraries1234to provide many other APIs to the applications1210. The frameworks1208provide a high-level common infrastructure that can be utilized by the applications1210, according to some embodiments. For example, the frameworks1208provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks1208can provide a broad spectrum of other APIs that can be utilized by the applications1210, some of which may be specific to a particular operating system1204or platform. In an example embodiment, the applications1210include a home application1250, a contacts application1252, a browser application1254, a book reader application1256, a location application1258, a media application1260, a messaging application1262, a game application1264, and a broad assortment of other applications such as third-party applications1266and1267. According to some embodiments, the applications1210are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1210, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1266(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1266can invoke the API calls1212provided by the operating system1204to facilitate functionality described herein. FIG.13is a block diagram illustrating components of a machine1300, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.13shows a diagrammatic representation of the machine1300in the example form of a computer system, within which instructions1316(e.g., software, a program, an application1210, an applet, an app, or other executable code) for causing the machine1300to perform any one or more of the methodologies discussed herein can be executed. 
In alternative embodiments, the machine1300operates as a standalone device or can be coupled (e.g., networked) to other machines. In a networked deployment, the machine1300may operate in the capacity of a server machine or system130,102,120,122,124, etc., or a client device110in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1300can comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1316, sequentially or otherwise, that specify actions to be taken by the machine1300. Further, while only a single machine1300is illustrated, the term “machine” shall also be taken to include a collection of machines1300that individually or jointly execute the instructions1316to perform any one or more of the methodologies discussed herein. In various embodiments, the machine1300comprises processors1310, memory1330, and I/O components1350, which can be configured to communicate with each other via a bus1302. In an example embodiment, the processors1310(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) include, for example, a processor1312and a processor1314that may execute the instructions1316. The term “processor” is intended to include multi-core processors1310that may comprise two or more independent processors1312,1314(also referred to as “cores”) that can execute instructions1316contemporaneously. AlthoughFIG.13shows multiple processors1310, the machine1300may include a single processor1310with a single core, a single processor1310with multiple cores (e.g., a multi-core processor1310), multiple processors1312,1314with a single core, multiple processors1312,1314with multiple cores, or any combination thereof. The memory1330comprises a main memory1332, a static memory1334, and a storage unit1336accessible to the processors1310via the bus1302, according to some embodiments. The storage unit1336can include a machine-readable medium1338on which are stored the instructions1316embodying any one or more of the methodologies or functions described herein. The instructions1316can also reside, completely or at least partially, within the main memory1332, within the static memory1334, within at least one of the processors1310(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1300. Accordingly, in various embodiments, the main memory1332, the static memory1334, and the processors1310are considered machine-readable media1338. As used herein, the term “memory” refers to a machine-readable medium1338able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory.
While the machine-readable medium1338is shown, in an example embodiment, to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions1316. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions1316) for execution by a machine (e.g., machine1300), such that the instructions1316, when executed by one or more processors of the machine1300(e.g., processors1310), cause the machine1300to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory (e.g., flash memory), an optical medium, a magnetic medium, other non-volatile memory (e.g., erasable programmable read-only memory (EPROM)), or any suitable combination thereof. The term “machine-readable medium” specifically excludes non-statutory signals per se. The I/O components1350include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. In general, it will be appreciated that the I/O components1350can include many other components that are not shown inFIG.13. The I/O components1350are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components1350include output components1352and input components1354. The output components1352include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor), other signal generators, and so forth. The input components1354include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In some further example embodiments, the I/O components1350include biometric components1356, motion components1358, environmental components1360, or position components1362, among a wide array of other components. For example, the biometric components1356include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. 
The motion components1358include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components1360include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensor components (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1362include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication can be implemented using a wide variety of technologies. The I/O components1350may include communication components1364operable to couple the machine1300to a network1380or devices1370via a coupling1382and a coupling1372, respectively. For example, the communication components1364include a network interface component or another suitable device to interface with the network1380. In further examples, communication components1364include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices1370may be another machine1300or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, in some embodiments, the communication components1364detect identifiers or include components operable to detect identifiers. For example, the communication components1364include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as a Universal Product Code (UPC) bar code, multi-dimensional bar codes such as a Quick Response (QR) code, Aztec Code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components1364, such as location via Internet Protocol (IP) geo-location, location via WI-FI® signal triangulation, location via detecting a BLUETOOTH® or NFC beacon signal that may indicate a particular location, and so forth. 
In various example embodiments, one or more portions of the network1380can be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks. For example, the network1380or a portion of the network1380may include a wireless or cellular network, and the coupling1382may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling1382can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. In example embodiments, the instructions1316are transmitted or received over the network1380using a transmission medium via a network interface device (e.g., a network interface component included in the communication components1364) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, in other example embodiments, the instructions1316are transmitted or received using a transmission medium via the coupling1372(e.g., a peer-to-peer coupling) to the devices1370. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions1316for execution by the machine1300, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Furthermore, the machine-readable medium1338is non-transitory (in other words, not having any transitory signals) in that it does not embody a propagating signal. However, labeling the machine-readable medium1338“non-transitory” should not be construed to mean that the medium is incapable of movement; the medium1338should be considered as being transportable from one physical location to another. Additionally, since the machine-readable medium1338is tangible, the medium1338may be considered to be a machine-readable device. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. 
Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
11861364
DETAILED DESCRIPTION FIG.1Aillustrates an example computer system101that facilitates enforcing shadow stack violations at module granularity. Computer system101comprises or utilizes a special-purpose or general-purpose computer hardware, such as, for example, one or more processors102, system memory103, and durable storage104, which are communicatively coupled using one or more communications buses105. Embodiments within the scope of the present invention include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that are accessible by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions and/or data structures are computer storage media. Computer-readable media that carry computer-executable instructions and/or data structures are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media. Computer storage media are physical storage media (e.g., system memory103and/or durable storage104) that store computer-executable instructions and/or data structures. Physical storage media include computer hardware, such as RAM, ROM, EEPROM, solid-state drives (“SSDs”), flash memory, phase-change memory (“PCM”), optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage device(s) which can be used to store program code in the form of computer-executable instructions or data structures, which can be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. Transmission media can include a network and/or data links that can be used to carry program code in the form of computer-executable instructions or data structures, and which can be accessed by a general-purpose or special-purpose computer system. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer system, the computer system may view the connection as transmission media. Combinations of the above should also be included within the scope of computer-readable media. Further, upon reaching various computer system components, program code in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network, or data link can be buffered in RAM within a network interface module, and then eventually transferred to computer system RAM (e.g., system memory103) and/or to less volatile computer storage media (e.g., durable storage104) at the computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media. 
Computer-executable instructions comprise, for example, instructions and data which, when executed at one or more processors, cause a general-purpose computer system, special-purpose computer system, or special-purpose processing device to perform a certain function or group of functions. Computer-executable instructions may be, for example, machine code instructions (e.g., binaries), intermediate format instructions such as assembly language, or even source code. Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. As such, in a distributed system environment, a computer system may include a plurality of constituent computer systems. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Those skilled in the art will also appreciate that the invention may be practiced in a cloud computing environment. Cloud computing environments may be distributed, although this is not required. When distributed, cloud computing environments may be distributed internationally within an organization and/or have components possessed across multiple organizations. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). The definition of “cloud computing” is not limited to any of the other numerous advantages that can be obtained from such a model when properly deployed. A cloud computing model can be composed of various characteristics, such as on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud computing model may also come in the form of various service models such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). The cloud computing model may also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. Some embodiments, such as a cloud computing environment, may comprise a system that includes one or more hosts that are each capable of running one or more virtual machines. During operation, virtual machines emulate an operational computer system, supporting an operating system and perhaps one or more other applications as well. In some embodiments, each host includes a hypervisor that emulates virtual resources for the virtual machines using physical resources that are abstracted from view of the virtual machines. The hypervisor also provides proper isolation between the virtual machines. 
Thus, from the perspective of any given virtual machine, the hypervisor provides the illusion that the virtual machine is interfacing with a physical resource, even though the virtual machine only interfaces with the appearance (e.g., a virtual resource) of a physical resource. Examples of physical resources include processing capacity, memory, disk space, network bandwidth, media drives, and so forth. As shown inFIG.1A, each processor102includes (among other things) one or more processing units106(e.g., processor cores), each of which loads and executes machine code instructions from system memory103(usually via one or more processor caches, not shown). In some embodiments, processor(s)102include hardware and/or microcode that provide shadow stack support107by the processor102. The particular functionality of processor-based shadow stack support107can vary depending on design choices, but example functionality includes the ability to allocate and protect memory regions for shadow stack use (e.g., via page table mappings), the ability to “push” return addresses onto a shadow stack during execution of a procedure prologue (e.g., as part of execution of a “call” instruction), the ability to “pop” a return address from a shadow stack during execution of a procedure epilogue (e.g., as part of execution of a “return” instruction), the ability to compare one return address popped from a call stack with another return address popped from a shadow stack (e.g., as part of execution of a “return” instruction), and/or the ability to trigger an exception when there is a mismatch between the return address popped from a call stack and the return address popped from a shadow stack. However, it will be appreciated that the embodiments herein could be implemented without processor-based shadow stack support107. For example, functionality of shadow stack support107could instead be provided by an operating system and/or could be compiled into procedure prologues and epilogues of an application binary. As illustrated, the durable storage104stores computer-executable instructions and/or data structures representing executable software components; correspondingly, during execution of this software at the processor(s)102, one or more portions of these computer-executable instructions and/or data structures are loaded into system memory103. For example, the durable storage104is shown as storing computer-executable instructions and/or data structures corresponding to an operating system108, one or more module(s)109, and one or more application(s)110. The durable storage104also stores data, such as rules111and logs112. The system memory103is capable of storing a broad variety of data, but for purposes of illustrating the embodiments herein, the system memory103is shown as storing at least a portion of code of at least one running application (i.e., application code110a) and at least a portion of code of a module called by that running application (i.e., module code109a), as well as memory allocated to call stacks113and shadow stacks114(including, for example, a call stack and a shadow stack for the running application). FIG.1Billustrates details of operating system108, including example components that facilitate enforcing shadow stack violations at module granularity, according to some embodiments.
It will be appreciated that the depicted components—including their identity, sub-components, and arrangement—are presented merely as an aid in describing various embodiments of operating system108described herein, and that these components are non-limiting to how software and/or hardware might implement various embodiments described herein, or of the particular functionality thereof. Operating system108is illustrated as including kernel115that includes a task manager116which is responsible for launching and managing execution of processes (including one or more threads) at processor(s)102based on code of operating system108and application(s)110. The kernel115is also shown as including a shadow stack violation exception handler117(referred to hereinafter as exception handler117) that processes exceptions when a shadow stack violation is detected during execution of a thread. In some embodiments, execution of the exception handler117is triggered by the shadow stack support107of processor(s)102(e.g., via a hardware interrupt) when a mismatch between a call stack return address and a shadow stack return address is detected. However, the exception handler117could be triggered in other ways (e.g., via a software interrupt or exception), such as by code that executes as part of a procedure epilogue and that determines whether or not a call stack return address and a shadow stack return address match. Thus, the exception handler117is capable of being utilized in a wide variety of environments, including those that include hardware support for shadow stacks (e.g., shadow stack support107), and those that lack hardware shadow stack support (e.g., in which shadow stack functionality is implemented entirely in software, such as via specially-configured procedure prologues and epilogues). It is noted that the description of exception handler117herein is focused on handling of exceptions when a module called by a primary application binary causes a shadow stack violation. It will be appreciated that the exception handler117could be invoked in other situations as well, such as when the primary application binary, itself, causes a shadow stack violation. Thus, in addition to the description herein of the handling of exceptions when a module called by a primary application binary causes a shadow stack violation, the exception handler117could also be configured to handle other situations, such as when the primary application binary, itself, causes a shadow stack violation. It will be appreciated, therefore, that the description of the exception handler117herein is not limited to those scenarios and functions specifically described herein. The operating system108is also shown as including a logger122, and as potentially including a rule generator123. In general, the logger122generates log entries (e.g., which are stored in logs112) in connection with operation of the exception handler117. In general, the rule generator123, if present, processes logs112, and/or sends logs112to a remote system for processing. As a result of rule generator123, computer system101generates and/or receives rules111that are usable by the exception handler117. Note that while the rule generator123is depicted, for ease in illustration, as being part of operating system108, in some embodiments the rule generator123is part of a separate application110(e.g., a system security application, such as an antivirus application).
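To make the comparison mechanism concrete, here is a software-only Python sketch of the shadow stack checking described above. In practice this work is done by processor-based shadow stack support107or by compiler-emitted prologues and epilogues, so the class names and the exception type below are illustrative assumptions only.

```python
# Hypothetical software-only model of shadow stack checking: every call pushes the
# return address onto both the call stack and the shadow stack; every return pops
# both and raises on a mismatch (the event that would invoke exception handler 117).
class ShadowStackViolation(Exception):
    def __init__(self, call_addr: int, shadow_addr: int):
        super().__init__(
            f"return address {call_addr:#x} != shadow copy {shadow_addr:#x}")
        self.call_addr, self.shadow_addr = call_addr, shadow_addr

class SimulatedThread:
    def __init__(self):
        self.call_stack = []     # frames as the program sees them (writable)
        self.shadow_stack = []   # protected copies of the return addresses

    def call(self, return_address: int) -> None:
        # Procedure prologue: push the return address onto both stacks.
        self.call_stack.append(return_address)
        self.shadow_stack.append(return_address)

    def ret(self) -> int:
        # Procedure epilogue: pop both and compare before transferring control.
        call_addr = self.call_stack.pop()
        shadow_addr = self.shadow_stack.pop()
        if call_addr != shadow_addr:
            raise ShadowStackViolation(call_addr, shadow_addr)
        return call_addr

t = SimulatedThread()
t.call(0x401000)
t.call(0x402000)
t.call_stack[-1] = 0x666000      # a bug or ROP-style overwrite of the top frame
try:
    t.ret()
except ShadowStackViolation as e:
    print("violation detected:", e)   # handler 117 would decide how to respond
```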
Further operation of the components ofFIGS.1A,1Bis now described in connection withFIGS.2A-2D(which illustrate an example operation of enforcement of shadow stack violations at module granularity), and in connection withFIG.3(which illustrates a flow chart of an example method300for enforcing a shadow stack violation at module granularity) andFIG.4(which illustrates a flowchart of an example method400for making a shadow stack a circular stack in an audit mode). The following discussion refers to a number of methods and method acts. Although the method acts may be discussed in a certain order, or may be illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is specifically described as being dependent on another act being completed prior to the act being performed. Referring initially toFIG.3, as shown, method300comprises an act301of initiating execution of a thread at a processor, based on an application binary having shadow stack enforcement enabled. In some embodiments, act301comprises initiating execution of a thread at the processor, including initiating execution of first executable code of an application binary that calls second executable code of an external module, the application binary having been enabled for shadow stack enforcement. For example, in an embodiment the task manager116initiates execution of one of applications110at processor(s)102. Initiating execution includes the task manager116causing application code110afor the application110to be loaded into system memory103, creating kernel data structures supporting execution of one or more threads at processing unit(s)106, and creating (or initiating creation of) a call stack (i.e., within call stacks113) for each of these one or more threads. In addition, since shadow stack enforcement is enabled for the application's binary, initiating execution also includes the task manager116creating (or initiating creation of) a shadow stack (i.e., within shadow stacks114) for each of these one or more threads. In a more particular example,FIG.2Aillustrates an example200athat depicts a representation of an application binary201. While binary formats vary from operating system to operating system, in general, a binary includes a header describing properties and layout of the binary, and a body that comprises application code and data (e.g., in the form of a text segment, a data segment, etc.). Thus, inFIG.2A, application binary201is shown as including a header201aportion and a body201bportion. In embodiments, as part of initiating execution of an application's binary (e.g., application binary201), the task manager116reads the binary's header (e.g., header201a) to obtain binary properties and layout, and loads at least a portion of the binary's body (e.g., body201b) into system memory103(e.g., at least a portion of which could correspond to application code110a). InFIG.2A, header201ais shown as including a checkbox containing a checkmark. InFIG.2A, this checkbox represents a flag or other indicator of whether or not the application binary201was compiled with support for, and requests enforcement of, shadow stacks. Note that it is possible that a binary could be compiled to support shadow stacks (i.e., the binary is shadow stack aware/compliant), but not actually request their enforcement (i.e., the binary has not opted-in to shadow stack enforcement).
Thus, while header201ais shown as including a binary indicator (i.e., checked or not), it is possible for header201ato have a more comprehensive set of flags/indicators. In example200a, since header201aincludes a checkmark, shadow stack enforcement is enabled for application binary201and/or the binary has opted-in to shadow stack enforcement. Thus, when the task manager116initiates execution of application binary201, task manager116creates shadow stack(s) for any thread(s) created for that binary.FIG.2Ashows that the task manager116has initiated execution of at least one thread for application binary201, since it has created a call stack203and a shadow stack204corresponding to that initiated thread. It is noted that, inFIG.2B, call stack203and shadow stack204“grow” downward; that is, new information is pushed onto the bottom of these stacks, such that the “top” item on the stack is visually shown at the bottom of the call stack203and the shadow stack204. In embodiments, when an application binary is loaded, the task manager116identifies any module(s) that will be accessed by that binary during its execution (e.g., based on information in header201a).FIG.2Adepicts a module202that is called by application binary201. Similar to application binary201, module202is shown as including a header202aportion and a body202bportion. At least a portion of the code of body202bcan be loaded to system memory (e.g., module code109a) in connection with initiating execution of application binary201, or at some later time. InFIG.2A, header202ais shown as having an empty checkbox. Thus, since header202alacks a checkmark in the checkbox, shadow stack enforcement is not supported by module202and/or the module has not opted-in to shadow stack enforcement. However, since shadow stack enforcement is enabled for application binary201, and since the code of module202executes within the context of call stack203, computer system101also maintains shadow stack204during execution of the code of module202. Notably, in connection with initiating execution of a thread for application binary201, the task manager116can store a record (e.g., as part of kernel thread data structures) of whether or not shadow stack enforcement is enabled and/or requested for each of application binary201and module202. Thus, in some embodiments, act301comprises storing a record of whether or not the external module is enabled for shadow stack enforcement in connection with initiating execution of the thread. FIG.2Ashows a state of call stack203and shadow stack204after the initiated thread has executed for at least some period of time. For example, call stack203is shown as including four stack frames203a-203dthat were created in connection with execution of code of application binary201(e.g., due to internal procedure calls within that binary), as well as two stack frames203e-203fthat were subsequently created in connection with execution of code of module202(e.g., due to application binary201calling a procedure within module202, and due to module202calling an internal procedure). Each of these stack frames203a-203fis illustrated as storing a corresponding return address (i.e., return address205afor stack frame203a, return address205bfor stack frame203b, return address205cfor stack frame203c, return address205dfor stack frame203d, return address205efor stack frame203e, and return address205ffor stack frame203f). Correspondingly, shadow stack204is shown as also storing these same return addresses205, in the same order.
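As a rough illustration of the record mentioned above, the following sketch shows one hypothetical shape that such a kernel-side record could take when the thread is created. The structure and field names are assumptions made only for illustration, not the actual kernel data structures.

    #include <stdbool.h>

    struct shadow_stack_record {            /* hypothetical per-thread bookkeeping */
        bool binary_enforcement_enabled;    /* from header 201a: compiled for and opted in */
        bool module_enforcement_enabled;    /* from header 202a of the called module 202 */
    };

    static void record_enforcement_state(struct shadow_stack_record *rec,
                                         bool binary_opted_in,
                                         bool module_opted_in)
    {
        /* Captured once at thread creation (act 301), so the exception
         * handler need not re-read binary or module headers later. */
        rec->binary_enforcement_enabled = binary_opted_in;
        rec->module_enforcement_enabled = module_opted_in;
    }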
Since the “stack” of return addresses on call stack203matches the “stack” of return addresses on shadow stack204, as ofFIG.2Athere would be no shadow stack violations if stack frames203a-203fwere to be popped from the call stack203. Turning toFIG.2B, the current “top” stack frame203fon call stack203now contains a new return address (i.e., return address205g), which differs from the “top” return address on shadow stack204(i.e., return address205f). This new return address205gcould have been written to stack frame203fdue to a bug within module202, due to intentional ROP behavior by module202, or even due to a malicious attack targeting module202. Turning toFIG.2C, stack frame203fis now being “popped” from call stack203(e.g., due to execution of a procedure epilogue within module202). As a result, the return address205fis also popped from shadow stack204. By operation of shadow stack logic (e.g., as part of a procedure epilogue and/or by shadow stack support107within processor(s)102), a mismatch is detected between return address205gand return address205f. As a result, a shadow stack violation is detected (e.g., by procedure epilogue code and/or by shadow stack support107), triggering a hardware or software exception. Returning toFIG.3, method300also comprises an act302of processing a shadow stack violation based on execution of the thread. In some embodiments, act302comprises, based at least on execution of the thread at the processor, processing an exception triggered by a mismatch between a first return address popped from a call stack corresponding to the thread and a second return address popped from a shadow stack corresponding to the thread. In an example, as a result of an exception triggered by the return address mismatch described in connection withFIGS.2B and2C, the exception handler117is invoked to handle the exception. The exception handler117is shown as including a variety of components that are usable for handling the exception. These components are described in connection with acts303-306, which are shown inFIG.3as sub-acts of act302. Act302comprises an act303of determining that the exception was triggered by execution of a module called by the application binary. In some embodiments, act303comprises determining that the exception resulted from execution of an instruction in the second executable code of the external module. For example, the exception handler117is shown as including a module identifier118. In embodiments, the module identifier118operates to determine an identity of a module that triggered the exception, if any. In some embodiments, the module identifier118operates by identifying a memory address corresponding to a “calling site” of an instruction that triggered the exception. For instance, if the instruction that triggered the exception was a “return” instruction in a procedure epilogue, the memory address corresponding to the “calling site” of this instruction is the memory address at which the “return” instruction is stored in system memory. If this “return” instruction is part of a procedure epilogue of the executing application binary (e.g., application binary201), then the instruction's memory address would be within a range of memory addresses occupied by application code110ain system memory103; in this case, the module identifier118would determine that the calling site address is part of the application binary.
On the other hand, if this “return” instruction is part of a procedure epilogue of the module called by the application binary (e.g., module202), then the instruction's memory address would be within a range of memory addresses occupied by module code109ain system memory103; in this case, the module identifier118would determine that the calling site address is part of the module. In additional, or alternative, embodiments, the module identifier118operates by identifying a memory address corresponding to a memory address that was the “target address” of the instruction that triggered the exception. For instance, if the instruction that triggered the instruction was a “return” instruction in a procedure epilogue, the memory address corresponding to the “target address” of this instruction is the saved return address in the call stack frame for this procedure. If this saved return instruction is part of the executing application binary (e.g., application binary201), then the instruction's memory address would be within a range of memory addresses occupied by application code110ain system memory103; in this case, the module identifier118would determine that the target site address is part of the application binary. On the other hand, if this saved return instruction is part of the module called by the application binary (e.g., module202), then the instruction's memory address would be within a range of memory addresses occupied by module code109ain system memory103; in this case, the module identifier118would determine that the target site address is part of the module. As will appreciated, depending on the nature of the stack frame being removed, the module identifier118could identify the same entity for both of the calling site and the target address, or the module identifier118could identify different entities for each of the calling site and the target address. For example, a “return” instruction within module202could have as its target address a return address within module202, in which case both of the calling site and the target address would correspond to the same entity. In another example, a “return” instruction within module202could have as its target a return address within application binary201, or some other entity, in which case the calling site and the target address would correspond to different entities. In view of the foregoing discussion of the module identifier118it will be appreciated that, in some embodiments of act302, determining that the exception resulted from execution of the instruction in the second executable code of the external module comprises determining one or more of, (i) that a calling site address of the instruction corresponds to the second executable code of the external module, or (ii) that a target address of the instruction corresponds to the second executable code of the external module. Referring again to the example ofFIG.2C, during processing of an exception triggered by removal of stack frame203f, in embodiments of act303the module identifier118identifies module202as corresponding to a calling site address (i.e., since it would be a procedure epilogue of module202that removes the procedure epilogue). Depending on which code return address205gcorresponds to (if any), in some embodiments the module identifier118identifies a target site address as corresponding to module202, to application binary201, to some other entity, or to no entity at all. 
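The calling-site and target-address tests described above amount to asking which loaded image, if any, contains a given address. A minimal sketch of that test follows; the structures and names are hypothetical and stand in for whatever loader bookkeeping the operating system actually maintains.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct loaded_image {          /* hypothetical: one record per loaded binary or module */
        uintptr_t base;            /* start of the image's code in system memory */
        uintptr_t size;            /* size of that code region */
    };

    static bool address_within(const struct loaded_image *img, uintptr_t addr)
    {
        return addr >= img->base && addr < img->base + img->size;
    }

    /* Returns the image (if any) that owns a calling site or target address. */
    static const struct loaded_image *identify_image(const struct loaded_image *images,
                                                     int count, uintptr_t addr)
    {
        for (int i = 0; i < count; i++) {
            if (address_within(&images[i], addr))
                return &images[i];
        }
        return NULL;   /* address belongs to no tracked image */
    }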
As noted above, the exception handler117could be invoked in situations in which the exception was not triggered by a module, such as when the primary application binary, itself, causes a shadow stack violation. In these situations, the exception handler117might proceed to enforcing the shadow stack violation by terminating the thread, or by permitting the thread to continue executing if an audit mode is enabled. Audit mode is described later in connection with act306. Assuming a module is identified in act303, act302also comprises an act of304of determining whether the module has shadow stack enforcement enabled. In some embodiments, act304comprises determining whether or not the external module is enabled for shadow stack enforcement. For example, the exception handler117is shown as including a shadow stack enforcement identifier119(referred to hereinafter as enforcement identifier119). In embodiments, the enforcement identifier119determines whether or not the module identified in act303has shadow stack enforcement enabled. For example, referring to module202, the enforcement identifier119would determine that module202does not have shadow stack enforcement enabled (i.e., because there is no check mark in the header202aof module202). As discussed in connection withFIG.2A, it is possible that a binary was compiled with support for shadow stacks (i.e., the binary is shadow stacks aware/compliant), but without a request for shadow stack enforcement (i.e., the binary has not opted-in to shadow stack enforcement). Thus, in some embodiments of act304, the enforcement identifier119determines one or more of (i) whether or not the module was compiled with support for shadow stacks, or (ii) whether or not the module opted-in to shadow stack enforcement. In some embodiments, an external module is enabled for shadow stack enforcement when (i) the external module is compiled for shadow stack compliance (i.e., if there is no option to opt-in or opt-out), or (ii) when the external module is compiled to opt-in to shadow stack enforcement (i.e., if there is an option to opt-in or opt-out). Similarly, in some embodiments, an external module is not enabled for shadow stack enforcement when the external module is not compiled for shadow stack compliance (i.e., if there is no option to opt-in or opt-out), or (ii) when the external module is compiled to opt-out of shadow stack enforcement (i.e., if there is an option to opt-in or opt-out). In some embodiments, the enforcement identifier119consults the header of the module identified in act303to determine whether or not the module is enabled for shadow stack enforcement. However, as discussed, in some embodiments act301comprises storing a record of whether or not the external module is enabled for shadow stack enforcement in connection with initiating execution of the thread. In these embodiments, determining whether or not the external module is enabled shadow stack enforcement can comprise the enforcement identifier119consulting this record, rather than needing to actually look at the module header itself. In some embodiments, act302also comprises an act of305of identifying an enforcement rule for the module. In some embodiments, act305comprises identifying a rule based at least on an identity of the external module. For example, the exception handler117is shown as including a rule identifier120. 
In embodiments, the rule identifier120consults rules111to determine if there exists a rule that specifies whether the exception should be permitted for the external module (i.e., in which case the thread should be permitted to continue executing), or whether the exception for should be disallowed for the external module (i.e., in which case the thread should be terminated). In embodiments, rules111are created based on analysis of prior logging (i.e., by logger122) of shadow stack violations involving the external module at computer system101and/or at another computer system. In embodiments, the rule identifier120identifies a rule further based an identity of the application binary (i.e., in addition to an identity of the external module). Thus, in some embodiments the rules111are specific to a particular combination of application binary and external module. Act302also comprises an act of306of enforcing a shadow stack violation policy for the module. In some embodiments, act306comprises, based on having determined whether the external module is enabled for shadow stack enforcement, performing one of terminating the thread (i.e., act306a) or permitting the thread to continue executing (i.e., act306b). For example, the exception handler117is shown as including a shadow stack policy enforcer121(referred to hereinafter as policy enforcer121). In embodiments, the policy enforcer121either terminates the thread or permits the thread to continue executing depending on whether or not shadow stack enforcement is enabled for the external module, whether or not a rule identified in act305specifies that shadow stack violations should be permitted, and/or whether or not an enforcement mode or an audit mode is enabled. In some embodiments, when the audit mode is enabled, the second return address205fin the shadow stack204is replaced with the first return address205gin the call stack203, such that the shadow stack violation policy is not enforced in the audit mode. Even though the shadow stack violation policy is not enforced in the audit mode, the audit mode is still useful for logging the shadow stack violation, which will be further described below. As shown, act306can invoke act306ato terminate the thread based at least on (i) shadow stack enforcement being enabled for the module, or (ii) shadow stack enforcement not being enabled for the module but a rule (i.e., accessed in act305) specifies that the shadow stack violation should not be permitted. For example, if shadow stack enforcement is enabled for the module, then the module has requested that shadow stack violations by the module be enforced, so the policy enforcer121terminates the thread. If, on the other hand, shadow stack enforcement is not enabled for the module, then the policy enforcer121can default to permitting the thread to continue executing but override that default if a rule so specifies; thus, the module may not be compiled for shadow stack compliance (or may opt-out of shadow stack enforcement), but the policy enforcer121may nonetheless enforce shadow stack violations by the module. On the other hand, and as shown, act306can invoke act306bto permit the thread to continue executing based at least on (i) shadow stack enforcement not being enabled for the module (and there are no rules for the module), or (ii) shadow stack enforcement not being enabled for the module and a rule specifies that the shadow stack violation should be permitted. 
For example, if shadow stack enforcement is not enabled for the module, then the policy enforcer121can default to permitting the thread to continue executing. In addition, a rule (i.e., rules111) could further specify that shadow stack violations should be permitted for the module. Thus, in some embodiments of act306b, the computer system permits the thread to continue executing when the external module is not enabled for shadow stack enforcement and when the rule specifies that a shadow stack violation should be permitted. Act306aalso shows that the thread could be terminated based at least on enforcement mode being enabled, while act306balso shows that the thread could be permitted to continue executing based at least on audit mode being enabled. In embodiments, the policy enforcer121operates in either an enforcement mode or an audit mode—either globally, or on a per-thread, per-application binary, and/or per-module basis. When operating in enforcement mode, the policy enforcer121terminates a thread, or permits it to execute, based on the policies already described in connection with acts306aand306b. When operating in audit mode, on the other hand, the policy enforcer121permits a thread to continue executing even in cases where it would normally be terminated under the policies described in connection with act306a. When combined with logging by the logger122, audit mode is useful for logging shadow stack violations by executing code (whether that be code within a primary application binary and/or within an external module called by that application binary) without actually terminating a thread when a violation occurs. Method300also comprises an act307of logging the exception. For example, the logger122can log one or more data items about the exception into logs112. In embodiments, act307is performed both when the policy enforcer121operates in enforcement mode and when it operates in audit mode, though it could be configured to refrain from logging in some situations. As examples, in various embodiments the logger122logs one or more of a calling site address, a target address, an identifier of the external module, an identifier of a process to which the thread belongs, an identifier of the thread, an identifier of an application binary, whether enforcement mode or audit mode is enabled, whether a shadow stack violation was enforced or permitted, etc. In embodiments, when logging an identifier of an external module or an application binary, logger122logs a filesystem path to the external module or the application binary. In some embodiments, the logger122preserves user privacy in these situations by removing or obfuscating personally identifiable information, such as a path portion corresponding to a user's home or profile directory. As mentioned, the rule generator123(if present) processes logs112, and/or sends logs112to a remote system for processing, in order to generate and/or receive rules111that are usable by the exception handler117. As also mentioned, the rule identifier120(if present) consults these rules111to determine if an exception should be permitted for an external module based on an identity of the external module, potentially in combination with an identity of the application binary. The rule generator123(together with any remote system(s) involved) can use a vast variety of techniques to process logs112in order to generate rules111, including the use of any appropriate machine learning techniques.
In embodiments, the rule generator123can generate rules based on identifying modules that frequently (or infrequently) cause shadow stack violations, identifying application/module combinations frequently (or infrequently) shadow stack violations, identifying situations in which an allowed shadow stack violation later caused a thread to crash, identifying situations in suspicious behavior was observed after an allowed shadow stack violation, etc. Accordingly, the embodiments herein enforce shadow stack violations at module granularity, rather than at the granularity of an entire thread (or process). Thus, rather than simply terminating a thread/process when a shadow stack violation is detected on the thread's stack, the embodiments herein perform checks to determine if the shadow stack violation occurred during execution of an external module, and if so, whether or not shadow stack enforcement is enabled for that module. If the shadow stack violation occurred during execution of a module, and if shadow stack enforcement is enabled for that module, embodiments proceed to terminate the thread (or the process to which it belongs). However, if shadow stack enforcement is not enabled for that module, some embodiments choose to permit the thread to continue executing, rather than terminating it as would be typical. Enforcement of shadow stack violations at module granularity, rather than at thread/process granularity, can increase overall security in a computer system, as well as increase the adoption of shadow stack technologies. For example, rather than needing to disable shadow stack enforcement on this application due to its interaction with modules that trigger shadow stack violations, the embodiments herein enforce shadow stack violations for the application's code, while permitting shadow stack violations by called module code. In this way, shadow stack enforcement can be enabled for an application even if it calls external modules that intentionally tamper with return addresses or that are not yet shadow stack compatible. Thus, the embodiments herein enable the use of shadow stack enforcement for an application—even in situations where it was previously impractical due to the module(s) upon which the application relies, or due to the environment in which the application executes. Notably, when shadow stack enforcement features are enabled within an ecosystem, executing code is required to utilize its call stack in an expected manner; otherwise, that code may cause a fatal system error, such as a “blue screen,” kernel panic, etc. However, not all code is currently compliant with shadow stack functionality. To ensure an ecosystem and/or drivers are compliant with shadow stack functionality, an audit mode may be enabled to get telemetry on what code breaks when shadow stack enforcement is enabled. In embodiments, when a shadow stack mismatch occurs, the CPU issues an exception (e.g., a control protection exception). In the audit mode, when the exception is issued, instead of enforcing any policies, the mismatched entry in the shadow stack is replaced with the entry in the call stack, and a reporting telemetry may be generated (including, for example, one or more of an application binary identifier, back-trace information from the call stack, back-trace information from the shadow stack, etc.). 
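Pulling together the decision logic of acts306aand306bdescribed above, the following is a condensed, illustrative sketch of the terminate-or-continue decision in C. The enumerations and the function are assumptions made for illustration only, and they omit details such as per-thread versus global mode selection.

    #include <stdbool.h>

    enum rule_action { RULE_NONE, RULE_PERMIT, RULE_DISALLOW };   /* outcome of rule lookup (rules 111) */
    enum verdict { TERMINATE_THREAD, CONTINUE_THREAD };

    static enum verdict enforce_policy(bool module_enforcement_enabled,
                                       enum rule_action rule,
                                       bool audit_mode)
    {
        if (audit_mode)
            return CONTINUE_THREAD;            /* audit mode: log only, never terminate */

        if (module_enforcement_enabled)
            return TERMINATE_THREAD;           /* act 306a: the module asked for enforcement */

        if (rule == RULE_DISALLOW)
            return TERMINATE_THREAD;           /* act 306a: default overridden by a rule */

        return CONTINUE_THREAD;                /* act 306b: default, or rule explicitly permits */
    }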
Examples of application binary identifiers include (but are not limited to) a file name, a hash of a particular portion of the binary, an embedded version metadata of the application binary, and/or a combination thereof. This telemetry can then be utilized by a developer to improve shadow stack compatibility, to add the application binary identifier to a shadow stack block-list, etc. As such, the incompatibilities with shadow stacks can be caught in the audit mode. However, certain shadow stack incompliant software programs, such as (but not limited to) certain games, behave differently, such that a mismatch of return addresses cannot be simply mitigated by replacing the return address in the shadow stack with the return address in the call stack. For example, when some incompliant software programs continuously performs calls without corresponding returns, many addresses are pushed onto both the call stack and the shadow stack. As another example, some incompliant software programs manually adjust their own call stack, such as to allow more repeated function calls. Because such a phenomenon does not trigger the CPU to issue a traditional control protection exception, the shadow stack may overflow its allocated memory buffer and cause a fatal system error. As another example, a software program (that is not aware of the shadow stack) may self-manage its call stack (including return addresses) to avoid deep recursion, causing the call stack to run out of a memory region allocated to the call stack. In some cases, the software program may repeatedly use “call”-like instructions (e.g., function calls) which also use shadow stack without intervening “ret”-like instructions. Since the “ret” like instructions are configured to pop and validate the entries in the shadow stack, such “call”-like instructions without “ret”-like instructions would cause the shadow stack to keep on growing. When such “call”-like instructions or function calls are excessive, their returns can cause exhaustion of the shadow stack (also referred to as “shadow stack overflow”). When the shadow stack overflow occurs, a fatal CPU exception may occur. The embodiments described herein prevent the fatal CPU exception, even though it may mean corrupting the shadow stack (by making a portion of it circular). As another example, rather than using a “ret” instruction, the software program uses a branch instruction to return from a function call. In such a case, the corresponding return address doesn't pop from the shadow stack, and shadow stack overflow may also occur. To solve the problem caused by shadow stack overflow, and to safely enable the shadow stack functionality in an audit mode, at least some embodiments described herein are directed to methods, systems, and computer program products that enable at least a portion of a shadow stack to be a circular stack in an audit mode, such that when the usage of the shadow stack has reached a defined usage threshold, contents in at least a portion of the shadow stack are overwritten. As such, in the audit mode, a computer system is able to get data on certain application binaries (such as, but not limited to, certain drivers) that currently do not work with the shadow stack, while preventing the shadow stack from overflowing a memory region allocated to the shadow stack. FIG.4illustrates a flowchart of an example method400for making a shadow stack a circular stack in an audit mode. The method400includes initiating execution of a thread at a processor (act410). 
In some embodiments, the act410includes initiating execution of executable code of an application binary as part of a thread (act412) and enabling shadow stack functionality of the thread in an audit mode (act414). For example, in an embodiment, the task manager116ofFIG.1Binitiates execution of one of applications110at processors(s)102. Initiating execution includes the task manager116causes application code110afor the application110to be loaded into system memory103, creating kernel data structures supporting execution of one or more threads at processing unit(s)106, and creating (or initiating creation of) a call stack (i.e., within call stacks113) for each of these one or more threads. In some embodiments, an audit mode is enabled by default. During the audit mode, shadow stack functionality is enabled. Since shadow stack functionality is enabled for the application's binary, initiating execution also includes the task manager116creating (or initiating creation of) a shadow stack114for each of these one or more threads. In a more particular example,FIG.2Aillustrates an example200athat depicts a representation of an application binary201. While binary formats vary from operating system to operating system, in general, a binary includes a header describing properties and layout of the binary, and a body that comprises application code and data (e.g., in the form a text segment, a data segment, etc.). Thus, inFIG.2A, application binary201is shown as including a header201aportion and a body201bportion. In embodiments, as part of initiating execution of an application's binary (e.g., application binary201), the task manager116reads the binary's header (e.g., header201a) to obtain binary properties and layout, and loads at least a portion of the binary's body (e.g., body201b) into system memory103(e.g., at least a portion of which could correspond to application code110a).FIG.2Ashows the state after the task manager116has initiated execution of at least one thread for application binary201, and after it created a call stack203and a shadow stack204corresponding to that initiated thread. It is noted that, inFIG.2B, call stack203and shadow stack204“grow” downward; that is, new information is pushed onto the bottom of these stacks, such that the “top” item on the stack is visually shown at the bottom of the call stack203and the shadow stack204. FIG.2Ashows a state of call stack203and shadow stack204after the initiated thread has executed for at least some period of time. For example, call stack203is shown as including four stack frames203a-203dthat were created in connection with execution of code of application binary201(e.g., due to internal procedure calls within that binary), as well as two stack frames203e-203fthat were subsequently created in connection with execution of code of module202(e.g., due to application binary201calling a procedure within module202, and due to module202calling an internal procedure). Each of these stack frames203a-203fis illustrated as storing a corresponding return address (i.e., return address205afor stack frame203a, return address205bfor stack frame203b, return address205cfor stack frame203c, return address205dfor stack frame203d, return address205efor stack frame203e, and return address205ffor stack frame203f). Correspondingly, shadow stack204is shown as also storing these same return addresses205, in the same order. 
Since the “stack” or return addresses on call stack203matches the “stack” of return addresses on shadow stack204, as ofFIG.2A, there would be no shadow stack violations if stack frames203a-203fwere to be popped from the call stack203. Turning toFIG.2B, the current “top” stack frame203fon call stack203now contains a new return address (i.e., return address205g), which differs from the “top” return address on shadow stack204(i.e., return address205f). This new return address205gcould have been written to stack frame203fdue to a bug within module202, due to intentional ROP behavior by module202, or even due to a malicious attack targeting module202. By operation of shadow stack logic or functionality (e.g., as part of a procedure epilogue and/or by shadow stack support107within processor(s)102), a mismatch is detected between return address205gand return address205f. As a result, a shadow stack violation is detected (e.g., by procedure epilogue code and/or by shadow stack support107), triggering a hardware or software exception (such as a control protection exception). Returning toFIG.4, in some embodiments, processing a shadow stack violation in the audit mode (act430) includes replacing the return address in the shadow stack with the return address in the call stack (act432). For example, in the audit mode, when the exception is triggered by a mismatch between a first return address205gin a call stack203corresponding to a thread and a second address205fin a shadow stack204corresponding to the same thread, the second return address205fin the shadow stack204is replaced with the first return address205g. As such, the violation is not enforced in the audit mode. However, telemetries associated with the exception may still be logged in the audit mode. In some embodiments, processing the exception in the audit mode (act430) also includes logging at least one of (1) an identification of a process to which the thread belongs, (2) an identifier of the thread, or (3) an identifier of the application binary (act436). In some embodiments, the application binary is a device driver. Examples of application binary identifiers include (but are not limited to) a file name, a hash of a particular portion of the binary, an embedded version metadata of the application binary, and/or a combination thereof. As discussed above, although the replacing the mismatched entry in the shadow stack with the entry in the call stack can catch incompatibilities with kernel shadow stacks, this mechanism does not handle a shadow stack overflow. For example, when certain software programs, such as (but not limited to) certain games are running in the audit mode, they may continuously perform calls, pushing addresses onto both the call stack and the shadow stack. In some cases, the software program has instructions that manually check if the call stack is nearing its end to prevent the call stack from exceeding its allocated space. When the call stack is nearing its end, the application may manually start using an earlier portion of its own stack. In some cases, the application may not properly unwind the stack using standard methods, and therefore the shadow stack will continue to grow, causing the shadow stack to overflow. For example, rather than using a “ret” instruction, the software program may use a branch instruction to return to the caller of a function. 
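Before turning to the overflow problem, the audit-mode fix-up of act432described above can be pictured as a single overwrite of the mismatching shadow stack entry, as in the following hypothetical sketch; the function and its parameters are illustrative assumptions.

    #include <stdint.h>

    static void audit_mode_fixup(uintptr_t *shadow_top, uintptr_t call_stack_return)
    {
        /* *shadow_top currently holds the mismatching shadow stack address
         * (e.g., return address 205f); replace it with the call stack's
         * address (e.g., return address 205g) so that the thread can keep
         * running and subsequent returns do not re-trigger the violation. */
        *shadow_top = call_stack_return;
        /* Telemetry about the violation can still be logged at this point. */
    }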
In such a case (i.e., when a function returns via a branch rather than a “ret” instruction), the corresponding return address on the shadow stack would not be popped, effectively “leaking” one shadow stack location each time, and shadow stack overflow may occur. To solve the problem caused by shadow stack overflow, at least some embodiments described herein enable at least a portion of the shadow stack to be a circular stack in the audit mode (act420), such that when usage of the shadow stack has reached a defined usage threshold, contents in at least a portion of the shadow stack are overwritten. In some embodiments, the method400further includes determining whether usage of the shadow stack has reached a defined usage threshold (act422). In response to determining that the usage of the shadow stack has reached the defined usage threshold, one or more entries of the shadow stack are overwritten, preventing the shadow stack from overflowing a memory region allocated to the shadow stack (act424). In some embodiments, a flag (e.g., a Boolean value) is set to indicate whether at least one entry of the shadow stack has been overwritten. When the flag indicates that at least one entry of the shadow stack has been overwritten, it is understood that at least a portion of the shadow stack is corrupted; thus, the data contained in at least the portion of the shadow stack is not usable for enforcement purposes. In some embodiments, checking whether the shadow stack has reached the defined threshold is performed during processing of shadow stack violations.FIG.5illustrates a flowchart of an example method500for checking whether the shadow stack has reached the defined threshold during processing of a shadow stack violation, which corresponds to the act430of processing a shadow stack violation in the audit mode inFIG.4. As illustrated inFIG.5, when a shadow stack violation is triggered by a mismatch between a first address in a call stack and a second address in a shadow stack, an exception is received (act510). In response to receiving the exception in the audit mode, the method500includes replacing the second return address in the shadow stack with the first return address (act520). The method500also includes enabling at least a portion of the shadow stack to be a circular stack (act530) and determining whether usage of the shadow stack has reached a defined usage threshold (act540). In response to determining that the usage of the shadow stack has reached the defined usage threshold, one or more entries of the shadow stack are overwritten to prevent the shadow stack from overflowing a memory region allocated to the shadow stack (act550). In some embodiments, regardless of whether the shadow stack is overwritten, telemetry data associated with the thread is logged (act560). Such telemetry data includes (but is not limited to) (1) an identification of a process to which the thread belongs, (2) an identifier of the thread, and/or (3) an identifier of the application binary. In some embodiments, the application binary is a device driver. Examples of application binary identifiers include (but are not limited to) a file name, a hash of a particular portion of the binary, an embedded version metadata of the application binary, and/or a combination thereof. In some embodiments, the shadow stack comprises a maximum number of spaces for entries of return addresses based on a size of the memory region allocated to the shadow stack.
One or more entries of return addresses are entered in one or more spaces among the maximum number of sequential spaces until usage of the shadow stack has reached the defined usage threshold. FIG.2Dillustrates an example of a shadow stack220(e.g., which corresponds to the shadow stack204ofFIGS.2A-2C). A memory region is allocated to the shadow stack220, which allows the shadow stack220to store a maximum number M of entries of return addresses, where M is a natural number. As illustrated, a first space221is shown at the top of the shadow stack220, and a last space (i.e., a Mth space231) is shown at the bottom of the shadow stack220. As illustrated, the shadow stack220grows downward; that is, newly generated return addresses are pushed onto a lower side of the shadow stack220. For example, a first generated return address215ais stored at the first space221, a second generated return address215bis stored at the second space222, the (N−1)th return address215cis stored at the (N−1)th space223, and so on and so forth. In some embodiments, determining that usage of the shadow stack has reached the defined usage threshold comprises determining that a number of entries of return addresses in the shadow stack has reached (e.g., equal to or greater than) a predetermined limit (also referred to a “first predetermined limit”). For example, as illustrated, the first predetermined limit is P, where P is a natural number. When an entry of return address is stored in the Pth space227in the shadow stack220, e.g., when the Pth space is filled in with return address215gor215k, it is determined that usage of the shadow stack220has reached the defined usage threshold. In some embodiments, determining that usage of the shadow stack has reached the defined usage threshold comprises determining that a ratio between the number of entries of return addresses in the shadow stack and a maximum number M of spaces for entries in the shadow stack is greater than a predetermined limit (also referred to as a second predetermined limit). For example, in some embodiments, the second predetermined limit is 80%. If the maximum number M=512, P=80%×512≈409. As such, when an entry of the return address is stored in the 409thspace in the shadow stack220, it is determined that the usage of the shadow stack has reached the defined usage threshold. In some embodiments, determining that usage of the shadow stack has reached the defined usage threshold comprises determining that a number of available spaces among the maximum number M of spaces in the shadow stack220is lower than a predetermined limit (also referred to a “third predetermined limit”). For example, in some embodiments, the third predetermined limit=100. If the maximum number M=512, P=512−100=412. As such, when an entry of return addresses is stored in the 412thspace in the shadow stack220, it is determined that the usage of the shadow stack has reached the defined usage threshold. In some embodiments, an existing entry that is stored at a particular numbered space (e.g., N) is first to be overwritten, where N is a natural number, and N<P. For example, the (P+1)th return address215hwill be stored at an Nth space224where an existing entry215dhas been previously entered; the (P+2)th return address215iwill be stored at an (N+1)th space225where an existing entry215ehas been previously entered, and so on and so forth, until the Pth space is reached again. 
As illustrated, when the return address215koverwrites the return address215gin the Pth space, the usage of the shadow stack has reached the defined usage threshold again. In such a case, the circular stack232will circulate again, and one or more existing entries of return addresses in the circular stack232will be overwritten again by one or more new entries. For example, the return address215l(following the return address215k) will be stored in the Nth space again, overwriting the currently stored return address215hin the Nth space; the return address215m(following the return address215l) will be stored in the (N+1)th space again, overwriting the currently stored return address215iin the (N+1)th space. This process may repeat, and the return addresses stored in the spaces between (and including) the Nth space and the Pth space may be overwritten as many times as necessary, such that the spaces between (and including) the Nth space and the Pth space form a circular stack232. In some embodiments, an operating system of the computer system is configured to enable at least a portion of the shadow stack to be a circular stack. In some embodiments, a CPU is configured to allow privileged instructions to directly set a next shadow stack location of an active shadow stack, setting it to N. In some embodiments, a CPU may be configured with a shadow stack end address, and generate an exception when pushing a value to the shadow stack would write to the end address. In some embodiments, a CPU may be configured with a shadow stack end address and a shadow stack circular start address. In such an embodiment, rather than cause an exception at shadow stack overflow, the CPU may set a flag (e.g., a Boolean value) to indicate whether at least one entry of the shadow stack has been overwritten, while still causing an exception on a shadow stack mismatch. In some embodiments, the starting point N of the circular stack232and/or the ending point P of the circular stack232are predetermined numbers. In some embodiments, the starting point N of the circular stack232and/or the ending point P of the circular stack232are based on a ratio N/M between the starting point N and the maximum space M of the shadow stack220and/or a ratio P/M between the ending point P and the maximum space M of the shadow stack220. In some embodiments, the starting point N and/or the ending point P may be randomly selected within a range of numbers. As shown inFIG.2D, the starting point N of the circular stack232may be any point in the shadow stack. In some embodiments, a beginning portion (from the first to the (N−1)th entry) is preserved, i.e., not overwritten. It is advantageous to preserve a beginning portion of the shadow stack220because the beginning portion might provide more useful information about what has happened. In some embodiments, the third predetermined limit and/or the ending point P are determined based on a maximum number of entries of return addresses that can be simultaneously entered in the shadow stack when a particular event occurs. For example, when an intercept occurs, there may be up to 30 return addresses generated substantially simultaneously. Thus, to prevent shadow stack overflow from occurring, at least 30 spaces should be left after the ending point P of the circular stack232, i.e., M-P>30. In some embodiments, out of an abundance of caution, about 100 spaces are left after the ending point P of the circular stack232, i.e., M-P>100; as such, it is almost certain that the shadow stack would never overflow.
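A compact sketch of the circular behavior described in connection withFIG.2D follows. The threshold test, the wrap from the Pth space back to the Nth space, and the overwrite flag mirror the description above, but the structure, the names, and the 1-based indexing are illustrative assumptions rather than a definitive implementation.

    #include <stdint.h>
    #include <stdbool.h>

    struct circular_shadow_region {        /* hypothetical bookkeeping for the shadow stack of FIG. 2D */
        uintptr_t *slots;                  /* backing storage, M spaces */
        unsigned n_start;                  /* first space that may be overwritten (N) */
        unsigned p_end;                    /* defined usage threshold (P), e.g., chosen so that M - P > 100 */
        unsigned next;                     /* next 1-based space to fill, starting at 1 */
        bool overwritten;                  /* set once any entry has been overwritten */
    };

    static void shadow_push_circular(struct circular_shadow_region *r, uintptr_t ret_addr)
    {
        /* Other threshold tests described above (e.g., a fill ratio such as
         * 80% of M, or fewer than 100 free spaces) could replace this test. */
        if (r->next > r->p_end) {          /* usage has reached the defined usage threshold */
            r->next = r->n_start;          /* wrap back to the Nth space */
            r->overwritten = true;         /* entries between N and P are no longer trustworthy */
        }
        r->slots[r->next - 1] = ret_addr;  /* store in the current space (1-based index) */
        r->next++;
    }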
In some embodiments, statistical data may be gathered to determine the maximum number of return addresses that can be simultaneously entered in the shadow stack, and P is determined based on the gathered statistical data. As long as the shadow stack does not overflow, the computer system can continue to gather and log telemetry data associated with the thread. Returning toFIG.4again, in some embodiments, in response to overwriting one or more currently used entries of the shadow stack, one or more telemetries associated with the thread is logged (act426). In some embodiments, the one or more telemetries include (but are not limited to) (1) an identification of a process to which the thread belongs, (2) an identifier of the thread, and/or (3) an identifier of an application binary (act426). In some embodiments, the application binary is a device driver. Examples of application binary identifiers include (but are not limited to) a file name, a hash of a particular portion of the binary, an embedded version metadata of the application binary, and/or a combination thereof. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above, or the order of the acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. When introducing elements in the appended claims, the articles “a,” “an,” “the,” and “said” are intended to mean there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
64,150
11861365
DETAILED DESCRIPTION Overview Disclosed herein are implementations of macro-op fusion. In a first aspect, the subject matter described in this specification can be embodied in integrated circuit for executing instructions that include one or more execution resource circuits configured to execute micro-ops to support an instruction set including macro-ops; an instruction decode buffer configured to store macro-ops fetched from memory; and an instruction decoder circuit configured to: detect a sequence of macro-ops stored in the instruction decode buffer, the sequence of macro-ops including a control-flow macro-op followed by one or more additional macro-ops, determine a micro-op that is equivalent to the detected sequence of macro-ops, and forward the micro-op to at least one of the one or more execution resource circuits for execution. In a second aspect, the subject matter described in this specification can be embodied in methods that include fetching macro-ops from memory and storing the macro-ops in an instruction decode buffer; detecting a sequence of macro-ops stored in the instruction decode buffer, the sequence of macro-ops including a control-flow macro-op followed by one or more additional macro-ops; determining a micro-op that is equivalent to the detected sequence of macro-ops; and forwarding the micro-op to at least one execution resource circuit for execution. In a third aspect, the subject matter described in this specification can be embodied in integrated circuits for executing instructions that include one or more execution resource circuits configured to execute micro-ops to support an instruction set including macro-ops; an instruction decode buffer configured to store macro-ops fetched from memory; a fusion predictor circuit configured to: detect a prefix of a sequence of macro-ops in the instruction decode buffer, determine a prediction of whether the sequence of macro-ops will be completed in a next fetch of macro-ops from memory and fused, and, based on the prediction, delay execution of the prefix until after the next fetch to enable fusion of the sequence of macro-ops; and an instruction decoder circuit configured to: detect the sequence of macro-ops stored in the instruction decode buffer, determine a micro-op that is equivalent to the detected sequence of macro-ops, and forward the micro-op to at least one of the one or more execution resource circuits for execution. These and other aspects of the present disclosure are disclosed in the following detailed description, the appended claims, and the accompanying figures. Systems and methods for macro-op fusion are disclosed. An integrated circuit (e.g., a processor or microcontroller) may decode and execute macro-op instructions of an instruction set architecture (ISA) (e.g., a RISC V instruction set). A sequence of multiple macro-ops decoded by the integrated circuit may be fused (i.e., combined) into a single equivalent micro-op that is executed by the integrated circuit. In some implementations, a control-flow instruction may be fused with subsequent data-independent instructions to form an instruction that does not require a control-flow event in the pipeline. For example, a branch macro-op instruction may be replaced with a non-branch micro-op. Performance may be improved by effectively removing control-flow instructions through macro-op fusion. For example, performance degradation associated with branch prediction misses may be avoided. 
In some conventional processors, a conditional branch would be predicted, and if predicted as taken, would normally initiate a pipeline flush. If the taken prediction was wrong, the pipeline would be flushed again to restart on a sequential path. If the conditional branch was predicted not-taken, but was actually taken, the pipeline would also be flushed. Only if the conditional branch was predicted not-taken and the branch was actually not-taken is the pipeline flush avoided. TABLE 1 below shows the number of pipeline flushes that may be carried out by a conventional processor using branch prediction.

TABLE 1
Predicted    Actual    # Pipeline flushes
T            T         1
T            N         2
N            T         1
N            N         0

In some cases, where the branch may be difficult to predict, the branch can not only cause many pipeline flushes but can pollute the branch predictor, reducing performance for other predictable branches. For example, an unconditional jump with a short forward offset may be fused with one or more subsequent instructions. The unconditional jump plus the instructions that are skipped over may be fused into a single non-jump micro-op that has no effect on the machine except advancing the program counter by the jump offset. A benefit may include replacing the pipeline flush that would typically be required to execute a jump with a no-operation (NOP) instruction that just advances the program counter without a pipeline flush. In some implementations, more than one instruction may be skipped over. In some implementations, one or more target instructions may also be fused into a micro-op. For example, a conditional branch over one or more instructions may be fused. In some implementations, a conditional branch is fused with a following instruction such that the combination is executed as a single non-branch instruction. For example, the internal micro-op can either disable the write to the destination if the condition is false, or can be defined to always write the destination, which may simplify the operation of an out-of-order superscalar machine with register renaming. A fused micro-op may execute as a non-branch instruction, so that it avoids pipeline flushes, and in addition, avoids polluting branch predictor state. A sequence of macro-ops to be fused may include multiple instructions following a control-flow instruction. For example, a branch instruction and a single jump instruction may be fused. Unconditional jump instructions can target instructions much further away than conditional branches, but sometimes a conditional branch to a far away target is desired. This can be accomplished with a sequence of instructions, which may be fused internally by a processor. For example, a branch instruction and a function-call sequence may be fused. Function call instructions are not conditional, so they may be paired with a separate branch to make them conditional. For example, a branch instruction and a long jump sequence may be fused. Unconditional jump instructions also have limited reach. To branch arbitrarily far away, a 3-instruction sequence and a scratch register may be utilized, which can be fused into a single micro-op. For example, a branch instruction and a long function-call sequence may be fused. In some implementations, a dynamic fusion predictor may be used to facilitate macro-op fusion across instruction fetch boundaries in an instruction decode buffer.
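To illustrate the always-write versus disable-write choice mentioned above, the following sketch models, purely as an illustration of the semantics rather than of hardware, a fused micro-op for a conditional branch that skips a single add. The specific instruction pair (beq x1, x2, skip followed by add x3, x3, 4) and the register-file representation are assumptions.

    #include <stdint.h>

    static void fused_conditional_add(int64_t regs[32])
    {
        /* Branch condition: the add is skipped when x1 == x2. The fused
         * micro-op computes the result and conditionally selects it, so no
         * control-flow event (and no branch-predictor update) is needed. */
        int64_t result = regs[3] + 4;
        regs[3] = (regs[1] == regs[2]) ? regs[3] : result;
        /* The program counter simply advances past both macro-ops. */
    }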
As instructions are fetched into the instruction decode buffer, there may be situations where the prefix of a potentially fusible sequence is present in the fetch buffer but the processor will have to wait to fetch additional instructions from memory before knowing for certain whether there is a fusible sequence. In some situations it may be beneficial to send the existing buffered prefix instructions into execution, while in other situations it may be beneficial to wait for the remaining instructions in the fusible sequence to be fetched and then fused with the buffered instructions. In general, there could be a performance or power advantage to either eagerly executing the prefix or waiting for the trailing instructions. A fixed policy may result in suboptimal performance. For example, a dynamic “beneficial fusion” predictor may be utilized to inform the processor whether to delay executing the current instruction, or instructions, in the fetch buffer and to wait until additional instructions are fetched. In some implementations, the fusion predictor is only consulted and updated if one or more of the buffered instructions in the potential fusion sequence could have been sent into execution (i.e., execution resources were available), otherwise, the predictor is neither consulted nor updated. For example, the fusion predictor entries can be indexed and/or tagged using one of many forms, such as, indexed by a program counter; indexed by hash of a current program counter and a program counter history; tagged, where each entry is tagged with a program counter; or tagless, where each entry is used without considering the program counter. For example, a program counter used to index the fusion predictor can be that used to fetch the last group of instructions, or the program counter of the potential fusion prefix, or the program counter of the next group to be fetched. For example, the entries in the fusion predictor might contain K-bit counters (K>=1) to provide hysteresis. The system may execute instruction sequences correctly regardless of the prediction made by the beneficial fusion predictor, and so a misprediction recovery mechanism may be omitted from the system. A beneficial fusion predictor may be updated based on a performance model that inspects the instructions that are fetched after the potential fusion sequence to determine if waiting for these additional instructions would be beneficial. The performance model may includes a number of potential components, such as: 1) Can the newly fetched instruction fuse with the buffered instructions? 2) Would fusion prevent parallel issue of instructions that follow the fusible sequence in the new fetch group? 3) Are there instructions in the new fetch group that depend on instructions in the buffered fusion prefix such that stalls are created that would have been obviated by eagerly executing the prefix instructions? As used herein, the term “circuit” refers to an arrangement of electronic components (e.g., transistors, resistors, capacitors, and/or inductors) that is structured to implement one or more functions. For example, a circuit may include one or more transistors interconnected to form logic gates that collectively implement a logical function. The term “macro-op” is used to describe an instruction held in a format described by the processor's instruction set architecture (ISA). 
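Returning to the dynamic beneficial-fusion predictor described above, one possible shape is a small tagless table of 2-bit saturating counters (K=2) indexed by a hash of the program counter, as in the following sketch. The table size, hash function, and update policy are illustrative assumptions, not a prescribed design.

    #include <stdint.h>
    #include <stdbool.h>

    #define FUSION_PRED_ENTRIES 256                     /* illustrative table size */

    static uint8_t fusion_counters[FUSION_PRED_ENTRIES]; /* 2-bit saturating counters */

    static unsigned fusion_index(uint64_t pc)
    {
        return (unsigned)((pc >> 2) ^ (pc >> 10)) % FUSION_PRED_ENTRIES;  /* simple PC hash */
    }

    /* Predict whether to delay the buffered prefix and wait for the rest of a
     * potentially fusible sequence to be fetched. */
    static bool predict_wait_for_fusion(uint64_t pc)
    {
        return fusion_counters[fusion_index(pc)] >= 2;   /* weakly or strongly "wait" */
    }

    /* Update from the performance model: was waiting (i.e., fusing) beneficial? */
    static void update_fusion_predictor(uint64_t pc, bool fusion_was_beneficial)
    {
        uint8_t *c = &fusion_counters[fusion_index(pc)];
        if (fusion_was_beneficial) {
            if (*c < 3) (*c)++;
        } else {
            if (*c > 0) (*c)--;
        }
    }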
Macro-ops are the instruction format in which software is encoded for a machine and all processors implementing the same ISA use the same encoding for macro-ops. The term “micro-op” is used to describe an internal processor-specific encoding of the operations used to control execution resources, and can vary widely between different implementations of the same ISA. In various circumstances, the correspondence between macro-ops and micro-ops used by a processor to implement supported macro-ops may be one-to-one, one-to-many, or many-to-one. For example, a single macro-op can be cracked into one or more internal micro-ops, and multiple macro-ops can also be fused into a single internal micro-op.

Details

FIG. 1 is a block diagram of an example of a system100for executing instructions from an instruction set with macro-op fusion. The system100includes a memory102storing instructions and an integrated circuit110configured to execute the instructions. For example, the integrated circuit may be a processor or a microcontroller. The integrated circuit110includes an instruction fetch circuit112; a program counter register114; an instruction decode buffer120configured to store macro-ops122that have been fetched from the memory102; an instruction decoder circuit130configured to decode macro-ops from the instruction decode buffer120to generate corresponding micro-ops132that are passed to one or more execution resource circuits (140,142,144, and146) for execution. For example, the integrated circuit110may be configured to implement the process400ofFIG.4. The correspondence between macro-ops122and micro-ops is not always one-to-one. The instruction decoder circuit130is configured to fuse certain sequences of macro-ops122detected in the instruction decode buffer120, determining a single equivalent micro-op132for execution using the one or more execution resource circuits (140,142,144, and146). The instruction fetch circuit112is configured to fetch macro-ops from the memory102and store them in the instruction decode buffer120while the macro-ops122are processed by a pipelined architecture of the integrated circuit110. The program counter register114may be configured to store a pointer to a next macro-op in memory. A program counter value stored in the program counter register114may be updated based on the progress of execution by the integrated circuit110. For example, when an instruction is executed the program counter may be updated to point to a next instruction to be executed. For example, the program counter may be updated by a control-flow instruction to one of multiple possible values based on a result of testing a condition. For example, the program counter may be updated to a target address. The integrated circuit110includes an instruction decode buffer120configured to store macro-ops fetched from memory102. For example, the instruction decode buffer120may have a depth (e.g., 4, 8, 12, 16, or 24 instructions) that facilitates a pipelined and/or superscalar architecture of the integrated circuit110. The macro-ops may be members of an instruction set (e.g., a RISC V instruction set, an x86 instruction set, an ARM instruction set, or a MIPS instruction set) supported by the integrated circuit110. The integrated circuit110includes one or more execution resource circuits (140,142,144, and146) configured to execute micro-ops to support an instruction set including macro-ops. For example, the instruction set may be a RISC V instruction set.
For example, the one or more execution resource circuits (140,142,144, and146) may include an adder, a shift register, a multiplier, and/or a floating point unit. The one or more execution resource circuits (140,142,144, and146) may update the state of the integrated circuit110, including internal registers and/or flags or status bits (not explicitly shown inFIG.1) based on results of executing a micro-op. Results of execution of a micro-op may also be written to the memory102(e.g., during subsequent stages of a pipelined execution). The integrated circuit110includes an instruction decoder circuit130configured to decode the macro-ops122in the instruction decode buffer120. The instruction decoder circuit130may convert the macro-ops into corresponding micro-ops132that are internally executed by the integrated circuit using the one or more execution resource circuits (140,142,144, and146). The instruction decoder circuit130is configured to implement macro-op fusion, where multiple macro-ops are converted to a single micro-op for execution. For example, the instruction decoder circuit130may be configured to detect a sequence of macro-ops stored in the instruction decode buffer120. For example, detecting the sequence of macro-ops may include detecting a sequence of opcodes as portions of the respective macro-ops. The sequence of macro-ops may include a control-flow macro-op (e.g., a branch instruction or a call instruction) followed by one or more additional macro-ops. The instruction decoder circuit130may determine a micro-op that is equivalent to the detected sequence of macro-ops. The instruction decoder circuit130may forward the micro-op to at least one of the one or more execution resource circuits (140,142,144, and146) for execution. In some implementations, the control-flow macro-op is a branch instruction and the micro-op is not a branch instruction. For example, the sequence of macro-ops may include an unconditional jump and one or more macro-ops that will be skipped, and the micro-op may be a NOP that advances the program counter to a target of the unconditional jump. For example, an unconditional jump with a short forward offset may be fused with one or more subsequent instructions. The unconditional jump plus the instructions that are skipped over may be fused into a single non-jump micro-op that has no effect on the machine except advancing the program counter by the jump offset. A benefit may include replacing the pipeline flush that would typically be required to execute a jump with a no-operation (NOP) instruction that just advances the program counter without a pipeline flush. For example, the sequence of macro-ops:
    j target
    add x3, x3, 4
    target: <next instruction>
may be replaced with the fused micro-op:
    nop_pc+8    # Advance program counter over skipped instruction
In some implementations, more than one instruction may be skipped over. In some implementations, one or more target instructions may also be fused into a micro-op:
    <next instruction>_pc+12
In some implementations, the sequence of macro-ops includes an unconditional jump, one or more macro-ops that will be skipped, and a macro-op at a target of the unconditional jump; and the micro-op performs a function of the macro-op at the target of the unconditional jump and advances the program counter to point to a next macro-op after the target of the unconditional jump. For example, a conditional branch over one or more instructions may be fused.
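The jump-over fusion just described can be sketched as a simple decoder check. The following C++ fragment is purely illustrative (the MacroOp and MicroOp records, field names, and fusion condition are invented, not the patent's encodings); it recognizes a short forward unconditional jump whose skipped instructions are already buffered and emits a single micro-op that only advances the program counter.

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // Illustrative macro-op and micro-op records (not the patent's encodings).
    struct MacroOp {
        enum class Kind { Jump, Other } kind = Kind::Other;
        int32_t jumpOffsetBytes = 0;  // forward offset for Kind::Jump
        uint8_t sizeBytes = 4;
    };

    struct MicroOp {
        bool isControlFlow = false;
        int32_t pcAdvanceBytes = 0;   // how far the fused micro-op moves the PC
    };

    // If the buffered macro-ops start with a short forward jump and every skipped
    // instruction is already buffered, fuse the group into one non-jump micro-op
    // that just advances the PC to the jump target (e.g., nop_pc+8).
    std::optional<MicroOp> tryFuseJumpOver(const std::vector<MacroOp>& buffer) {
        if (buffer.empty() || buffer.front().kind != MacroOp::Kind::Jump) return std::nullopt;
        const int32_t offset = buffer.front().jumpOffsetBytes;   // distance to the target
        if (offset <= 0) return std::nullopt;                    // only short forward jumps qualify

        // Count buffered bytes covering the jump and the instructions it skips over.
        int32_t buffered = buffer.front().sizeBytes;
        for (std::size_t i = 1; i < buffer.size() && buffered < offset; ++i) {
            buffered += buffer[i].sizeBytes;
        }
        if (buffered < offset) return std::nullopt;  // skipped instructions not all fetched yet

        // Single non-jump micro-op: no architectural effect except PC += offset.
        return MicroOp{/*isControlFlow=*/false, /*pcAdvanceBytes=*/offset};
    }

In terms of the example above, a buffer holding j target and add x3, x3, 4 with an 8-byte offset would yield a micro-op equivalent to nop_pc+8.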
In some implementations, the sequence of macro-ops includes a conditional branch and one or more macro-ops after the conditional branch and before a target of the conditional branch, and the micro-op advances the program counter to the target of the conditional branch. In some implementations, a conditional branch is fused with a following instruction such that the combination is executed as a single non-branch instruction. For example, the sequence of macro-ops:
    bne x1, x0, target
    addi x3, x3, 1
    target: <unrelated instruction>
may be replaced with the fused micro-op:
    ifeqz_addi x3, x3, x1, 1    # If x1==0, x3=x3+1, else x3=x3; PC+=8
For example, the internal micro-op can either disable the write to the destination if the condition is false, or can be defined to always write the destination, which may simplify the operation of an out-of-order superscalar machine with register renaming. A fused micro-op may execute as a non-branch instruction, so that it avoids pipeline flushes, and in addition, avoids polluting branch predictor state. For example, the sequence of macro-ops:
    bne x2, x3, target
    sub x5, x7, x8
    target: <unrelated instruction>
may be replaced with the fused micro-op:
    ifeq_sub x5, x7, x8, x2, x3    # If x2==x3, x5=x7-x8, else x5=x5; PC+=8
In some implementations, the sequence of macro-ops includes a conditional branch and one or more macro-ops after the conditional branch and before a target of the conditional branch, and a macro-op at the target of the conditional branch; and the micro-op advances the program counter to point to a next macro-op after the target of the conditional branch. A sequence of macro-ops to be fused may include multiple instructions following a control-flow instruction. For example, the sequence of macro-ops:
    bne x2, x0, target
    slli x1, x1, 2
    ori x1, x1, 1
    target: <unrelated instruction>
may be replaced with the fused micro-op:
    ifeqz_sllori x1, x2, 2, 1    # If x2==0, then x1=(x1<<2)|1, else x1=x1; PC+=12
For example, a branch instruction and a single jump instruction may be fused. Unconditional jump instructions can target instructions much further away than conditional branches, but sometimes a conditional branch to a far away target is desired. This can be accomplished with a sequence of instructions, which may be fused internally by a processor. In some implementations, the sequence of macro-ops includes a conditional branch followed by an unconditional jump. For example, the sequence of macro-ops:
    beq x8, x9, skip
    j target
    skip: <unrelated instruction>
    ...
    target: <unrelated instruction>
may be replaced with the fused micro-op:
    jne x8, x9, target    # If x8 != x9, then PC=target, else PC+=8
For example, a branch instruction and a function-call sequence may be fused. Function call instructions are not conditional, so they may be paired with a separate branch to make them conditional. In some implementations, the sequence of macro-ops includes a conditional branch followed by a jump and link. For example, the sequence of macro-ops:
    c.bnez x8, skip    # 2-byte compressed branch
    jal x1, subroutine
    skip: <unrelated instruction>
    ...
    subroutine: <unrelated instruction>
may be replaced with the fused micro-op:
    jalez x1, x8, subroutine    # If x8==0, then x1=PC+6, PC=subroutine, else x1=x1, PC=PC+6
For example, a branch instruction and a long jump sequence may be fused. Unconditional jump instructions also have limited reach. To branch arbitrarily far away, a 3-instruction sequence and a scratch register may be utilized, which can be fused into a single micro-op.
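One way to read the fused micro-ops above is as predicated ALU operations. The following C++ sketch is hypothetical (invented names, not the patent's micro-architecture) and models the ifeqz_addi example, always producing a destination value so that a renamed, out-of-order machine can treat it as an ordinary non-branch instruction.

    #include <array>
    #include <cassert>
    #include <cstdint>

    using Regs = std::array<int64_t, 32>;  // illustrative architectural register file

    // Models the fused micro-op "ifeqz_addi rd, rs1, rcond, imm":
    // if regs[rcond] == 0 then rd = rs1 + imm, else rd keeps its old value.
    // The destination is always written (possibly with its old value), which is
    // one of the two options described above for simplifying register renaming.
    void ifeqz_addi(Regs& regs, int rd, int rs1, int rcond, int64_t imm) {
        const int64_t newValue = regs[rs1] + imm;
        regs[rd] = (regs[rcond] == 0) ? newValue : regs[rd];
    }

    int main() {
        Regs regs{};
        regs[1] = 0;   // x1 == 0, so the addi should take effect
        regs[3] = 41;  // x3
        ifeqz_addi(regs, /*rd=*/3, /*rs1=*/3, /*rcond=*/1, /*imm=*/1);
        assert(regs[3] == 42);

        regs[1] = 7;   // condition now false; x3 must be left unchanged
        ifeqz_addi(regs, 3, 3, 1, 1);
        assert(regs[3] == 42);
        return 0;
    }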
In some implementations, the sequence of macro-ops includes a conditional branch followed by a pair of macro-ops implementing a long jump and link. For example, the sequence of macro-ops:
    c.beqz x8, skip
    1: auipc x6, %pcrel_hi(target)
    jalr x0, %pcrel_lo(1b)(x6)
    skip: <unrelated instruction>
    ...
    target: <unrelated instruction>
may be replaced with the fused micro-op:
    jnez_far x6, x8, target_hi, target    # If x8 != 0, then x6=target_hi, PC=target, else x6=x6, PC=PC+10
For example, a branch instruction and a long function-call sequence may be fused. In some implementations, the sequence of macro-ops includes a conditional branch followed by a pair of macro-ops implementing a long unconditional jump. For example, the sequence of macro-ops:
    blt x8, x0, skip
    1: auipc x1, %pcrel_hi(subroutine)
    jalr x1, %pcrel_lo(1b)(x1)
    skip: <unrelated instruction>
    ...
    subroutine: <unrelated instruction>
may be replaced with the fused micro-op:
    jalgez_far x1, x8, subroutine    # If x8 >= 0, then x1=PC+12, PC=subroutine, else x1=x1, PC=PC+12
The instruction decoder circuit130may be configured to detect and fuse multiple different sequences of macro-ops that include a control-flow instruction, such as the sequences of macro-ops described above. In some implementations (not shown inFIG.1), the memory102may be included in the integrated circuit110. FIG. 2 is a block diagram of an example of a system200for executing instructions from an instruction set with macro-op fusion with fusion prediction. The system200is similar to the system100ofFIG.1, with the addition of fusion predictor circuit210configured to facilitate detection and beneficial fusion of candidate sequences of macro-ops. For example, the system200may be used to implement the process400ofFIG.4. For example, the system200may be used to implement the process500ofFIG.5. For example, the fusion predictor circuit210may include the fusion predictor circuit310ofFIG.3. The system200includes a fusion predictor circuit210configured to detect a prefix of a sequence of macro-ops in the instruction decode buffer. For example, where the instruction decoder circuit130is configured to detect a sequence of macro-op instructions consisting of instructions 1 through N (e.g., N=2, 3, 4, or 5) when it occurs in the instruction decode buffer120, the fusion predictor circuit210may be configured to detect prefixes including the one or more macro-op instructions 1 through m, where 1<=m<N, when they occur in the instruction decode buffer120. The fusion predictor circuit210is configured to determine a prediction of whether the sequence of macro-ops will be completed in a next fetch of macro-ops from memory and fused. For example, the prediction may be determined using a table of prediction counters that is maintained by the fusion predictor circuit210. The prediction counters may serve as estimates of a likelihood that a prefix will be part of a sequence of macro-ops that is completed and fused. For example, the prediction counters may be K bit counters with K>1 (e.g., K=2) to provide some hysteresis. In some implementations, the table of prediction counters is indexed by a program counter stored in the program counter register114. In some implementations, the table of prediction counters is tagged with program counter values. Maintaining the table of prediction counters may include updating a prediction counter after a corresponding prefix is detected and the next set of instructions is fetched from memory.
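Referring back to the jnez_far example above, the following hypothetical C++ sketch (invented types and names, illustrative only) models the architectural effect of the three-instruction far-branch sequence collapsed into one micro-op: a conditional far jump that also deposits the upper-immediate value in the scratch register, versus falling through past the 10 bytes of fused macro-ops.

    #include <cassert>
    #include <cstdint>

    // Minimal architectural state touched by the example (illustrative only).
    struct CpuState {
        uint64_t pc = 0;
        int64_t  x6 = 0;  // scratch register written by auipc in the original sequence
        int64_t  x8 = 0;  // register tested by the original conditional branch
    };

    // Models the fused micro-op "jnez_far x6, x8, target_hi, target":
    // the effect of c.beqz x8, skip; auipc x6, %pcrel_hi(target);
    // jalr x0, %pcrel_lo(target)(x6) collapsed into a single operation.
    void jnez_far(CpuState& s, int64_t target_hi, uint64_t target) {
        if (s.x8 != 0) {
            s.x6 = target_hi;  // value the auipc would have produced
            s.pc = target;     // far branch taken
        } else {
            s.pc += 10;        // fall through past the 2-byte branch and two 4-byte macro-ops
        }
    }

    int main() {
        CpuState taken{0x1000, 0, 5};
        jnez_far(taken, /*target_hi=*/0x2000, /*target=*/0x2004);
        assert(taken.pc == 0x2004 && taken.x6 == 0x2000);

        CpuState fallThrough{0x1000, 7, 0};
        jnez_far(fallThrough, 0x2000, 0x2004);
        assert(fallThrough.pc == 0x100a && fallThrough.x6 == 7);
        return 0;
    }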
For example, the fusion predictor circuit210may be configured to update the table of prediction counters based on whether the sequence of macro-ops is completed by the next fetch of macro-ops from memory. For example, the fusion predictor circuit210may be configured to update the table of prediction counters based on whether there are instructions in the next fetch that depend on instructions in the prefix. For example, the fusion predictor circuit210may be configured to update the table of prediction counters based on whether fusion would prevent parallel issue of instructions that follow the fusible sequence in the next fetch group. The fusion predictor circuit210is configured to, based on the prediction, delay execution of the prefix until after the next fetch to enable fusion of the sequence of macro-ops, or commence execution of the prefix before the next fetch and forego any possible fusion of a sequence including the prefix. In some implementations (not shown inFIG.2), the fusion predictor circuit210is implemented as part of the instruction decoder circuit130. FIG.3is block diagram of an example of a system300for fusion prediction. The system300includes an instruction decode buffer120and a fusion predictor circuit310. The fusion predictor circuit310may be configured to examine macro-op instructions in the instruction decode buffer120to determine a prediction332of whether the sequence of macro-ops including a detected prefix will be completed in a next fetch of macro-ops from memory and fused. The fusion predictor circuit310includes a prefix detector circuit320, a prediction determination circuit330, a table of prediction counters340, and a prediction update circuit350. The fusion predictor circuit310may also be configured to examine macro-op instructions in the instruction decode buffer120to maintain a table of prediction counters340. For example, the system300may be used as part of a larger system (e.g., the system200ofFIG.2) to implement the process500ofFIG.5. The fusion predictor circuit310includes a prefix detector circuit320that is configured to detect a prefix of a sequence of macro-ops in the instruction decode buffer120. For example, where an instruction decoder (e.g., the instruction decoder circuit130) is configured to detect a sequence of macro-op instructions consisting of instructions 1 through N (e.g., N=2, 3, 4, or 5) when it occurs in the instruction decode buffer120, the prefix detector circuit320may be configured to detect prefixes including the one or more macro-op instructions 1 through m, where 1<=m<N, when they occur in the instruction decode buffer120. For example, the prefix detector circuit320may include a network of logic gates configured to set a flag when a sequence of m opcodes corresponding a prefix is read in the last m macro-ops stored in the instruction buffer. The fusion predictor circuit310includes a prediction determination circuit330that is configured to determine a prediction332of whether a sequence of macro-ops will be completed in a next fetch of macro-ops from memory and fused. For example, the prediction332may include a binary value indicating whether a fusion with the detected prefix is expected to occur after the next fetch of macro-ops. For example, the prediction332may include an identifier of the prefix that has been detected. The prediction332may be determined by looking up a corresponding prediction counter in the table of prediction counters340, and determining the prediction based on the value of the prediction counter. 
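As a way to visualize the prefix detector just described, here is a hedged C++ sketch (not from the patent; the opcode encoding and sequence table are invented) that checks whether the newest macro-ops in the decode buffer match the first m opcodes of a known fusible sequence of length N, for 1 <= m < N.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative opcode type; a real decoder would match on instruction fields.
    using Opcode = uint32_t;

    // A fusible sequence of N opcodes (e.g., conditional branch + jump-and-link).
    struct FusibleSequence {
        std::vector<Opcode> opcodes;  // opcodes 1..N in program order
    };

    // Returns the length m (1 <= m < N) of the longest prefix of 'seq' that matches
    // the newest instructions in the decode buffer, or 0 if no prefix is present.
    std::size_t detectPrefix(const std::vector<Opcode>& decodeBuffer,
                             const FusibleSequence& seq) {
        const std::size_t n = seq.opcodes.size();
        if (n < 2) return 0;  // a proper prefix needs at least a two-instruction sequence
        for (std::size_t m = n - 1; m >= 1; --m) {
            if (decodeBuffer.size() < m) continue;
            bool match = true;
            for (std::size_t i = 0; i < m; ++i) {
                // Compare the last m buffered opcodes against opcodes 1..m of the sequence.
                if (decodeBuffer[decodeBuffer.size() - m + i] != seq.opcodes[i]) {
                    match = false;
                    break;
                }
            }
            if (match) return m;
        }
        return 0;  // no fusible prefix at the end of the buffer
    }

A flag from such a detector is what the prediction determination circuit would consult, alongside the counter table, when deciding whether to hold the prefix in the decode stage.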
The prediction counters may serve as estimates of a likelihood that a prefix will be part of a sequence of macro-ops that is completed and fused. For example, the prediction counters stored in the table of prediction counters340may be K bit counters with K>1 (e.g., K=2) to provide some hysteresis. For example, a prediction332may be determined as true if a corresponding prediction counter has a current value >= 2^(K-1) (e.g., the most significant bit of the counter is a one), and determined as false otherwise. For example, the prediction determination circuit330may determine a binary portion of a prediction as the most significant bit of a corresponding K-bit prediction counter of the table of prediction counters340. In some implementations, the table of prediction counters340is indexed by a program counter. In some implementations, the table of prediction counters340is indexed by a hash of a program counter and program counter history. In some implementations, the table of prediction counters340is tagged with program counter values. For example, a program counter used to index the table of prediction counters340can be that used to fetch the last group of instructions, or the program counter of the potential fusion prefix, or the program counter of the next group to be fetched. In some implementations, the table of prediction counters340is tagless where the entries are used without considering a program counter. In some implementations, where multiple sequences of macro-ops and/or prefixes are sought for potential fusion, the table of prediction counters340may be tagged or indexed by an identifier of the detected prefix (e.g., a concatenation of one or more opcodes for the prefix or an index value associated with the prefix). The fusion predictor circuit310includes a prediction update circuit350, which may be configured to maintain the table of prediction counters340. For example, the prediction update circuit350may be configured to update the table of prediction counters based on whether the sequence of macro-ops is completed by the next fetch of macro-ops from memory. For example, the prediction update circuit350may be configured to update the table of prediction counters based on whether there are instructions in the next fetch that depend on instructions in the prefix. For example, the prediction update circuit350may be configured to update the table of prediction counters based on whether fusion would prevent parallel issue of instructions that follow the fusible sequence in the next fetch group. In some implementations, the table of prediction counters340is only consulted and updated if one or more of the buffered macro-ops of the prefix of the potential fusion sequence could have been sent into execution (i.e., execution resources were available), otherwise, the table of prediction counters340is neither consulted nor updated. The fusion predictor circuit310may, based on the prediction, delay execution of the prefix until after the next fetch to enable fusion of the sequence of macro-ops. For example, the delaying execution may include holding the one or more macro-ops of the prefix in a decode stage of a pipeline for multiple clock cycles. For example, the system300may be part of a larger system, such as an integrated circuit (e.g., a processor or a microcontroller) for executing instructions. The instruction decode buffer120may be configured to store macro-ops fetched from memory.
The integrated circuit may also include one or more execution resource circuits configured to execute micro-ops to support an instruction set (e.g., a RISC V instruction set, an x86 instruction set, an ARM instruction set, or a MIPS instruction set) including macro-ops. The integrated circuit may also include an instruction decoder circuit configured to detect the sequence of macro-ops stored in the instruction decode buffer, determine a micro-op that is equivalent to the detected sequence of macro-ops, and forward the micro-op to at least one of the one or more execution resource circuits for execution. FIG.4is flow chart of an example of a process400for executing instructions from an instruction set with macro-op fusion. The process400includes fetching410macro-ops from memory; detecting420a sequence of macro-ops stored in the instruction decode buffer, the sequence of macro-ops including a control-flow macro-op followed by one or more additional macro-ops; determining430a micro-op that is equivalent to the detected sequence of macro-ops; and forwarding440the micro-op to at least one execution resource circuit for execution. For example, the process400may be implemented using the system100ofFIG.1. For example, the process400may be implemented using the system200ofFIG.2. The process400includes fetching410macro-ops from memory and storing the macro-ops in an instruction decode buffer (e.g., the instruction decode buffer120). The instruction decode buffer may be configured to store macro-ops fetched from memory while the macro-ops are processed by a pipelined architecture of an integrated circuit (e.g. a processor or microcontroller). For example, the instruction decode buffer may have a depth (e.g., 4, 8, 12, 16, or 24 instructions) that facilitates a pipelined and/or superscalar architecture of the integrated circuit. The macro-ops may be members of an instruction set (e.g., a RISC V instruction set, an x86 instruction set, an ARM instruction set, or a MIPS instruction set) supported by the integrated circuit. The process400includes detecting420a sequence of macro-ops stored in the instruction decode buffer, the sequence of macro-ops including a control-flow macro-op followed by one or more additional macro-ops. For example, detecting420the sequence of macro-ops may include detecting a sequence of opcodes as portions of the respective macro-ops. The sequence of macro-ops may include a control-flow macro-op (e.g., a branch instruction or a procedure call instruction) and one or more additional macro-ops. In some implementations, detecting420the sequence of macro-ops in time to facilitate macro-op fusion is enabled by using a fusion predictor (e.g., the fusion predictor circuit310ofFIG.3) to first detect a prefix of the sequence and delay execution of the prefix until the remainder of the sequence on macro-ops is fetched410from memory. For example, the process500ofFIG.5may be implemented to facilitate detection and fusing of the sequence of macro-ops. The process400includes determining430a micro-op that is equivalent to the detected sequence of macro-ops. For example, the control-flow instruction may effectively be removed from the program where the micro-op that is determined430does not include a control-flow aspect. In some implementations, the control-flow macro-op is a branch instruction and the micro-op is not a branch instruction. Removing branches or other control-flow instructions may improve performance of an integrated circuit executing a program including the macro-ops. 
For example, performance may be improved by avoiding pipeline flushes associated with control-flow instructions and/or avoiding polluting branch predictor state. For example, the sequence of macro-ops may include an unconditional jump and one or more macro-ops that will be skipped, and the micro-op may be a NOP that advances the program counter to a target of the unconditional jump. For example, an unconditional jump with a short forward offset may be fused with one or more subsequent instructions. The unconditional jump plus the instructions that are skipped over may be fused into a single non-jump micro-op that has no effect on the machine except advancing the program counter by the jump offset. A benefit may include replacing the pipeline flush that would typically be required to execute a jump with a no-operation (NOP) instruction that just advances the program counter without a pipeline flush. For example, for the sequence of macro-ops:
    j target
    add x3, x3, 4
    target: <next instruction>
the micro-op may be determined430as:
    nop_pc+8    # Advance program counter over skipped instruction
In some implementations, more than one instruction may be skipped over. In some implementations, one or more target instructions may also be fused into a micro-op. For example, the micro-op may be determined430as:
    <next instruction>_pc+12
In some implementations, the sequence of macro-ops includes an unconditional jump, one or more macro-ops that will be skipped, and a macro-op at a target of the unconditional jump; and the micro-op performs a function of the macro-op at the target of the unconditional jump and advances the program counter to point to a next macro-op after the target of the unconditional jump. For example, a conditional branch over one or more instructions may be fused. In some implementations, the sequence of macro-ops includes a conditional branch and one or more macro-ops after the conditional branch and before a target of the conditional branch, and the micro-op advances the program counter to the target of the conditional branch. In some implementations, a conditional branch is fused with a following instruction such that the combination is executed as a single non-branch instruction. For example, for the sequence of macro-ops:
    bne x1, x0, target
    addi x3, x3, 1
    target: <unrelated instruction>
the micro-op may be determined430as:
    ifeqz_addi x3, x3, x1, 1    # If x1==0, x3=x3+1, else x3=x3; PC+=8
For example, the internal micro-op can either disable the write to the destination if the condition is false, or can be defined to always write the destination, which may simplify the operation of an out-of-order superscalar machine with register renaming. A fused micro-op may execute as a non-branch instruction, so that it avoids pipeline flushes, and in addition, avoids polluting branch predictor state. For example, for the sequence of macro-ops:
    bne x2, x3, target
    sub x5, x7, x8
    target: <unrelated instruction>
the micro-op may be determined430as:
    ifeq_sub x5, x7, x8, x2, x3    # If x2==x3, x5=x7-x8, else x5=x5; PC+=8
In some implementations, the sequence of macro-ops includes a conditional branch and one or more macro-ops after the conditional branch and before a target of the conditional branch, and a macro-op at the target of the conditional branch; and the micro-op advances the program counter to point to a next macro-op after the target of the conditional branch. A sequence of macro-ops to be fused may include multiple instructions following a control-flow instruction.
For example, for the sequence of macro-ops:
    bne x2, x0, target
    slli x1, x1, 2
    ori x1, x1, 1
    target: <unrelated instruction>
the micro-op may be determined430as:
    ifeqz_sllori x1, x2, 2, 1    # If x2==0, then x1=(x1<<2)|1, else x1=x1; PC+=12
For example, a branch instruction and a single jump instruction may be fused. Unconditional jump instructions can target instructions much further away than conditional branches, but sometimes a conditional branch to a far away target is desired. This can be accomplished with a sequence of instructions, which may be fused internally by a processor. In some implementations, the sequence of macro-ops includes a conditional branch followed by an unconditional jump. For example, for the sequence of macro-ops:
    beq x8, x9, skip
    j target
    skip: <unrelated instruction>
    ...
    target: <unrelated instruction>
the micro-op may be determined430as:
    jne x8, x9, target    # If x8 != x9, then PC=target, else PC+=8
For example, a branch instruction and a function-call sequence may be fused. Function call instructions are not conditional, so they may be paired with a separate branch to make them conditional. In some implementations, the sequence of macro-ops includes a conditional branch followed by a jump and link. For example, for the sequence of macro-ops:
    c.bnez x8, skip    # 2-byte compressed branch
    jal x1, subroutine
    skip: <unrelated instruction>
    ...
    subroutine: <unrelated instruction>
the micro-op may be determined430as:
    jalez x1, x8, subroutine    # If x8==0, then x1=PC+6, PC=subroutine, else x1=x1, PC=PC+6
For example, a branch instruction and a long jump sequence may be fused. Unconditional jump instructions also have limited reach. To branch arbitrarily far away, a 3-instruction sequence and a scratch register may be utilized, which can be fused into a single micro-op. In some implementations, the sequence of macro-ops includes a conditional branch followed by a pair of macro-ops implementing a long jump and link. For example, for the sequence of macro-ops:
    c.beqz x8, skip
    1: auipc x6, %pcrel_hi(target)
    jalr x0, %pcrel_lo(1b)(x6)
    skip: <unrelated instruction>
    ...
    target: <unrelated instruction>
the micro-op may be determined430as:
    jnez_far x6, x8, target_hi, target    # If x8 != 0, then x6=target_hi, PC=target, else x6=x6, PC=PC+10
For example, a branch instruction and a long function-call sequence may be fused. In some implementations, the sequence of macro-ops includes a conditional branch followed by a pair of macro-ops implementing a long unconditional jump. For example, for the sequence of macro-ops:
    blt x8, x0, skip
    1: auipc x1, %pcrel_hi(subroutine)
    jalr x1, %pcrel_lo(1b)(x1)
    skip: <unrelated instruction>
    ...
    subroutine: <unrelated instruction>
the micro-op may be determined430as:
    jalgez_far x1, x8, subroutine    # If x8 >= 0, then x1=PC+12, PC=subroutine, else x1=x1, PC=PC+12
The process400includes forwarding440the micro-op to at least one execution resource circuit for execution. The at least one execution resource circuit (e.g.,140,142,144, and/or146ofFIG.1) may be configured to execute micro-ops to support an instruction set including macro-ops. For example, the instruction set may be a RISC V instruction set. For example, the at least one execution resource circuit may include an adder, a shift register, a multiplier, and/or a floating point unit.
The at least one execution resource circuit may update the state of an integrated circuit (e.g., a processor or microcontroller) that is implementing the process400, including internal registers and/or flags or status bits based on results of executing a micro-op. Results of execution of a micro-op may also be written to the memory (e.g., during subsequent stages of a pipelined execution). FIG. 5 is a flow chart of an example of a process500for predicting beneficial macro-op fusion. The process500includes detecting510a prefix of the sequence of macro-ops; determining520a prediction of whether the sequence of macro-ops will be completed in a next fetch of macro-ops from memory and fused; when no fusion is predicted, commencing530execution of the prefix prior to fetching532a next batch of one or more macro-ops; when fusion is predicted, delaying540execution of the prefix until after fetching542a next batch of one or more macro-ops; if the complete sequence of macro-ops is detected545, fusing548the sequence of macro-ops including the prefix; and updating550a table of prediction counters. For example, the process500may be implemented using the fusion predictor circuit210ofFIG.2. For example, the process500may be implemented using the fusion predictor circuit310ofFIG.3. The process500may be utilized to facilitate fusion of many different types of sequences of macro-ops, including sequences that may lack a control-flow instruction. The process500includes detecting510a prefix of the sequence of macro-ops in an instruction decode buffer (e.g., the instruction decode buffer120). For example, where an instruction decoder is configured to detect a sequence of macro-op instructions that includes instructions 1 through N (e.g., N=2, 3, 4, or 5) when it occurs in the instruction decode buffer, prefixes including the one or more macro-op instructions 1 through m, where 1<=m<N, may be detected510when they occur in the instruction decode buffer. For example, detecting510the prefix may include detecting a sequence of opcodes as portions of the respective macro-ops of the prefix. The process500includes determining520a prediction of whether the sequence of macro-ops will be completed in a next fetch of macro-ops from memory and fused. For example, the prediction may be determined520using a table of prediction counters that is maintained by a fusion predictor circuit. The prediction counters may serve as estimates of a likelihood that a prefix will be part of a sequence of macro-ops that is completed and fused. For example, the prediction counters may be K bit counters with K>1 (e.g., K=2) to provide some hysteresis. For example, a prediction may be determined520as yes or true if a corresponding prediction counter has a current value >= 2^(K-1) (e.g., the most significant bit of the counter is a one), and determined520as no or false otherwise. In some implementations, the table of prediction counters is indexed by a program counter. In some implementations, the table of prediction counters is indexed by a hash of a program counter and program counter history. In some implementations, the table of prediction counters is tagged with program counter values. For example, a program counter used to index the table of prediction counters can be that used to fetch the last group of instructions, or the program counter of the potential fusion prefix, or the program counter of the next group to be fetched.
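The decision flow of process 500 can be summarized in a few lines of code. The following C++ sketch is purely illustrative (the step functions are invented stand-ins for hardware behavior, with trivial bodies so the sketch compiles and runs); it shows the choice between eagerly issuing the prefix and waiting for the next fetch, followed by the counter update, and reflects that a misprediction needs no special recovery.

    #include <cstdio>

    // Illustrative stand-ins for the hardware steps of process 500; each stub
    // returns a fixed value here purely so the sketch compiles and runs.
    bool prefixDetected()            { return true;  }   // step 510
    bool predictFusionBeneficial()   { return true;  }   // step 520, e.g., counter MSB
    void commencePrefixExecution()   { std::puts("issue prefix now"); }        // step 530
    void fetchNextBatch()            { std::puts("fetch next macro-ops"); }    // steps 532/542
    bool completeSequenceDetected()  { return true;  }   // step 545
    void fuseSequenceIntoMicroOp()   { std::puts("fuse into one micro-op"); }  // step 548
    void executePrefixNormally()     { std::puts("execute delayed prefix"); }
    void updatePredictionCounter(bool fusedSuccessfully) {                      // step 550
        std::printf("update counter: %s\n", fusedSuccessfully ? "increment" : "decrement");
    }

    int main() {
        if (!prefixDetected()) return 0;

        bool fused = false;
        if (!predictFusionBeneficial()) {
            commencePrefixExecution();   // eager path: do not wait for the tail
            fetchNextBatch();
        } else {
            fetchNextBatch();            // delay path: hold the prefix in decode
            if (completeSequenceDetected()) {
                fuseSequenceIntoMicroOp();
                fused = true;
            } else {
                executePrefixNormally(); // misprediction needs no recovery beyond this
            }
        }
        updatePredictionCounter(fused);
        return 0;
    }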
In some implementations, the table of prediction counters is tagless where the entries are used without considering a program counter. The process500includes, if (at operation525) no fusion is predicted to occur, then execution of the prefix is commenced530prior to fetching532a next batch of one or more macro-ops. For example, the commencing530execution of the prefix may include forwarding a micro-op version of a macro-op of the prefix to one or more execution resources for execution. The process500includes, if (at operation525) a fusion is predicted to occur, based on the prediction, delaying540execution of the prefix until after a next fetch to enable fusion of the sequence of macro-ops. For example, the delaying540execution may include holding the one or more macro-ops of the prefix in a decode stage of a pipeline for multiple clock cycles. After fetching542a next batch of one or more macro-ops, if (at operation545) the complete sequence of macro-ops is detected, then the complete sequence of macro-ops, including the prefix, is fused548to form a single micro-op for execution. For example, the sequence of macro-ops may be fused548using the process400ofFIG.4. If (at operation545) the complete sequence of macro-ops is not detected, then execution proceeds as normal, starting with the delayed540instructions of the prefix. The process500includes maintaining a table of prediction counters that is used for determining520predictions. For example, the process500includes updating550the table of prediction counters after detecting510a prefix and fetching (532or542) a next batch of one or more macro-ops. For example, the table of prediction counters may be updated550based on whether the sequence of macro-ops is completed by the next fetch of macro-ops from memory. For example, the table of prediction counters may be updated550based on whether there are instructions in the next fetch that depend on instructions in the prefix. For example, the table of prediction counters may be updated550based on whether fusion would prevent parallel issue of instructions that follow the fusible sequence in the next fetch group. While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
11861366
DETAILED DESCRIPTION Recent advances in materials, devices, and integration technology can be leveraged to provide memory-centric compute topologies. Such topologies can realize advances in compute efficiency and workload throughput, for example, for applications constrained by size, weight, or power requirements. The topologies can be used to facilitate low-latency compute near, or inside of, memory or other data storage elements. The approaches can be particularly well-suited for various compute-intensive operations with sparse lookups, such as in transform computations (e.g., fast Fourier transform computations (FFT)), or in applications such as neural networks or artificial intelligence (AI), financial analytics, or simulations or modeling such as for computational fluid dynamics (CFD), Enhanced Acoustic Simulator for Engineers (EASE), Simulation Program with Integrated Circuit Emphasis (SPICE), and others. Systems, devices, and methods discussed herein can include or use memory-compute systems with processors, or processing capabilities, that are provided in, near, or integrated with memory or data storage components. Such systems are referred to generally herein as compute-near-memory (CNM) systems. A CNM system can be a node-based system with individual nodes in the system coupled using a system scale fabric. Each node can include or use specialized or general purpose processors, and user-accessible accelerators, with a custom compute fabric to facilitate intensive operations, particularly in environments where high cache miss rates are expected. In an example, each node in a CNM system can have a host processor or processors. Within each node, a dedicated hybrid threading processor can occupy a discrete endpoint of an on-chip network. The hybrid threading processor can have access to some or all of the memory in a particular node of the system, or a hybrid threading processor can have access to memories across a network of multiple nodes via the system scale fabric. The custom compute fabric, or hybrid threading fabric, at each node can have its own processor(s) or accelerator(s) and can operate at higher bandwidth than the hybrid threading processor. Different nodes in a compute-near-memory system can be differently configured, such as having different compute capabilities, different types of memories, different interfaces, or other differences. However, the nodes can be commonly coupled to share data and compute resources within a defined address space. In an example, a compute-near-memory system, or a node within the system, can be user-configured for custom operations (also called instructions). A user can provide instructions using a high-level programming language, such as C/C++, that can be compiled and mapped directly into a dataflow architecture of the system, or of one or more nodes in the CNM system. That is, the nodes in the system can include hardware blocks (e.g., memory controllers, atomic units, other custom accelerators, etc.) that can be configured to directly implement or support user instructions to thereby enhance system performance and reduce latency. In an example, a compute-near-memory system can be particularly suited for implementing a hierarchy of instructions and nested loops (e.g., two, three, or more, loops deep, or multiple-dimensional loops). A standard compiler can be used to accept high-level language instructions and, in turn, compile directly into the dataflow architecture of one or more of the nodes.
For example, a node in the system can include a hybrid threading fabric accelerator. The hybrid threading fabric accelerator can execute in a user space of the CNM system and can initiate its own threads or sub-threads, which can operate in parallel. Each thread can map to a different loop iteration to thereby support multi-dimensional loops. With the capability to initiate such nested loops, among other capabilities, the CNM system can realize significant time savings and latency improvements for compute-intensive operations. A compute-near-memory system, or nodes or components of a compute-near-memory system, can include or use various memory devices, controllers, and interconnects, among other things. In an example, the system can comprise various interconnected nodes and the nodes, or groups of nodes, can be implemented using chiplets. Chiplets are an emerging technique for integrating various processing functionality. Generally, a chiplet system is made up of discrete chips (e.g., integrated circuits (ICs) on different substrate or die) that are integrated on an interposer and packaged together. This arrangement is distinct from single chips (e.g., ICs) that contain distinct device blocks (e.g., intellectual property (IP) blocks) on one substrate (e.g., single die), such as a system-on-a-chip (SoC), or discretely packaged devices integrated on a board. In general, chiplets provide production benefits over single die chips, including higher yields or reduced development costs. FIG.6A and FIG.6B, discussed below, illustrate generally an example of a chiplet system such as can comprise a compute-near-memory system. A Coarse Grain Reconfigurable Array (CGRA) is an array of a large number of processing elements (also called tiles) interconnected by a mesh network. The Hybrid Threading Fabric (HTF) is a type of Coarse Grain Reconfigurable Array (CGRA) in which the tiles are connected in a grid configuration with each tile connected to its horizontal and vertical neighbors. The processing elements (PE) of each tile in the HTF may include one or more multiply/shift units and one or more arithmetic/logic units (ALUs). Each tile is scheduled in a modulo fashion that can be thought of as time-slicing the PE. Each time slice is referred to as a “spoke.” The number of spokes is called an initiation interval, and is the number of commands that can be in the pipeline simultaneously. The operation of the PE takes multiple clock cycles to perform a task, with the number of cycles depending on the pipeline depth. For example, on a four stage pipeline, data starting in the first stage moves through progressive stages until after the fourth stage, the data goes to another tile, the same tile, or the memory interface. Additionally, on each successive clock, the internal PE pipeline either transitions to the next stage or goes to a different tile or to external memory. For example, the data for the first spoke moves to the second stage and data for the second spoke can begin stage zero. In architectures such as the HTF, the number of spokes (the initiation interval) is configurable on a per-tile basis. That is, for a given workload, the number of spokes may be adjusted for the workload. For certain workloads, such as nested loops, the inventors have realized that increased efficiency may be attained by altering the initiation interval of a plurality of PEs by making a first initiation interval of a first PE a multiple of a second initiation interval of a second PE.
For example, because an outer loop of a nested loop may be executed at a lower frequency than the inner loop, assigning the inner loop to a PE with a lower initiation interval means that the instructions of the inner loop will execute faster than the instructions of the outer loop. This reduces time that the PE would otherwise wait for execution of loop instructions by optimizing the execution frequency of each component of the nested loop. Disclosed in some examples are methods, systems, devices, and machine-readable mediums which provide for more efficient CGRA execution by assigning different initiation intervals to different PEs executing a same code base. The initiation intervals may be a multiple of each other and the PE with the lowest initiation interval may be used to execute instructions of the code that are to be executed at a greater frequency than other instructions that may be assigned to PEs with higher initiation intervals.
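The initiation-interval relationship described above can be illustrated with a small scheduling model. The following C++ sketch is hypothetical (the tile structure, cycle loop, and chosen intervals are invented for illustration, not taken from the HTF design); it shows an inner-loop tile with an initiation interval of 1 issuing a command every cycle while an outer-loop tile, given an initiation interval that is a multiple of the inner one, issues correspondingly less often.

    #include <cstdio>

    // Illustrative model of a tile whose PE is time-sliced into "spokes":
    // a new command may be issued only when cycle % initiationInterval == 0.
    struct Tile {
        const char* name;
        int initiationInterval;  // number of spokes for this tile
        int issued = 0;

        void tick(int cycle) {
            if (cycle % initiationInterval == 0) {
                ++issued;  // one command enters this tile's pipeline this cycle
            }
        }
    };

    int main() {
        // Inner loop body mapped to a tile with II = 1; outer loop bookkeeping
        // mapped to a tile with II = 4, a multiple of the inner interval, so the
        // outer instructions run at one quarter of the inner-loop frequency.
        Tile inner{"inner-loop tile", 1};
        Tile outer{"outer-loop tile", 4};

        const int cycles = 16;
        for (int c = 0; c < cycles; ++c) {
            inner.tick(c);
            outer.tick(c);
        }
        std::printf("%s issued %d commands in %d cycles\n", inner.name, inner.issued, cycles);
        std::printf("%s issued %d commands in %d cycles\n", outer.name, outer.issued, cycles);
        return 0;
    }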
In an example, the first switch110can be configured to couple differently configured endpoints. For example, the first switch110can be configured to convert packet formats, such as between PCIe and CPI formats, among others. The CNM system102is described herein in various example configurations, such as comprising a system of nodes, and each node can comprise various chips (e.g., a processor, a switch, a memory device, etc.). In an example, the first memory-compute node104in the CNM system102can include various chips implemented using chiplets. In the below-discussed chiplet-based configuration of the CNM system102, inter-chiplet communications, as well as additional communications within the system, can use a CPI network. The CPI network described herein is an example of the CTCPI, that is, as a chiplet-specific implementation of the CTCPI. As a result, the below-described structure, operations, and functionality of CPI can apply equally to structures, operations, and functions as may be otherwise implemented using non-chiplet-based CTCPI implementations. Unless expressly indicated otherwise, any discussion herein of CPI applies equally to CTCPI. A CPI interface includes a packet-based network that supports virtual channels to enable a flexible and high-speed interaction between chiplets, such as can comprise portions of the first memory-compute node104or the CNM system102. The CPI can enable bridging from intra-chiplet networks to a broader chiplet network. For example, the Advanced eXtensible Interface (AXI) is a specification for intra-chip communications. AXI specifications, however, cover a variety of physical design options, such as the number of physical channels, signal timing, power, etc. Within a single chip, these options are generally selected to meet design goals, such as power consumption, speed, etc. However, to achieve the flexibility of a chiplet-based memory-compute system, an adapter, such as using CPI, can interface between the various AXI design options that can be implemented in the various chiplets. By enabling a physical channel-to-virtual channel mapping and encapsulating time-based signaling with a packetized protocol, CPI can be used to bridge intra-chiplet networks, such as within a particular memory-compute node, across a broader chiplet network, such as across the first memory-compute node104or across the CNM system102. The CNM system102is scalable to include multiple-node configurations. That is, multiple different instances of the first memory-compute node104, or of other differently configured memory-compute nodes, can be coupled using the scale fabric106, to provide a scaled system. Each of the memory-compute nodes can run its own operating system and can be configured to jointly coordinate system-wide resource usage. In the example ofFIG.1, the first switch110of the first memory-compute node104is coupled to the scale fabric106. The scale fabric106can provide a switch (e.g., a CTCPI switch, a PCIe switch, a CPI switch, or other switch) that can facilitate communication among and between different memory-compute nodes. In an example, the scale fabric106can help various nodes communicate in a partitioned global address space (PGAS). In an example, the first switch110from the first memory-compute node104is coupled to one or multiple different memory-compute devices, such as including the first memory-compute device112. The first memory-compute device112can comprise a chiplet-based architecture referred to herein as a compute-near-memory (CNM) chiplet. 
A packaged version of the first memory-compute device112can include, for example, one or multiple CNM chiplets. The chiplets can be communicatively coupled using CTCPI for high bandwidth and low latency. In the example ofFIG.1, the first memory-compute device112can include a network on chip (NOC) or first NOC118. Generally, a NOC is an interconnection network within a device, connecting a particular set of endpoints. InFIG.1, the first NOC118can provide communications and connectivity between the various memory, compute resources, and ports of the first memory-compute device112. In an example, the first NOC118can comprise a folded Clos topology, such as within each instance of a memory-compute device, or as a mesh that couples multiple memory-compute devices in a node. The Clos topology, such as can use multiple, smaller radix crossbars to provide functionality associated with a higher radix crossbar topology, offers various benefits. For example, the Clos topology can exhibit consistent latency and bisection bandwidth across the NOC. The first NOC118can include various distinct switch types including hub switches, edge switches, and endpoint switches. Each of the switches can be constructed as crossbars that provide substantially uniform latency and bandwidth between input and output nodes. In an example, the endpoint switches and the edge switches can include two separate crossbars, one for traffic headed to the hub switches, and the other for traffic headed away from the hub switches. The hub switches can be constructed as a single crossbar that switches all inputs to all outputs. In an example, the hub switches can have multiple ports each (e.g., four or six ports each), such as depending on whether the particular hub switch participates in inter-chip communications. A number of hub switches that participates in inter-chip communications can be set by an inter-chip bandwidth requirement. The first NOC118can support various payloads (e.g., from 8 to 64-byte payloads; other payload sizes can similarly be used) between compute elements and memory. In an example, the first NOC118can be optimized for relatively smaller payloads (e.g., 8-16 bytes) to efficiently handle access to sparse data structures. In an example, the first NOC118can be coupled to an external host via a first physical-layer interface114, a PCIe subordinate module116or endpoint, and a PCIe principal module126or root port. That is, the first physical-layer interface114can include an interface to allow an external host processor to be coupled to the first memory-compute device112. An external host processor can optionally be coupled to one or multiple different memory-compute devices, such as using a PCIe switch or other, native protocol switch. Communication with the external host processor through a PCIe-based switch can limit device-to-device communication to that supported by the switch. Communication through a memory-compute device-native protocol switch such as using CTCPI, in contrast, can allow for more full communication between or among different memory-compute devices, including support for a partitioned global address space, such as for creating threads of work and sending events. In an example, the CTCPI protocol can be used by the first NOC118in the first memory-compute device112, and the first switch110can include a CTCPI switch. 
The CTCPI switch can allow CTCPI packets to be transferred from a source memory-compute device, such as the first memory-compute device112, to a different, destination memory-compute device (e.g., on the same or other node), such as without being converted to another packet format. In an example, the first memory-compute device112can include an internal host processor122. The internal host processor122can be configured to communicate with the first NOC118or other components or modules of the first memory-compute device112, for example, using the internal PCIe principal module126, which can help eliminate a physical layer that would consume time and energy. In an example, the internal host processor122can be based on a RISC-V ISA processor, and can use the first physical-layer interface114to communicate outside of the first memory-compute device112, such as to other storage, networking, or other peripherals to the first memory-compute device112. The internal host processor122can control the first memory-compute device112and can act as a proxy for operating system-related functionality. The internal host processor122can include a relatively small number of processing cores (e.g., 2-4 cores) and a host memory device124(e.g., comprising a DRAM module). In an example, the internal host processor122can include PCI root ports. When the internal host processor122is in use, then one of its root ports can be connected to the PCIe subordinate module116. Another of the root ports of the internal host processor122can be connected to the first physical-layer interface114, such as to provide communication with external PCI peripherals. When the internal host processor122is disabled, then the PCIe subordinate module116can be coupled to the first physical-layer interface114to allow an external host processor to communicate with the first NOC118. In an example of a system with multiple memory-compute devices, the first memory-compute device112can be configured to act as a system host or controller. In this example, the internal host processor122can be in use, and other instances of internal host processors in the respective other memory-compute devices can be disabled. The internal host processor122can be configured at power-up of the first memory-compute device112, such as to allow the host to initialize. In an example, the internal host processor122and its associated data paths (e.g., including the first physical-layer interface114, the PCIe subordinate module116, etc.) can be configured from input pins to the first memory-compute device112. One or more of the pins can be used to enable or disable the internal host processor122and configure the PCI (or other) data paths accordingly. In an example, the first NOC118can be coupled to the scale fabric106via a scale fabric interface module136and a second physical-layer interface138. The scale fabric interface module136, or SIF, can facilitate communication between the first memory-compute device112and a device space, such as a partitioned global address space (PGAS). The PGAS can be configured such that a particular memory-compute device, such as the first memory-compute device112, can access memory or other resources on a different memory-compute device (e.g., on the same or different node), such as using a load/store paradigm. Various scalable fabric technologies can be used, including CTCPI, CPI, Gen-Z, PCI, or Ethernet bridged over CXL. The scale fabric106can be configured to support various packet formats. 
In an example, the scale fabric106supports orderless packet communications, or supports ordered packets such as can use a path identifier to spread bandwidth across multiple equivalent paths. The scale fabric106can generally support remote operations such as remote memory read, write, and other built-in atomics, remote memory atomics, remote memory-compute device send events, and remote memory-compute device call and return operations. In an example, the first NOC118can be coupled to one or multiple different memory modules, such as including a first memory device128. The first memory device128can include various kinds of memory devices, for example, LPDDR5 or GDDR6, among others. In the example ofFIG.1, the first NOC118can coordinate communications with the first memory device128via a memory controller130that can be dedicated to the particular memory module. In an example, the memory controller130can include a memory module cache and an atomic operations module. The atomic operations module can be configured to provide relatively high-throughput atomic operators, such as including integer and floating-point operators. The atomic operations module can be configured to apply its operators to data within the memory module cache (e.g., comprising SRAM memory side cache), thereby allowing back-to-back atomic operations using the same memory location, with minimal throughput degradation. The memory module cache can provide storage for frequently accessed memory locations, such as without having to re-access the first memory device128. In an example, the memory module cache can be configured to cache data only for a particular instance of the memory controller130. In an example, the memory controller130includes a DRAM controller configured to interface with the first memory device128, such as including DRAM devices. The memory controller130can provide access scheduling and bit error management, among other functions. In an example, the first NOC118can be coupled to a hybrid threading processor (HTP140), a hybrid threading fabric (HTF142) and a host interface and dispatch module (HIF120). The HIF120can be configured to facilitate access to host-based command request queues and response queues. In an example, the HIF120can dispatch new threads of execution on processor or compute elements of the HTP140or the HTF142. In an example, the HIF120can be configured to maintain workload balance across the HTP140module and the HTF142module. The hybrid threading processor, or HTP140, can include an accelerator, such as can be based on a RISC-V instruction set. The HTP140can include a highly threaded, event-driven processor in which threads can be executed in single instruction rotation, such as to maintain high instruction throughput. The HTP140comprises relatively few custom instructions to support low-overhead threading capabilities, event send/receive, and shared memory atomic operators. The hybrid threading fabric, or HTF142, can include an accelerator, such as can include a non-von Neumann, coarse-grained, reconfigurable processor. The HTF142can be optimized for high-level language operations and data types (e.g., integer or floating point). In an example, the HTF142can support data flow computing. The HTF142can be configured to use substantially all of the memory bandwidth available on the first memory-compute device112, such as when executing memory-bound compute kernels. The HTP and HTF accelerators of the CNM system102can be programmed using various high-level, structured programming languages. 
For example, the HTP and HTF accelerators can be programmed using C/C++, such as using the LLVM compiler framework. The HTP accelerator can leverage an open source compiler environment, such as with various added custom instruction sets configured to improve memory access efficiency, provide a message passing mechanism, and manage events, among other things. In an example, the HTF accelerator can be designed to enable programming of the HTF142using a high-level programming language, and the compiler can generate a simulator configuration file or a binary file that runs on the HTF142hardware. The HTF142can provide a mid-level language for expressing algorithms precisely and concisely, while hiding configuration details of the HTF accelerator itself. In an example, the HTF accelerator tool chain can use an LLVM front-end compiler and the LLVM intermediate representation (IR) to interface with an HTF accelerator back end. FIG.2illustrates generally an example of a memory subsystem200of a memory-compute device, according to an embodiment. The example of the memory subsystem200includes a controller202, a programmable atomic unit208, and a second NOC206. The controller202can include or use the programmable atomic unit208to carry out operations using information in a memory device204. In an example, the memory subsystem200comprises a portion of the first memory-compute device112from the example ofFIG.1, such as including portions of the first NOC118or of the memory controller130. In the example ofFIG.2, the second NOC206is coupled to the controller202and the controller202can include a memory control module210, a local cache module212, and a built-in atomics module214. In an example, the built-in atomics module214can be configured to handle relatively simple, single-cycle, integer atomics. The built-in atomics module214can perform atomics at the same throughput as, for example, normal memory read or write operations. In an example, an atomic memory operation can include a combination of storing data to the memory, performing an atomic memory operation, and then responding with load data from the memory. The local cache module212, such as can include an SRAM cache, can be provided to help reduce latency for repetitively-accessed memory locations. In an example, the local cache module212can provide a read buffer for sub-memory line accesses. The local cache module212can be particularly beneficial for compute elements that have relatively small or no data caches. The memory control module210, such as can include a DRAM controller, can provide low-level request buffering and scheduling, such as to provide efficient access to the memory device204, such as can include a DRAM device. In an example, the memory device204can include or use a GDDR6 DRAM device, such as having 16 Gb density and 64 Gb/sec peak bandwidth. Other devices can similarly be used. In an example, the programmable atomic unit208can comprise single-cycle or multiple-cycle operator such as can be configured to perform integer addition or more complicated multiple-instruction operations such as bloom filter insert. In an example, the programmable atomic unit208can be configured to perform load and store-to-memory operations. The programmable atomic unit208can be configured to leverage the RISC-V ISA with a set of specialized instructions to facilitate interactions with the controller202to atomically perform user-defined operations. 
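As an illustration of the kind of user-defined operation the programmable atomic unit208can perform, the following C++ sketch expresses a simple fetch-and-add operator as the read-modify-write sequence described above. The helper routines and the host-side simulated memory are hypothetical placeholders standing in for the specialized instructions and the interaction with the controller202; they are illustrative only and do not represent an actual PAU API.

#include <cstdint>
#include <cstdio>
#include <unordered_map>

// Host-side stand-ins for the PAU's memory path; in hardware these would be
// the specialized load/store instructions acting on the memory-side cache.
static std::unordered_map<uint64_t, uint64_t> simulated_memory;

static uint64_t pau_load(uint64_t address) { return simulated_memory[address]; }
static void pau_store(uint64_t address, uint64_t value) { simulated_memory[address] = value; }

// Sketch of a user-defined fetch-and-add operator. On the PAU, the entire
// read-modify-write sequence executes atomically with respect to other
// requests for the same memory location, so no explicit locking is shown.
uint64_t custom_fetch_and_add(uint64_t address, uint64_t increment) {
    const uint64_t old_value = pau_load(address);   // read the current value
    pau_store(address, old_value + increment);      // write back the incremented value
    return old_value;                               // respond with the original value
}

int main() {
    pau_store(0x1000, 41);
    printf("previous value: %llu\n", (unsigned long long)custom_fetch_and_add(0x1000, 1));
    printf("new value: %llu\n", (unsigned long long)pau_load(0x1000));
    return 0;
}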
Programmable atomic requests, such as received from an on-node or off-node host, can be routed to the programmable atomic unit208via the second NOC206and the controller202. In an example, custom atomic operations (e.g., carried out by the programmable atomic unit208) can be identical to built-in atomic operations (e.g., carried out by the built-in atomics module214) except that a programmable atomic operation can be defined or programmed by the user rather than the system architect. In an example, programmable atomic request packets can be sent through the second NOC206to the controller202, and the controller202can identify the request as a custom atomic. The controller202can then forward the identified request to the programmable atomic unit208. FIG.3illustrates generally an example of a programmable atomic unit302for use with a memory controller, according to an embodiment. In an example, the programmable atomic unit302can comprise or correspond to the programmable atomic unit208from the example ofFIG.2. That is,FIG.3illustrates components in an example of a programmable atomic unit302(PAU), such as those noted above with respect toFIG.2(e.g., in the programmable atomic unit208), or toFIG.1(e.g., in an atomic operations module of the memory controller130). As illustrated inFIG.3, the programmable atomic unit302includes a PAU processor or PAU core306, a PAU thread control304, an instruction SRAM308, a data cache310, and a memory interface312to interface with the memory controller314. In an example, the memory controller314comprises an example of the controller202from the example ofFIG.2. In an example, the PAU core306is a pipelined processor such that multiple stages of different instructions are executed together per clock cycle. The PAU core306can include a barrel-multithreaded processor, with thread control304circuitry to switch between different register files (e.g., sets of registers containing current processing state) upon each clock cycle. This enables efficient context switching between currently executing threads. In an example, the PAU core306supports eight threads, resulting in eight register files. In an example, some or all of the register files are not integrated into the PAU core306, but rather reside in a local data cache310or the instruction SRAM308. This reduces circuit complexity in the PAU core306by eliminating the traditional flip-flops used for registers in such memories. The local PAU memory can include instruction SRAM308, such as can include instructions for various atomics. The instructions comprise sets of instructions to support various application-loaded atomic operators. When an atomic operator is requested, such as by an application chiplet, a set of instructions corresponding to the atomic operator is executed by the PAU core306. In an example, the instruction SRAM308can be partitioned to establish the sets of instructions. In this example, a requesting process can identify the specific programmable atomic operator being requested by its partition number. The partition number can be established when the programmable atomic operator is registered with (e.g., loaded onto) the programmable atomic unit302. Other metadata for the programmable instructions can be stored in memory (e.g., in partition tables) local to the programmable atomic unit302. In an example, atomic operators manipulate the data cache310, which is generally synchronized (e.g., flushed) when a thread for an atomic operator completes. 
Thus, aside from initial loading from the external memory, such as from the memory controller314, latency can be reduced for most memory operations during execution of a programmable atomic operator thread. A pipelined processor, such as the PAU core306, can experience an issue when an executing thread attempts to issue a memory request if an underlying hazard condition would prevent such a request. Here, the memory request is to retrieve data from the memory controller314, whether it be from a cache on the memory controller314or off-die memory. To resolve this issue, the PAU core306is configured to deny the memory request for a thread. Generally, the PAU core306or the thread control304can include circuitry to enable one or more thread rescheduling points in the pipeline. Here, the denial occurs at a point in the pipeline that is beyond (e.g., after) these thread rescheduling points. In an example, the hazard occurred beyond the rescheduling point. Here, a preceding instruction in the thread created the hazard after the memory request instruction passed the last thread rescheduling point prior to the pipeline stage in which the memory request could be made. In an example, to deny the memory request, the PAU core306is configured to determine (e.g., detect) that there is a hazard on memory indicated in the memory request. Here, hazard denotes any condition such that allowing (e.g., performing) the memory request will result in an inconsistent state for the thread. In an example, the hazard is an in-flight memory request. Here, whether or not the data cache310includes data for the requested memory address, the presence of the in-flight memory request makes it uncertain what the data in the data cache310at that address should be. Thus, the thread must wait for the in-flight memory request to be completed to operate on current data. The hazard is cleared when the memory request completes. In an example, the hazard is a dirty cache line in the data cache310for the requested memory address. Although the dirty cache line generally indicates that the data in the cache is current and the memory controller version of this data is not, an issue can arise on thread instructions that do not operate from the cache. An example of such an instruction uses a built-in atomic operator, or other separate hardware block, of the memory controller314. In the context of a memory controller, the built-in atomic operators can be separate from the programmable atomic unit302and do not have access to the data cache310or instruction SRAM308inside the PAU. If the cache line is dirty, then the built-in atomic operator will not be operating on the most current data until the data cache310is flushed to synchronize the cache and the other or off-die memories. This same situation could occur with other hardware blocks of the memory controller, such as cryptography block, encoder, etc. FIG.4illustrates an example of a hybrid threading processor (HTP) accelerator, or HTP accelerator400. The HTP accelerator400can comprise a portion of a memory-compute device, according to an embodiment. In an example, the HTP accelerator400can include or comprise the HTP140from the example ofFIG.1. The HTP accelerator400includes, for example, a HTP core402, an instruction cache404, a data cache406, a translation block408, a memory interface410, and a thread controller412. 
The HTP accelerator400can further include a dispatch interface414and a NOC interface416, such as for interfacing with a NOC such as the first NOC118from the example ofFIG.1, the second NOC206from the example ofFIG.2, or other NOC. In an example, the HTP accelerator400includes a module that is based on a RISC-V instruction set, and can include a relatively small number of other or additional custom instructions to support a low-overhead, threading-capable Hybrid Threading (HT) language. The HTP accelerator400can include a highly-threaded processor core, the HTP core402, in which, or with which, threads can be executed in a single instruction rotation, such as to maintain high instruction throughput. In an example, a thread can be paused when it waits for other, pending events to complete. This can allow the compute resources to be efficiently used on relevant work instead of polling. In an example, multiple-thread barrier synchronization can use efficient HTP-to-HTP and HTP-to/from-Host messaging, such as can allow thousands of threads to initialize or wake in, for example, tens of clock cycles. In an example, the dispatch interface414can comprise a functional block of the HTP accelerator400for handling hardware-based thread management. That is, the dispatch interface414can manage dispatch of work to the HTP core402or other accelerators. Non-HTP accelerators, however, are generally not able to dispatch work. In an example, work dispatched from a host can use dispatch queues that reside in, e.g., host main memory (e.g., DRAM-based memory). Work dispatched from the HTP accelerator400, on the other hand, can use dispatch queues that reside in SRAM, such as within the dispatches for the target HTP accelerator400within a particular node. In an example, the HTP core402can comprise one or more cores that execute instructions on behalf of threads. That is, the HTP core402can include an instruction processing block. The HTP core402can further include, or can be coupled to, the thread controller412. The thread controller412can provide thread control and state for each active thread within the HTP core402. The data cache406can include cache for a host processor (e.g., for local and remote memory-compute devices, including for the HTP core402), and the instruction cache404can include cache for use by the HTP core402. In an example, the data cache406can be configured for read and write operations, and the instruction cache404can be configured for read only operations. In an example, the data cache406is a small cache provided per hardware thread. The data cache406can temporarily store data for use by the owning thread. The data cache406can be managed by hardware or software in the HTP accelerator400. For example, hardware can be configured to automatically allocate or evict lines as needed, as load and store operations are executed by the HTP core402. Software, such as using RISC-V instructions, can determine which memory accesses should be cached, and when lines should be invalidated or written back to other memory locations. Data caching on the HTP accelerator400has various benefits, including making larger accesses more efficient for the memory controller, allowing an executing thread to avoid stalling. However, there are situations when using the cache causes inefficiencies. An example includes accesses where data is accessed only once, and causes thrashing of the cache lines. 
To help address this problem, the HTP accelerator400can use a set of custom load instructions to force a load instruction to check for a cache hit, and on a cache miss to issue a memory request for the requested operand and not put the obtained data in the data cache406. The HTP accelerator400thus includes various different types of load instructions, including non-cached and cache line loads. The non-cached load instructions use the cached data if dirty data is present in the cache. The non-cached load instructions ignore clean data in the cache, and do not write accessed data to the data cache. For cache line load instructions, the complete data cache line (e.g., comprising 64 bytes) can be loaded from memory into the data cache406, and can load the addressed memory into a specified register. These loads can use the cached data if clean or dirty data is in the data cache406. If the referenced memory location is not in the data cache406, then the entire cache line can be accessed from memory. Use of the cache line load instructions can reduce cache misses when sequential memory locations are being referenced (such as memory copy operations) but can also waste memory and bandwidth at the NOC interface416if the referenced memory data is not used. In an example, the HTP accelerator400includes a custom store instruction that is non-cached. The non-cached store instruction can help avoid thrashing the data cache406with write data that is not sequentially written to memory. In an example, the HTP accelerator400further includes a translation block408. The translation block408can include a virtual-to-physical translation block for local memory of a memory-compute device. For example, a host processor, such as in the HTP core402, can execute a load or store instruction, and the instruction can generate a virtual address. The virtual address can be translated to a physical address of the host processor, such as using a translation table from the translation block408. The memory interface410, for example, can include an interface between the HTP core402and the NOC interface416. FIG.5illustrates an example of a representation of a hybrid threading fabric (HTF), or HTF500, of a memory-compute device, according to an embodiment. In an example, the HTF500can include or comprise the HTF142from the example ofFIG.1. The HTF500is a coarse-grained, reconfigurable compute fabric that can be optimized for high-level language operand types and operators (e.g., using C/C++ or other high-level language). In an example, the HTF500can include configurable, n-bit wide (e.g., 512-bit wide) data paths that interconnect hardened SIMD arithmetic units. In an example, the HTF500comprises an HTF cluster502that includes multiple HTF tiles, including an example tile504, or Tile N. Each HTF tile can include one or more compute elements with local memory and arithmetic functions. For example, each tile can include a compute pipeline with support for integer and floating-point operations. In an example, the data path, compute elements, and other infrastructure can be implemented as hardened IP to provide maximum performance while minimizing power consumption and reconfiguration time. In the example ofFIG.5, the tiles comprising the HTF cluster502are linearly arranged, and each tile in the cluster can be coupled to one or multiple other tiles in the HTF cluster502. 
In the example ofFIG.5, the example tile504, or Tile N, is coupled to four other tiles, including to a base tile510(e.g., Tile N−2) via the port labeled SF IN N−2, to an adjacent tile512(e.g., Tile N−1) via the port labeled SF IN N−1, and to a Tile N+1 via the port labeled SF IN N+1 and to a Tile N+2 via the port labeled SF IN N+2. The example tile504can be coupled to the same or other tiles via respective output ports, such as those labeled SF OUT N−1, SF OUT N−2, SF OUT N+1, and SF OUT N+2. In this example, the ordered list of names for the various tiles are notional indications of the positions of the tiles. In other examples, the tiles comprising the HTF cluster502can be arranged in a grid or other configuration, with each tile similarly coupled to one or several of its nearest neighbors in the grid. Tiles that are provided at an edge of a cluster can optionally have fewer connections to neighboring tiles. For example, Tile N−2, or the base tile510in the example ofFIG.5, can be coupled only to the adjacent tile512(Tile N−1) and to the example tile504(Tile N). Fewer or additional inter-tile connections can similarly be used. The HTF cluster502can further include memory interface modules, including a first memory interface module506. The memory interface modules can couple the HTF cluster502to a NOC, such as the first NOC118. In an example, the memory interface modules can allow tiles within a cluster to make requests to other locations in a memory-compute system, such as in the same or different node in the system. That is, the representation of the HTF500can comprise a portion of a larger fabric that can be distributed across multiple nodes, such as with one or more HTF tiles or HTF clusters at each of the nodes. Requests can be made between tiles or nodes within the context of the larger fabric. In the example ofFIG.5, the tiles in the HTF cluster502are coupled using a synchronous fabric (SF). The synchronous fabric can provide communication between a particular tile and its neighboring tiles in the HTF cluster502, as described above. Each HTF cluster502can further include an asynchronous fabric (AF) that can provide communication among, e.g., the tiles in the cluster, the memory interfaces in the cluster, and a dispatch interface508in the cluster. In an example, the synchronous fabric can exchange messages that include data and control information. The control information can include, among other things, instruction RAM address information or a thread identifier. The control information can be used to set up a data path, and a data message field can be selected as a source for the path. Generally, the control fields can be provided or received earlier, such that they can be used to configure the data path. For example, to help minimize any delay through the synchronous domain pipeline in a tile, the control information can arrive at a tile a few clock cycles before the data field. Various registers can be provided to help coordinate dataflow timing in the pipeline. In an example, each tile in the HTF cluster502can include multiple memories. Each memory can have the same width as the data path (e.g., 512 bits) and can have a specified depth, such as in a range of 512 to 1024 elements. The tile memories can be used to store data that supports data path operations. The stored data can include constants loaded as part of a kernel's cluster configuration, for example, or can include variables calculated as part of the data flow. 
In an example, the tile memories can be written from the asynchronous fabric as a data transfer from another synchronous domain, or can include a result of a load operation such as initiated by another synchronous domain. The tile memory can be read via synchronous data path instruction execution in the synchronous domain. In an example, each tile in an HTF cluster502can have a dedicated instruction RAM (INST RAM). In an example of an HTF cluster502with sixteen tiles, and instruction RAM instances with sixty-four entries, the cluster can allow algorithms to be mapped with up to 1024 multiply-shift and/or ALU operations. The various tiles can optionally be pipelined together, such as using the synchronous fabric, to allow data flow compute with minimal memory access, thus minimizing latency and reducing power consumption. In an example, the asynchronous fabric can allow memory references to proceed in parallel with computation, thereby providing more efficient streaming kernels. In an example, the various tiles can include built-in support for loop-based constructs and can support nested looping kernels. The synchronous fabric can allow multiple tiles to be pipelined, such as without a need for data queuing. Tiles that participate in a synchronous domain can, for example, act as a single pipelined data path. A first or base tile (e.g., Tile N−2, in the example ofFIG.5) of a synchronous domain can initiate a thread of work through the pipelined tiles. The base tile can be responsible for starting work on a predefined cadence referred to herein as a Spoke Count. For example, if the Spoke Count is 3, then the base tile can initiate work every third clock cycle. In an example, the synchronous domain comprises a set of connected tiles in the HTF cluster502. Execution of a thread can begin at the domain's base tile and can progress from the base tile, via the synchronous fabric, to other tiles in the same domain. The base tile can provide the instruction to be executed for the first tile. The first tile can, by default, provide the same instruction for the other connected tiles to execute. However, in some examples, the base tile, or a subsequent tile, can conditionally specify or use an alternative instruction. The alternative instruction can be chosen by having the tile's data path produce a Boolean conditional value, and then can use the Boolean value to choose between an instruction set of the current tile and the alternate instruction. The asynchronous fabric can be used to perform operations that occur asynchronously relative to a synchronous domain. Each tile in the HTF cluster502can include an interface to the asynchronous fabric. The inbound interface can include, for example, a FIFO buffer or queue (e.g., AF IN QUEUE) to provide storage for message that cannot be immediately processed. Similarly, the outbound interface of the asynchronous fabric can include a FIFO buffer or queue (e.g., AF OUT QUEUE) to provide storage for messages that cannot be immediately sent out. In an example, messages in the asynchronous fabric can be classified as data messages or control messages. Data messages can include a SIMD width data value that is written to either tile memory 0 (MEM_0) or memory 1 (MEM_1). Control messages can be configured to control thread creation, to free resources, or to issue external memory references. A tile in the HTF cluster502can perform various compute operations for the HTF. The compute operations can be performed by configuring the data path within the tile. 
In an example, a tile includes two functional blocks that perform the compute operations for the tile: a Multiply and Shift Operation block (MS OP) and an Arithmetic, Logical, and Bit Operation block (ALB OP). The two blocks can be configured to perform pipelined operations such as a Multiply and Add, or a Shift and Add, among others. In an example, each instance of a memory-compute device in a system can have a complete supported instruction set for its operator blocks (e.g., MS OP and ALB OP). In this case, binary compatibility can be realized across all devices in the system. However, in some examples, it can be helpful to maintain a base set of functionality and optional instruction set classes, such as to meet various design tradeoffs, such as die size. The approach can be similar to how the RISC-V instruction set has a base set and multiple optional instruction subsets. In an example, the example tile504can include a Spoke RAM. The Spoke RAM can be used to specify which input (e.g., from among the four SF tile inputs and the base tile input) is the primary input for each clock cycle. The Spoke RAM read address input can originate at a counter that counts from zero to Spoke Count minus one. In an example, different spoke counts can be used on different tiles, such as within the same HTF cluster502, to allow a number of slices, or unique tile instances, used by an inner loop to determine the performance of a particular application or instruction set. In an example, the Spoke RAM can specify when a synchronous input is to be written to a tile memory, for instance when multiple inputs for a particular tile instruction are used and one of the inputs arrives before the others. The early-arriving input can be written to the tile memory and can be later read when all of the inputs are available. In this example, the tile memory can be accessed as a FIFO memory, and FIFO read and write pointers can be stored in a register-based memory region or structure in the tile memory. FIG.6AandFIG.6Billustrate generally an example of a chiplet system that can be used to implement one or more aspects of the CNM system102. As similarly mentioned above, a node in the CNM system102, or a device within a node in the CNM system102, can include a chiplet-based architecture or compute-near-memory (CNM) chiplet. A packaged memory-compute device can include, for example, one, two, or four CNM chiplets. The chiplets can be interconnected using high-bandwidth, low-latency interconnects such as using a CPI interface. Generally, a chiplet system is made up of discrete modules (each a “chiplet”) that are integrated on an interposer and, in many examples, are interconnected as desired through one or more established networks to provide a system with the desired functionality. The interposer and included chiplets can be packaged together to facilitate interconnection with other components of a larger system. Each chiplet can include one or more individual integrated circuits (ICs), or “chips,” potentially in combination with discrete circuit components, and can be coupled to a respective substrate to facilitate attachment to the interposer. Most or all chiplets in a system can be individually configured for communication through established networks. 
The configuration of chiplets as individual modules of a system is distinct from such a system being implemented on single chips that contain distinct device blocks (e.g., intellectual property (IP) blocks) on one substrate (e.g., single die), such as a system-on-a-chip (SoC), or multiple discrete packaged devices integrated on a printed circuit board (PCB). In general, chiplets provide better performance (e.g., lower power consumption, reduced latency, etc.) than discrete packaged devices, and chiplets provide greater production benefits than single die chips. These production benefits can include higher yields or reduced development costs and time. Chiplet systems can include, for example, one or more application (or processor) chiplets and one or more support chiplets. Here, the distinction between application and support chiplets is simply a reference to the likely design scenarios for the chiplet system. Thus, for example, a synthetic vision chiplet system can include, by way of example only, an application chiplet to produce the synthetic vision output along with support chiplets, such as a memory controller chiplet, a sensor interface chiplet, or a communication chiplet. In a typical use case, the synthetic vision designer can design the application chiplet and source the support chiplets from other parties. Thus, the design expenditure (e.g., in terms of time or complexity) is reduced by avoiding the design and production of functionality embodied in the support chiplets. Chiplets also support the tight integration of IP blocks that can otherwise be difficult, such as those manufactured using different processing technologies or using different feature sizes (or utilizing different contact technologies or spacings). Thus, multiple ICs or IC assemblies with different physical, electrical, or communication characteristics can be assembled in a modular manner to provide an assembly with various desired functionalities. Chiplet systems can also facilitate adaptation to suit the needs of different larger systems into which the chiplet system will be incorporated. In an example, ICs or other assemblies that are optimized for the power, speed, or heat generation of a specific function—as can happen with sensors—can be integrated with other devices more easily than attempting to do so on a single die. Additionally, by reducing the overall size of the die, the yield for chiplets tends to be higher than that of more complex, single die devices. FIG.6AandFIG.6Billustrate generally an example of a chiplet system, according to an embodiment.FIG.6Ais a representation of the chiplet system602mounted on a peripheral board604that can be connected to a broader computer system by a peripheral component interconnect express (PCIe) interface, for example. The chiplet system602includes a package substrate606, an interposer608, and four chiplets: an application chiplet610, a host interface chiplet612, a memory controller chiplet614, and a memory device chiplet616. Other systems can include many additional chiplets to provide additional functionalities as will be apparent from the following discussion. The package of the chiplet system602is illustrated with a lid or cover618, though other packaging techniques and structures for the chiplet system can be used.FIG.6Bis a block diagram labeling the components in the chiplet system for clarity. The application chiplet610is illustrated as including a chiplet system NOC620to support a chiplet network622for inter-chiplet communications. 
In example embodiments, the chiplet system NOC620can be included on the application chiplet610. In an example, the first NOC118from the example ofFIG.1can be defined in response to selected support chiplets (e.g., host interface chiplet612, memory controller chiplet614, and memory device chiplet616), thus enabling a designer to select an appropriate number of chiplet network connections or switches for the chiplet system NOC620. In an example, the chiplet system NOC620can be located on a separate chiplet, or within the interposer608. In examples as discussed herein, the chiplet system NOC620implements a chiplet protocol interface (CPI) network. In an example, the chiplet system602can include or comprise a portion of the first memory-compute node104or the first memory-compute device112. That is, the various blocks or components of the first memory-compute device112can include chiplets that can be mounted on the peripheral board604, the package substrate606, and the interposer608. The interface components of the first memory-compute device112can comprise, generally, the host interface chiplet612; the memory and memory control-related components of the first memory-compute device112can comprise, generally, the memory controller chiplet614; the various accelerator and processor components of the first memory-compute device112can comprise, generally, the application chiplet610or instances thereof; and so on. The CPI interface, such as can be used for communication between or among chiplets in a system, is a packet-based network that supports virtual channels to enable a flexible and high-speed interaction between chiplets. CPI enables bridging from intra-chiplet networks to the chiplet network622. For example, the Advanced eXtensible Interface (AXI) is a widely used specification to design intra-chip communications. AXI specifications, however, cover a great variety of physical design options, such as the number of physical channels, signal timing, power, etc. Within a single chip, these options are generally selected to meet design goals, such as power consumption, speed, etc. However, to achieve the flexibility of the chiplet system, an adapter, such as CPI, is used to interface between the various AXI design options that can be implemented in the various chiplets. By enabling a physical channel to virtual channel mapping and encapsulating time-based signaling with a packetized protocol, CPI bridges intra-chiplet networks across the chiplet network622. CPI can use a variety of different physical layers to transmit packets. The physical layer can include simple conductive connections, or can include drivers to increase the voltage, or otherwise facilitate transmitting the signals over longer distances. An example of one such physical layer can include the Advanced Interface Bus (AIB), which, in various examples, can be implemented in the interposer608. AIB transmits and receives data using source synchronous data transfers with a forwarded clock. Packets are transferred across the AIB at single data rate (SDR) or dual data rate (DDR) with respect to the transmitted clock. Various channel widths are supported by AIB. The channel can be configured to have a symmetrical number of transmit (TX) and receive (RX) input/outputs (I/Os), or have a non-symmetrical number of transmitters and receivers (e.g., either all transmitters or all receivers). The channel can act as an AIB principal or subordinate depending on which chiplet provides the principal clock. 
AIB I/O cells support three clocking modes: asynchronous (i.e. non-clocked), SDR, and DDR. In various examples, the non-clocked mode is used for clocks and some control signals. The SDR mode can use dedicated SDR only I/O cells, or dual use SDR/DDR I/O cells. In an example, CPI packet protocols (e.g., point-to-point or routable) can use symmetrical receive and transmit I/O cells within an AIB channel. The CPI streaming protocol allows more flexible use of the AIB I/O cells. In an example, an AIB channel for streaming mode can configure the I/O cells as all TX, all RX, or half TX and half RX. CPI packet protocols can use an AIB channel in either SDR or DDR operation modes. In an example, the AIB channel is configured in increments of 80 I/O cells (i.e. 40 TX and 40 RX) for SDR mode and 40 I/O cells for DDR mode. The CPI streaming protocol can use an AIB channel in either SDR or DDR operation modes. Here, in an example, the AIB channel is in increments of 40 I/O cells for both SDR and DDR modes. In an example, each AIB channel is assigned a unique interface identifier. The identifier is used during CPI reset and initialization to determine paired AIB channels across adjacent chiplets. In an example, the interface identifier is a 20-bit value comprising a seven-bit chiplet identifier, a seven-bit column identifier, and a six-bit link identifier. The AIB physical layer transmits the interface identifier using an AIB out-of-band shift register. The 20-bit interface identifier is transferred in both directions across an AIB interface using bits32-51of the shift registers. AIB defines a stacked set of AIB channels as an AIB channel column. An AIB channel column has some number of AIB channels, plus an auxiliary channel. The auxiliary channel contains signals used for AIB initialization. All AIB channels (other than the auxiliary channel) within a column are of the same configuration (e.g., all TX, all RX, or half TX and half RX, as well as having the same number of data I/O signals). In an example, AIB channels are numbered in continuous increasing order starting with the AIB channel adjacent to the AUX channel. The AIB channel adjacent to the AUX is defined to be AIB channel zero. Generally, CPI interfaces on individual chiplets can include serialization-deserialization (SERDES) hardware. SERDES interconnects work well for scenarios in which high-speed signaling with low signal count are desirable. SERDES, however, can result in additional power consumption and longer latencies for multiplexing and demultiplexing, error detection or correction (e.g., using block level cyclic redundancy checking (CRC)), link-level retry, or forward error correction. However, when low latency or energy consumption is a primary concern for ultra-short reach, chiplet-to-chiplet interconnects, a parallel interface with clock rates that allow data transfer with minimal latency can be utilized. CPI includes elements to minimize both latency and energy consumption in these ultra-short reach chiplet interconnects. For flow control, CPI employs a credit-based technique. A recipient, such as the application chiplet610, provides a sender, such as the memory controller chiplet614, with credits that represent available buffers. In an example, a CPI recipient includes a buffer for each virtual channel for a given time-unit of transmission. Thus, if the CPI recipient supports five messages in time and a single virtual channel, the recipient has five buffers arranged in five rows (e.g., one row for each unit time). 
If four virtual channels are supported, then the recipient has twenty buffers arranged in five rows. Each buffer holds the payload of one CPI packet. When the sender transmits to the recipient, the sender decrements the available credits based on the transmission. Once all credits for the recipient are consumed, the sender stops sending packets to the recipient. This ensures that the recipient always has an available buffer to store the transmission. As the recipient processes received packets and frees buffers, the recipient communicates the available buffer space back to the sender. This credit return can then be used by the sender to allow transmitting of additional information. The example ofFIG.6Aincludes a chiplet mesh network624that uses a direct, chiplet-to-chiplet technique without a need for the chiplet system NOC620. The chiplet mesh network624can be implemented in CPI, or another chiplet-to-chiplet protocol. The chiplet mesh network624generally enables a pipeline of chiplets where one chiplet serves as the interface to the pipeline while other chiplets in the pipeline interface only with themselves. Additionally, dedicated device interfaces, such as one or more industry standard memory interfaces (such as, for example, synchronous memory interfaces, such as DDR5, DDR6), can be used to connect a device to a chiplet. Connection of a chiplet system or individual chiplets to external devices (such as a larger system) can be through a desired interface (for example, a PCIe interface). Such an external interface can be implemented, in an example, through the host interface chiplet612, which, in the depicted example, provides a PCIe interface external to the chiplet system. Such dedicated chiplet interfaces626are generally employed when a convention or standard in the industry has converged on such an interface. The illustrated example of a Double Data Rate (DDR) interface connecting the memory controller chiplet614to a dynamic random access memory (DRAM) memory device chiplet616is just such an industry convention. Of the variety of possible support chiplets, the memory controller chiplet614is likely present in the chiplet system due to the near omnipresent use of storage for computer processing as well as the sophisticated state of the art for memory devices. Thus, using memory device chiplets616and memory controller chiplets614produced by others gives chiplet system designers access to robust products by sophisticated producers. Generally, the memory controller chiplet614provides a memory device-specific interface to read, write, or erase data. Often, the memory controller chiplet614can provide additional features, such as error detection, error correction, maintenance operations, or atomic operator execution. For some types of memory, maintenance operations tend to be specific to the memory device chiplet616, such as garbage collection in NAND flash or storage class memories, or temperature adjustments (e.g., cross temperature management) in NAND flash memories. In an example, the maintenance operations can include logical-to-physical (L2P) mapping or management to provide a level of indirection between the physical and logical representation of data. In other types of memory, for example DRAM, some memory operations, such as refresh, can be controlled by a host processor or a memory controller at some times, and at other times controlled by the DRAM memory device, or by logic associated with one or more DRAM devices, such as an interface chip (in an example, a buffer). 
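Returning to the credit-based flow control described above, the following C++ sketch illustrates the sender-side credit accounting in simplified form. The class name and structure are illustrative assumptions for exposition and do not represent an actual CPI implementation: the sender consumes one credit per transmitted packet, stops transmitting when its credits are exhausted, and resumes when the recipient returns credits for freed buffers.

#include <cstdint>
#include <cstdio>

// Illustrative sender-side credit accounting for a single virtual channel.
// This sketches the general technique only; it is not an actual CPI implementation.
class CreditedSender {
public:
    explicit CreditedSender(uint32_t initial_credits) : credits_(initial_credits) {}

    // Attempt to send one packet; consumes a credit that represents one
    // reserved buffer at the recipient. Returns false when no buffer is free.
    bool try_send() {
        if (credits_ == 0) return false;  // all advertised buffers are in use
        --credits_;
        return true;
    }

    // Called when the recipient reports that it has freed buffer space.
    void return_credits(uint32_t freed) { credits_ += freed; }

private:
    uint32_t credits_;  // free recipient buffers the sender may still consume
};

int main() {
    CreditedSender sender(5);  // recipient advertised five buffers on this virtual channel
    for (int i = 0; i < 7; ++i) {
        printf("packet %d: %s\n", i, sender.try_send() ? "sent" : "held (no credits)");
    }
    sender.return_credits(2);  // recipient freed two buffers
    printf("after credit return: %s\n", sender.try_send() ? "sent" : "held");
    return 0;
}

In the example above, a recipient supporting five messages in time across four virtual channels would advertise five credits per virtual channel, corresponding to its twenty buffers.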
Atomic operators are data manipulations that, for example, can be performed by the memory controller chiplet614. In other chiplet systems, the atomic operators can be performed by other chiplets. For example, an atomic operator of "increment" can be specified in a command by the application chiplet610, the command including a memory address and possibly an increment value. Upon receiving the command, the memory controller chiplet614retrieves a number from the specified memory address, increments the number by the amount specified in the command, and stores the result. Upon a successful completion, the memory controller chiplet614provides an indication of the command success to the application chiplet610. Atomic operators avoid transmitting the data across the chiplet mesh network624, resulting in lower latency execution of such commands. Atomic operators can be classified as built-in atomics or programmable (e.g., custom) atomics. Built-in atomics are a finite set of operations that are immutably implemented in hardware. Programmable atomics are small programs that can execute on a programmable atomic unit (PAU) (e.g., a custom atomic unit (CAU)) of the memory controller chiplet614. The memory device chiplet616can be, or include any combination of, volatile memory devices or non-volatile memories. Examples of volatile memory devices include, but are not limited to, random access memory (RAM), such as DRAM, synchronous DRAM (SDRAM), and graphics double data rate type 6 SDRAM (GDDR6 SDRAM), among others. Examples of non-volatile memory devices include, but are not limited to, negative-and (NAND)-type flash memory, storage class memory (e.g., phase-change memory or memristor based technologies), and ferroelectric RAM (FeRAM), among others. The illustrated example includes the memory device chiplet616as a chiplet; however, the device can reside elsewhere, such as in a different package on the peripheral board604. For many applications, multiple memory device chiplets can be provided. In an example, these memory device chiplets can each implement one or multiple storage technologies, and may include integrated compute hosts. In an example, a memory chiplet can include multiple stacked memory die of different technologies, for example, one or more static random access memory (SRAM) devices stacked or otherwise in communication with one or more dynamic random access memory (DRAM) devices. In an example, the memory controller chiplet614can serve to coordinate operations between multiple memory chiplets in the chiplet system602, for example, to use one or more memory chiplets in one or more levels of cache storage, and to use one or more additional memory chiplets as main memory. The chiplet system602can include multiple memory controller chiplet614instances, as can be used to provide memory control functionality for separate hosts, processors, sensors, networks, etc. A chiplet architecture, such as in the illustrated system, offers advantages in allowing adaptation to different memory storage technologies and different memory interfaces through updated chiplet configurations, such as without requiring redesign of the remainder of the system structure. FIG.7illustrates generally an example of a chiplet-based implementation for a memory-compute device, according to an embodiment. 
The example includes an implementation with four compute-near-memory, or CNM, chiplets, and each of the CNM chiplets can include or comprise portions of the first memory-compute device112or the first memory-compute node104from the example ofFIG.1. The various portions can themselves include or comprise respective chiplets. The chiplet-based implementation can include or use CPI-based intra-system communications, as similarly discussed above in the example chiplet system602fromFIG.6AandFIG.6B. The example ofFIG.7includes a first CNM package700comprising multiple chiplets. The first CNM package700includes a first chiplet702, a second chiplet704, a third chiplet706, and a fourth chiplet708coupled to a CNM NOC hub710. Each of the first through fourth chiplets can comprise instances of the same, or substantially the same, components or modules. For example, the chiplets can each include respective instances of an HTP accelerator, an HTF accelerator, and memory controllers for accessing internal or external memories. In the example ofFIG.7, the first chiplet702includes a first NOC hub edge714coupled to the CNM NOC hub710. The other chiplets in the first CNM package700similarly include NOC hub edges or endpoints. The switches in the NOC hub edges facilitate intra-chiplet, or intra-chiplet-system, communications via the CNM NOC hub710. The first chiplet702can further include one or multiple memory controllers716. The memory controllers716can correspond to respective different NOC endpoint switches interfaced with the first NOC hub edge714. In an example, the memory controller716comprises the memory controller chiplet614or comprises the memory controller130, or comprises the memory subsystem200, or other memory-compute implementation. The memory controllers716can be coupled to respective different memory devices, for example including a first external memory module712aor a second external memory module712b. The external memory modules can include, e.g., GDDR6 memories that can be selectively accessed by the respective different chiplets in the system. The first chiplet702can further include a first HTP chiplet718and second HTP chiplet720, such as coupled to the first NOC hub edge714via respective different NOC endpoint switches. The HTP chiplets can correspond to HTP accelerators, such as the HTP140from the example ofFIG.1, or the HTP accelerator400from the example ofFIG.4. The HTP chiplets can communicate with the HTF chiplet722. The HTF chiplet722can correspond to an HTF accelerator, such as the HTF142from the example ofFIG.1, or the HTF500from the example ofFIG.5. The CNM NOC hub710can be coupled to NOC hub instances in other chiplets or other CNM packages by way of various interfaces and switches. For example, the CNM NOC hub710can be coupled to a CPI interface by way of multiple different NOC endpoints on the first CNM package700. Each of the multiple different NOC endpoints can be coupled, for example, to a different node outside of the first CNM package700. In an example, the CNM NOC hub710can be coupled to other peripherals, nodes, or devices using CTCPI or other, non-CPI protocols. For example, the first CNM package700can include a PCIe scale fabric interface (PCIE/SFI) or a CXL interface (CXL) configured to interface the first CNM package700with other devices. In an example, devices to which the first CNM package700is coupled using the various CPI, PCIe, CXL, or other fabric, can make up a common global address space. 
In the example ofFIG.7, the first CNM package700includes a host interface724(HIF) and a host processor (R5). The host interface724can correspond to, for example, the HIF120from the example ofFIG.1. The host processor, or R5, can correspond to the internal host processor122from the example ofFIG.1. The host interface724can include a PCI interface for coupling the first CNM package700to other external devices or systems. In an example, work can be initiated on the first CNM package700, or a tile cluster within the first CNM package700, by the host interface724. For example, the host interface724can be configured to command individual HTF tile clusters, such as among the various chiplets in the first CNM package700, into and out of power/clock gate modes. FIG.8illustrates an example tiling of memory-compute devices, according to an embodiment. InFIG.8, a tiled chiplet example800includes four instances of different compute-near-memory clusters of chiplets, where the clusters are coupled together. Each instance of a compute-near-memory chiplet can itself include one or more constituent chiplets (e.g., host processor chiplets, memory device chiplets, interface chiplets, and so on). The tiled chiplet example800includes, as one or multiple of its compute-near-memory (CNM) clusters, instances of the first CNM package700from the example ofFIG.7. For example, the tiled chiplet example800can include a first CNM cluster802that includes a first chiplet810(e.g., corresponding to the first chiplet702), a second chiplet812(e.g., corresponding to the second chiplet704), a third chiplet814(e.g., corresponding to the third chiplet706), and a fourth chiplet816(e.g., corresponding to the fourth chiplet708). The chiplets in the first CNM cluster802can be coupled to a common NOC hub, which in turn can be coupled to a NOC hub in an adjacent cluster or clusters (e.g., in a second CNM cluster804or a fourth CNM cluster808). In the example ofFIG.8, the tiled chiplet example800includes the first CNM cluster802, the second CNM cluster804, a third CNM cluster806, and the fourth CNM cluster808. The various different CNM chiplets can be configured in a common address space such that the chiplets can allocate and share resources across the different tiles. In an example, the chiplets in the cluster can communicate with each other. For example, the first CNM cluster802can be communicatively coupled to the second CNM cluster804via an inter-chiplet CPI interface818, and the first CNM cluster802can be communicatively coupled to the fourth CNM cluster808via another or the same CPI interface. The second CNM cluster804can be communicatively coupled to the third CNM cluster806via the same or other CPI interface, and so on. In an example, one of the compute-near-memory chiplets in the tiled chiplet example800can include a host interface (e.g., corresponding to the host interface724from the example ofFIG.7) that is responsible for workload balancing across the tiled chiplet example800. The host interface can facilitate access to host-based command request queues and response queues, such as from outside of the tiled chiplet example800. The host interface can dispatch new threads of execution using hybrid threading processors and the hybrid threading fabric in one or more of the compute-near-memory chiplets in the tiled chiplet example800. FIG.9illustrates a logical diagram of a processing element (i.e., tile) as it processes instructions according to some examples of the present disclosure. 
As noted, tiles may have a number of processing stages for execution of a single instruction. In the example ofFIG.9, there are four stages (stages 0-3) and three spokes (the initiation interval is three). An instruction begins with stage 0 and progresses through stage 1, stage 2, and finally stage 3. At the end of stage 3 the instruction is complete. In some examples, the instruction progresses to the next stage after a defined number of clock cycles (e.g., one clock cycle). Once an instruction has finished a particular stage, another instruction may be processed at that stage. This concurrent operation can be visualized as a form of time-slicing the PE. Each timeslice is referred to as a “spoke” and the number of spokes may be referred to herein as an initiation interval. As shown inFIG.9, at a first time900the tile has an instruction912that enters the first stage on spoke 0. At a second time (e.g., second clock cycle)910, the instruction912progresses to stage 1 from stage 0. A new instruction914may start at stage zero. Moving along to a third time920, the instruction912now moves to stage 2, instruction914moves to stage 1, and a third instruction916may begin execution at stage 0. At a fourth time930, instruction912moves to stage 3, instruction914moves to stage 2, and instruction916moves to stage 1. Because we only have three spokes, a new instruction cannot start in stage 0 as spoke 0 is still processing instruction912. If there were four spokes, a new instruction could start at stage 0. FIG.10illustrates a logical diagram of a processing element (i.e., tile) as it processes instructions according to some examples of the present disclosure.FIG.10continues fromFIG.9with a fifth time1000. Instruction912fromFIG.9has completed on spoke 0 so a new instruction1040enters stage zero from spoke zero. Instructions914and916advance to the next stages. At the sixth time1010, instruction914completes and instruction916advances to stage 3. Instruction1040advances to stage 1. Since spoke 1 has completed instruction914and since stage zero is free, spoke 1 begins execution of instruction1044at stage zero. This continues until all instructions are completed. As previously noted, the inventors have realized that increased efficiency for certain workloads may be attained by altering the initiation interval (i.e., spoke count) of the HTF by making a first initiation interval of a first PE a multiple of a second initiation interval of a second PE. For example, inner loop instructions may be placed on a PE with a lower initiation interval than instructions of an outer loop. This means that the inner loop may execute instructions more frequently in comparison to the outer loop instructions. For example, a configuration processor, may (prior to execution of a piece of code on the CGRA) identify code that benefits from unequal scheduling. For example, a nested loop in a set of instructions to be executed on the CGRA. The code can be identified based upon one or more patterns of one or more instructions. The patterns may be pre-specified, e.g., by an administrator. In response to identifying the code that benefits from unequal scheduling, the configuration processor may configure a first initiation interval of a first processing element of the CGRA to a first value and a second initiation interval of a second processing element of the CGRA to a second value, the second value a multiple of the first value. As noted, the initiation interval specifies a spoke count on a processing element. 
As previously noted, the inventors have realized that increased efficiency for certain workloads may be attained by altering the initiation interval (i.e., spoke count) of the HTF by making a first initiation interval of a first PE a multiple of a second initiation interval of a second PE. For example, inner loop instructions may be placed on a PE with a lower initiation interval than instructions of an outer loop. This means that the inner loop may execute instructions more frequently than the outer loop instructions. For example, a configuration processor may, prior to execution of a piece of code on the CGRA, identify code that benefits from unequal scheduling, such as a nested loop in a set of instructions to be executed on the CGRA. The code can be identified based upon one or more patterns of one or more instructions. The patterns may be pre-specified, e.g., by an administrator. In response to identifying the code that benefits from unequal scheduling, the configuration processor may configure a first initiation interval of a first processing element of the CGRA to a first value and a second initiation interval of a second processing element of the CGRA to a second value, the second value a multiple of the first value. As noted, the initiation interval specifies a spoke count on a processing element. Thus, the initiation interval is the number of cycles that must elapse between issuing two operations of a given type. Specifically, for the HTF it is the minimum number of clocks that must elapse between processing of new data items. The configuration processor may then assign instructions of a first portion of the code (e.g., instructions of an inner loop of a nested loop) to the first processing element and instructions of a second portion of the code (e.g., instructions of the outer loop) to the second processing element. The instructions may then be executed by the CGRA. In some examples, the configuration processor may be a processor executing a compiler (which may be a CNM system on which the HTF resides, or may be a separate device from the CNM system), which may embed information on the initiation interval and other information to cause execution of the loops in the manner disclosed. In other examples, the configuration processor may be, or be part of, a dispatch interface that may control execution of tasks on one or more tiles of an HTF, or the like. The above methods increase the utilization of the CGRA by more efficiently allocating instructions to PEs. To illustrate, consider the following nested loop code with five total instructions. Each instruction is labelled with a letter for ease of illustration.

for i
    k = i + 5;       (a)
    m = k * 3;       (b)
    for j
        s = j + m;   (c)
        t = s * 4;   (d)
        u += t - 2;  (e)
return u;

As can be appreciated, there are two instructions in the outer loop and three in the inner loop. Each instruction is mapped to a particular PE and an instruction index. For simplicity, instruction pipelining delays are ignored and a next instruction index is programmable as a function of the running clock. In a system with initiation intervals that are the same for each PE, the product of the initiation interval and the number of PEs defines the maximum number of instructions available. For the code above, an initiation interval of three or more for each PE would be required if the PEs had equal initiation intervals. With the disclosed invention, the outer loop can be executed on a PE with an initiation interval of four and the inner loop can be executed on a PE with an initiation interval of two, resulting in more efficient data processing. A possible mapping of instructions of the above code to PEs is:

Running Clock (RC) | PE1 Instruction Index, Instruction | PE1 Next Instruction Index | PE2 Instruction Index, Instruction | PE2 Next Instruction Index
0 | 0, c | (RC + 1) % 4 = 1 * | 0, a | (RC + 2) % 4 = 2
1 | 1, e | End                | 1, d | (RC + 2) % 2 = 1 *
2 | 0, c | (RC + 1) % 4 = 3 * | 2, b | End
3 | 1, e | End                | 3, d | (RC + 2) % 2 = 1 *
4 | 0, c | (RC + 1) % 4 = 1 * | 0, a | (RC + 2) % 4 = 2
5 | 1, e | End                | 1, d | (RC + 2) % 2 = 1 *
6 | 0, c | (RC + 1) % 4 = 3 * | 2, b | End
7 | 1, e | End                | 3, d | (RC + 2) % 2 = 1 *

Each row of the above table represents an instruction slot of both the first and second PEs. More formally, an instruction slot may be defined as an instruction that begins executing at a particular running clock, or at a particular position in a sequence of instructions. The next instruction index columns show the calculation of the next instruction spoke index, that is, the spoke index to which the data produced by the currently executing instruction flows. As a simplification, each PE has a fixed programmable delay from one instruction to the next. For this example, PE1 has a delay of 1 and PE2 has a delay of 2. To calculate the next instruction spoke index, take the running clock, add the PE programmable delay, and take the result modulo the II (initiation interval) of the starting instruction. In this case, instructions a and b have an II of 4, and instructions c, d, and e have an II of 2.
The outer loop begins on PE2 instruction index zero and the inner loop starts on PE1 instruction index zero. The instruction index indicates the spoke number that is beginning execution on a PE at the current RC. The instruction index of each PE is defined as the modulo of the running clock (RC) with the initiation interval, such that instruction index = RC % initiation interval. PE1, having an initiation interval of two, alternates between instruction indices of zero and one. PE2, having an initiation interval of four, cycles through instruction indices of 0, 1, 2, and 3. The next instruction index in the above table specifies which instruction the data from the currently executing instruction flows to next. An asterisk denotes that the data flows from the current PE to the other PE. The table above and FIG. 11 illustrate the execution of eight clocks and the instruction flow of the above example according to some examples of the present disclosure. As execution of the tile will not begin until the necessary data for the instructions is ready and loaded to the tile, the first time PE1 executes, instructions c and e will not execute until m is calculated by instructions a and b. At RC=0, PE1 begins executing instruction c (when the data for the instruction is ready, as noted), which is instruction 1110 in FIG. 11; and PE2 begins executing instruction a, which is instruction 1112 in FIG. 11. The data from instruction a, when ready, flows to instruction index 2 on PE2, as indicated by the arrow in FIG. 11 from instruction 1112 to instruction 1118. The data from instruction c on PE1 flows to instruction d on PE2, either instruction 1116 or 1120, depending on where PE2 is currently executing. Instruction d is duplicated because PE1 is processing new instructions at twice the rate of PE2, and thus PE2 may be at either instruction 1116 or 1120 when PE1 has completed instruction 1110. As shown in the table, at RC=0 and 4, the data flows from instruction 1110 to instruction 1116; and at RC=2 and 6, the data flows from instruction 1110 to instruction 1120. This aligns with the fact that the initiation interval of PE1 is two, but there are three instructions in the inner loop. As a result, instruction d is scheduled twice for PE2, resulting in all three instructions of the inner loop being executed with a higher frequency (two times the frequency) than the two instructions of the outer loop. At RC=1, PE1 begins executing instruction e and PE2 begins executing instruction d. In FIG. 11, these are instructions 1114 and 1116. Once instruction e is executed, one run through of the inner loop is finished, and so the data does not flow to another instruction. The result of instruction d for RC=1 flows to PE1, instruction e (instruction index 1). This is shown in FIG. 11 by the arrow going from instruction 1116 to instruction 1114. At RC=2, PE1 begins execution of instruction c (instruction 1110 in FIG. 11) and PE2 begins instruction b (instruction 1118 in FIG. 11). As instruction b ends the outer loop, the next instruction is listed as End in the table, although the data is used in an inner loop later. At RC=3, PE1 begins executing instruction e (instruction 1114 in FIG. 11) and PE2 begins execution of instruction d (instruction 1120 in FIG. 11). The result of instruction e ends processing of the inner loop. The result of instruction d is passed to the next iteration of instruction e of PE1.
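The schedule in the table can be reproduced with a short script. The sketch below is illustrative only and not part of the patent: the dictionaries encode the example's slot assignments (c and e on PE1; a, b, and the duplicated d on PE2), and the next spoke index is computed with the stated rule, running clock plus the PE's programmable delay, modulo the modulus shown in the table for that instruction. An asterisk is printed when the data flows to the other PE.

# Reproduces the eight-row schedule table above (illustrative only).
PE1 = {"ii": 2, "delay": 1, "slots": {0: "c", 1: "e"}}
PE2 = {"ii": 4, "delay": 2, "slots": {0: "a", 1: "d", 2: "b", 3: "d"}}

# instruction -> (PE holding the next instruction, modulus used in the table);
# None marks the end of an inner- or outer-loop pass ("End" in the table).
NEXT = {"a": (PE2, 4), "c": (PE2, 4), "d": (PE1, 2), "b": None, "e": None}

def row(pe, rc):
    index = rc % pe["ii"]          # spoke index executing at this running clock
    inst = pe["slots"][index]
    nxt = NEXT[inst]
    if nxt is None:
        return f"{index}, {inst} | End"
    target_pe, modulus = nxt
    next_index = (rc + pe["delay"]) % modulus
    cross = "*" if target_pe is not pe else ""
    return f"{index}, {inst} | (RC + {pe['delay']}) % {modulus} = {next_index} {cross}"

for rc in range(8):
    print(f"RC={rc} | PE1: {row(PE1, rc)} | PE2: {row(PE2, rc)}")

Running the script for eight clocks prints the same eight rows as the table, including the alternation of the duplicated instruction d between PE2 spoke indices 1 and 3.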
FIG. 12 illustrates a flowchart of a method 1200 of improving efficiency in a processing system according to some examples of the present disclosure. At operation 1202, the system may identify a nested loop in a set of instructions that is to be executed on a CGRA. The instructions may be source code in a higher-level language such as C, C++, Java, or the like. In other examples, the instructions may be assembly code. In still other examples, the instructions may be object code, executable code, or the like. A nested loop may be identified by string matching on loop statements and determining when a second loop block begins prior to termination of a first loop block. At operation 1204, the system may configure a first initiation interval of a first processing element to a first value, and at operation 1206 may configure a second initiation interval of a second processing element to a second value. The second value may be a multiple of the first value. In some examples, the second value and the first value may be determined based upon a number of PEs in the CGRA that will be used to execute the code, a number of instructions in the inner loop, a number of instructions in the outer loop, a number of loops in the outer loop, and/or a number of loops in the inner loop. For example, if the inner loop has a loop count of 1000 (e.g., it is expected to loop 1000 times), then the outer loop is only executed once for every 1000 iterations of the inner loop. So even if the inner loop had only a single instruction, a 1000:1 ratio between the number of spokes allocated to the inner loop and the outer loop would be ideal. In practice, due to limitations on the number of PEs available, ratios of 2:1, 3:1, or 4:1 may be commonly used. One or more of the operations of FIG. 12 may be performed by a compiler. In other examples, one or more of the operations of FIG. 12 may be performed by a dispatch processor unit that dispatches work to the processing elements. In examples in which the method of FIG. 12 is performed by a compiler, the configuration of the initiation interval may be accomplished by including instructions to a dispatch processor unit or other component that configures the processing elements in the object, executable, or other code, or in metadata for that code. In examples in which the dispatch processor unit is performing the operations of FIG. 12, the dispatch processor unit may send one or more messages to the processing elements to configure them or set one or more configuration registers or settings. At operation 1210, the system may assign instructions of an inner loop of the nested loop to the first processing element. At operation 1212, the system may assign instructions of the outer loop of the nested loop to the second processing element. In some examples, one or more instructions of the inner loop may be assigned to the second processing element if space is needed, but the one or more instructions may be duplicated such that they are run in multiple spokes. The number of duplicates may be determined based upon the quotient of the first and second values (the initiation intervals of each PE). For example, if the first value is 2 and the second value is 4, then the instruction may be duplicated once (two instances of the same instruction). If the first value is 2 and the second value is 8, then the instruction may be duplicated three times (four instances of the same instruction). Instructions may be assigned by specifying which PE is to perform the instruction by a dispatch processing unit, or by information in object or executable code. At operation 1214, the instructions are executed by the CGRA and the processing elements. For example, the dispatch processing unit may send a dispatch message with the appropriate instructions for each spoke at the appropriate time. While the above method 1200 was described as detecting and optimizing nested loops, other code structures may benefit from unequal initiation intervals on PEs of a CGRA. One of ordinary skill in the art with the benefit of the present disclosure will understand that method 1200 may be applied to those code structures as well; for example, any time a set of instructions is executed at a different rate from another set of instructions, such as an independent producer/consumer relationship where the producer executes more or less frequently than the consumer. FIG. 13 illustrates an instruction processor 1310 according to some examples of the present disclosure. Instruction processor 1310 may perform one or more operations of the method of FIG. 12. Machine 1400 of FIG. 14 may be configured to perform the functions of the instruction processor 1310. Instruction parser 1312 may take instructions 1305 and parse them for processing. The instruction parser 1312 may identify one or more nested loops or other code structures that may benefit from unequal initiation intervals. Initiation interval calculator 1314 may calculate initiation intervals for one or more PEs for the nested loops. The PEs may have different initiation intervals. For example, the initiation intervals may be multiples of each other. In examples in which more than two PEs are used, the PEs may be grouped together in PE groups. For example, a first PE group of one or more PEs may execute a first portion of instructions 1305 (e.g., an outer loop portion of a nested loop) and a second PE group of one or more PEs may execute a second portion of instructions 1305 (e.g., the inner loop of the nested loop). In some examples, the initiation interval of all the PEs of a same PE group may be the same. Thus, the initiation interval of all PEs within the first PE group may be a first value that is a multiple of the initiation intervals of the PEs of the second group, which may be a second value. Instruction assignment component 1316 may assign portions of instructions 1305 to one or more PEs within the CGRA. As noted, instructions of an inner loop may be assigned to a PE with a lower initiation interval than instructions of an outer loop. As previously noted, instructions of the inner loop may be assigned to the PE with a higher initiation interval if space within a CGRA is needed, but such an instruction may need to be executed in multiple spokes of that PE. Communication component 1318 may cause the execution of the instructions 1305. In some examples, communication component 1318 may dispatch the instructions to the determined PEs. In other examples, the communication component 1318 may encode the selection of the PEs for each instruction, as well as the initiation interval, within object or executable code that is then communicated and dispatched to the CGRA, causing the PEs to execute the code.
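A rough sketch of such a configuration pass is shown below. It is an illustrative assumption, not the patent's implementation: the function name, the data layout, and the policy of spilling overflow inner-loop instructions into the first free slots of the outer PE are invented for the example. In the patent's table the duplicated instruction d instead lands at slots derived from the producing instruction's slot and the ratio of the initiation intervals, which is the placement a real pass would use.

# Illustrative configuration-pass sketch (assumed data structures and policy).
def configure(outer_insts, inner_insts, inner_ii=2, ratio=2):
    outer_ii = inner_ii * ratio   # the second value is a multiple of the first
    pe_inner = {"ii": inner_ii, "slots": {}}
    pe_outer = {"ii": outer_ii, "slots": {}}

    # Outer-loop instructions go on the PE with the larger initiation interval.
    for slot, inst in enumerate(outer_insts):
        pe_outer["slots"][slot] = inst

    # Inner-loop instructions go on the PE with the smaller initiation interval;
    # any overflow spills to the outer PE and is duplicated `ratio` times
    # (e.g., II 2 versus II 4 gives two copies of the same instruction).
    for i, inst in enumerate(inner_insts):
        if i < inner_ii:
            pe_inner["slots"][i] = inst
        else:
            free = [s for s in range(outer_ii) if s not in pe_outer["slots"]]
            assert len(free) >= ratio, "not enough spokes on the outer PE"
            for copy in range(ratio):
                pe_outer["slots"][free[copy]] = inst
    return pe_inner, pe_outer

# Mirrors the five-instruction example: a, b in the outer loop; c, d, e inner.
inner_pe, outer_pe = configure(["a", "b"], ["c", "e", "d"])
print("inner PE (II 2):", inner_pe["slots"])   # {0: 'c', 1: 'e'}
print("outer PE (II 4):", outer_pe["slots"])   # {0: 'a', 1: 'b', 2: 'd', 3: 'd'}

Running the sketch on the five-instruction example places c and e on the inner PE and a, b, and two copies of d on the outer PE, mirroring the duplication shown in the table, though at different slot positions than the table's placement rule produces.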
FIG. 14 illustrates a block diagram of an example machine 1400 with which, in which, or by which any one or more of the techniques (e.g., methodologies) discussed herein can be implemented. Examples, as described herein, can include, or can operate by, logic or a number of components, or mechanisms in the machine 1400.
Circuitry (e.g., processing circuitry) is a collection of circuits implemented in tangible entities of the machine1400that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership can be flexible over time. Circuitries include members that can, alone or in combination, perform specified operations when operating. In an example, hardware of the circuitry can be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry can include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a machine readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, in an example, the machine-readable medium elements are part of the circuitry or are communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components can be used in more than one member of more than one circuitry. For example, under operation, execution units can be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry at a different time. Additional examples of these components with respect to the machine1400. In alternative embodiments, the machine1400can operate as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine1400can operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine1400can act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment. The machine1400can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations. The machine1400(e.g., computer system) can include a hardware processor1402(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory1404, a static memory1406(e.g., memory or storage for firmware, microcode, a basic-input-output (BIOS), unified extensible firmware interface (UEFI), etc.), and mass storage device1408(e.g., hard drives, tape drives, flash storage, or other block devices) some or all of which can communicate with each other via an interlink1430(e.g., bus). 
The machine1400can further include a display device1410, an alphanumeric input device1412(e.g., a keyboard), and a user interface (UI) Navigation device1414(e.g., a mouse). In an example, the display device1410, the input device1412, and the UI navigation device1414can be a touch screen display. The machine1400can additionally include a mass storage device1408(e.g., a drive unit), a signal generation device1418(e.g., a speaker), a network interface device1420, and one or more sensor(s)1416, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine1400can include an output controller1428, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.). Registers of the hardware processor1402, the main memory1404, the static memory1406, or the mass storage device1408can be, or include, a machine-readable media1422on which is stored one or more sets of data structures or instructions1424(e.g., software) embodying or used by any one or more of the techniques or functions described herein. The instructions1424can also reside, completely or at least partially, within any of registers of the hardware processor1402, the main memory1404, the static memory1406, or the mass storage device1408during execution thereof by the machine1400. In an example, one or any combination of the hardware processor1402, the main memory1404, the static memory1406, or the mass storage device1408can constitute the machine-readable media1422. While the machine-readable media1422is illustrated as a single medium, the term “machine-readable medium” can include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions1424. The term “machine readable medium” can include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine1400and that cause the machine1400to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples can include solid-state memories, optical media, magnetic media, and signals (e.g., radio frequency signals, other photon-based signals, sound signals, etc.). In an example, a non-transitory machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass, and thus are compositions of matter. Accordingly, non-transitory machine-readable media are machine readable media that do not include transitory propagating signals. Specific examples of non-transitory machine readable media can include: non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. In an example, information stored or otherwise provided on the machine-readable media1422can be representative of the instructions1424, such as instructions1424themselves or a format from which the instructions1424can be derived. 
This format from which the instructions1424can be derived can include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions1424in the machine-readable media1422can be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions1424from the information (e.g., processing by the processing circuitry) can include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions1424. In an example, the derivation of the instructions1424can include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions1424from some intermediate or preprocessed format provided by the machine-readable media1422. The information, when provided in multiple parts, can be combined, unpacked, and modified to create the instructions1424. For example, the information can be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages can be encrypted when in transit over a network and decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable etc.) at a local machine, and executed by the local machine. The instructions1424can be further transmitted or received over a communications network1426using a transmission medium via the network interface device1420utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®, IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device1420can include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the network1426. In an example, the network interface device1420can include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine1400, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. A transmission medium is a machine readable medium. To better illustrate the methods and apparatuses described herein, a non-limiting set of Example embodiments are set forth below as numerically identified Examples. 
Example 1 is a method comprising: using a processor: identifying a nested loop in a set of instructions; configuring a first initiation interval of a first processing element of a set of interconnected processing elements to a first value and a second initiation interval of a second processing element of the set of interconnected processing elements to a second value, the second value a multiple of the first value, the first and second initiation intervals specifying a number of consecutive instructions allowed within a processing pipeline of each respective processing element; assigning instructions of an inner loop of the nested loop to the first processing element and instructions of an outer loop of the nested loop to the second processing element; and causing execution of the set of instructions by the first and second processing elements. In Example 2, the subject matter of Example 1 includes, wherein at least one same instruction of the inner loop is assigned to at least two instruction slots of the first processing element. In Example 3, the subject matter of Example 2 includes, wherein the at least two instruction slots are selected based upon an instruction slot of a preceding instruction in the nested loop and the multiple of the second value over the first value. In Example 4, the subject matter of Examples 1-3 includes, determining the first initiation interval and the second initiation interval based upon a number of instructions in the inner loop and outer loop and a number of processing elements. In Example 5, the subject matter of Examples 1-4 includes, wherein the set of interconnected processing elements is a coarse grained reconfigurable array (CGRA) of a compute-near-memory system. In Example 6, the subject matter of Examples 1-5 includes, wherein causing execution of the instructions by the first and second processing elements comprises encoding an indication of the initiation interval and the assignment of instructions into machine code representing the set of instructions or into metadata included along with the machine code. In Example 7, the subject matter of Examples 1-6 includes, wherein the processor implements a dispatch interface and wherein causing execution of the instructions by the first and second processing elements comprises configuring the first and second processing elements, loading the set of instructions according to the instruction assignments, and initiating execution of the instructions. Example 8 is a device comprising: a processor; a memory, the memory storing instructions, which when executed by the processor, causes the device to perform operations comprising: identifying a nested loop in a set of instructions; configuring a first initiation interval of a first processing element of a set of interconnected processing elements to a first value and a second initiation interval of a second processing element of the set of interconnected processing elements to a second value, the second value a multiple of the first value, the first and second initiation intervals specifying a number of consecutive instructions allowed within a processing pipeline of each respective processing element; assigning instructions of an inner loop of the nested loop to the first processing element and instructions of an outer loop of the nested loop to the second processing element; and causing execution of the set of instructions by the first and second processing elements. 
In Example 9, the subject matter of Example 8 includes, wherein at least one same instruction of the inner loop is assigned to at least two instruction slots of the first processing element. In Example 10, the subject matter of Example 9 includes, wherein the at least two instruction slots are selected based upon an instruction slot of a preceding instruction in the nested loop and the multiple of the second value over the first value. In Example 11, the subject matter of Examples 8-10 includes, wherein the operations further comprise: determining the first initiation interval and the second initiation interval based upon a number of instructions in the inner loop and outer loop and a number of processing elements. In Example 12, the subject matter of Examples 8-11 includes, wherein the set of interconnected processing elements is a coarse grained reconfigurable array (CGRA) of a compute-near-memory system. In Example 13, the subject matter of Examples 8-12 includes, wherein the operations of causing execution of the instructions by the first and second processing elements comprises encoding an indication of the initiation interval and the assignment of instructions into machine code representing the set of instructions or into metadata included along with the machine code. In Example 14, the subject matter of Examples 8-13 includes, wherein the processor implements a dispatch interface and wherein the operations of causing execution of the instructions by the first and second processing elements comprises configuring the first and second processing elements, loading the set of instructions according to the instruction assignments, and initiating execution of the instructions. Example 15 is a non-transitory machine-readable medium, storing instructions, which when executed cause a processor to perform operations comprising: identifying a nested loop in a set of instructions; configuring a first initiation interval of a first processing element of a set of interconnected processing elements to a first value and a second initiation interval of a second processing element of the set of interconnected processing elements to a second value, the second value a multiple of the first value, the first and second initiation intervals specifying a number of consecutive instructions allowed within a processing pipeline of each respective processing element; assigning instructions of an inner loop of the nested loop to the first processing element and instructions of an outer loop of the nested loop to the second processing element; and causing execution of the set of instructions by the first and second processing elements. In Example 16, the subject matter of Example 15 includes, wherein at least one same instruction of the inner loop is assigned to at least two instruction slots of the first processing element. In Example 17, the subject matter of Example 16 includes, wherein the at least two instruction slots are selected based upon an instruction slot of a preceding instruction in the nested loop and the multiple of the second value over the first value. In Example 18, the subject matter of Examples 15-17 includes, determining the first initiation interval and the second initiation interval based upon a number of instructions in the inner loop and outer loop and a number of processing elements. In Example 19, the subject matter of Examples 15-18 includes, wherein the set of interconnected processing elements is a coarse grained reconfigurable array (CGRA) of a compute-near-memory system. 
In Example 20, the subject matter of Examples 15-19 includes, wherein causing execution of the instructions by the first and second processing elements comprises encoding an indication of the initiation interval and the assignment of instructions into machine code representing the set of instructions or into metadata included along with the machine code. In Example 21, the subject matter of Examples 15-20 includes, wherein the processor implements a dispatch interface and wherein causing execution of the instructions by the first and second processing elements comprises configuring the first and second processing elements, loading the set of instructions according to the instruction assignments, and initiating execution of the instructions. Example 22 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement of any of Examples 1-21. Example 23 is an apparatus comprising means to implement of any of Examples 1-21. Example 24 is a system to implement of any of Examples 1-21. Example 25 is a method to implement of any of Examples 1-21. The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples”. Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” can include “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein”. Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) can be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features can be grouped together to streamline the disclosure. 
This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter can lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
11861367
NOTATION AND NOMENCLATURE Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple” or “couples” is intended to mean either an indirect or direct electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections. The recitation “based on” is intended to mean “based at least in part on.” Therefore, if X is based on Y, X may be based on Y and any number of additional factors. The terms “branch” and “jump” are used herein as equivalents to refer to a discontinuity in instruction retrieval and execution. Accordingly, the terms “jump instruction” and “branch instruction” are used interchangeably. DETAILED DESCRIPTION The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment. While pre-fetching can increase processor performance by reducing pipeline stalls associated with conditional constructs or instruction memory latency in linear code, pre-fetching is not without its issues. The higher the number of instructions pre-fetched, the higher the likelihood that the pre-fetch buffer contains the target instruction of an executed jump or branch. Accordingly, some conventional processors pre-fetch as many instructions as possible. Unfortunately, if the destination instruction referenced by a jump or branch is too distant from the jump or branch instruction, the destination instruction will not be stored in the pre-fetch buffer, and because memory accesses are typically energy intensive, the pre-fetching will have wasted substantial energy retrieving instructions from memory that will not be executed. Energy consumption may be reduced by pre-fetching fewer instructions. In conventional processors, pre-fetch buffer size is determined as a compromise between performance and energy optimization. Embodiments of the present disclosure include a dynamically variable pre-fetch threshold. The pre-fetch threshold determines the number of instructions pre-fetched and stored in the pre-fetch buffer, and varying the pre-fetch threshold allows the number of instructions pre-fetched and stored in the pre-fetch buffer to vary under instruction control. When a portion of the instruction stream including conditional constructs for which the destination instruction of a jump or branch is likely to reside in the pre-fetch buffer is to be executed, the pre-fetch threshold may be increased to improve execution performance. 
In contrast, when a portion of the instruction stream including discontinuities like sub routine calls, unconditional branches, or conditional constructs for which the destination instruction of the discontinuity is not likely to reside in the pre-fetch buffer (e.g., the pre-fetch buffer is too small to contain the jump and its destination) is to be executed, the pre-fetch threshold may be decreased to reduce energy consumption. Embodiments disclosed herein include instructions that allow the pre-fetch threshold to be programmatically adjusted. FIG.1shows a block diagram of a processor100in accordance with various embodiments. The processor100may be a general purpose microprocessor, a digital signal processor, a microcontroller, or other computing device that executes instructions retrieved from a memory device. The processor100includes a fetch unit104, a decode unit106, and an execution unit108. The fetch unit104retrieves instructions from instruction memory110, for execution by the processor100. The fetch unit104provides the retrieved instructions to the decode unit106. The instruction memory110may be included in the processor100, or external to the processor100. The decode unit106examines the instructions received from the fetch unit104, and translates each instruction into controls suitable for operating the execution unit108, processor registers, and other components of the processor100to perform operations that effectuate the instructions. In some embodiments of the processor100, various operations associated with instruction decoding may be performed in the fetch unit104or another operational unit of the processor100. The decode unit106provides control signals to the execution unit108, and other units of the processor100, that cause the processor100to carry out the operations needed to execute each instruction. The execution unit108includes arithmetic circuitry, shifters, multipliers, registers, logical operation circuitry, etc. that are arranged to manipulate data values as specified by the control signals generated by the decode unit106. Some embodiments of the processor100may include multiple execution units that include the same or different data manipulation capabilities. The processor100may include various other components that have been omitted fromFIG.1as a matter of clarity. For example, embodiments of the processor100may include instruction and/or data caches, memory, communication devices, interrupt controllers, timers, clock circuitry, direct memory access controllers, and various other components and peripherals. The fetch unit104includes a pre-fetch unit102. The pre-fetch unit102pre-fetches instructions from instruction memory110prior to when the instructions are to be decoded, and stores the instructions until the instructions are needed for decoding and execution. By pre-fetching instructions, the processor100can provide stored instructions for execution without the delays often associated with fetching instructions from a memory device that may be unable to provide instructions at as high a rate as the processor100is able to execute the instructions. The pre-fetch unit102allows the number of instructions pre-fetched and stored for later execution to vary based on pre-fetch threshold information provided via instructions executed by the processor100. 
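As a rough illustration of this behavior, the following sketch models a pre-fetch unit whose look-ahead depth is capped by a programmable threshold. It is a simplified software model under stated assumptions, not the patent's hardware: the class, method, and field names are placeholders, instructions are modeled as list entries, and consumption of instructions by the decode stage is omitted.

# Simplified, illustrative model of a threshold-limited pre-fetch buffer.
class PrefetchUnit:
    def __init__(self, capacity=16, threshold=16):
        self.capacity = capacity    # size of the instruction storage
        self.threshold = threshold  # pre-fetch threshold (words fetched ahead)
        self.buffer = []            # pre-fetched instruction words
        self.fetch_count = 0        # total words fetched from memory

    def set_threshold(self, value):
        # Modeled after a pre-fetch threshold instruction updating the register.
        self.threshold = min(value, self.capacity)

    def prefetch(self, memory, pc):
        # Fetch ahead of pc, but never more than `threshold` words ahead.
        while (len(self.buffer) < self.threshold
               and pc + len(self.buffer) < len(memory)):
            self.buffer.append(memory[pc + len(self.buffer)])
            self.fetch_count += 1

    def flush(self):
        # A jump to a word not in the buffer discards everything fetched so far.
        wasted = len(self.buffer)
        self.buffer.clear()
        return wasted

memory = [f"inst{i}" for i in range(64)]
pfu = PrefetchUnit()
pfu.set_threshold(4)          # small threshold ahead of a distant jump
pfu.prefetch(memory, pc=0)
print("fetched ahead:", len(pfu.buffer), "| wasted on flush:", pfu.flush())

In this model, lowering the threshold before a long forward jump reduces the number of words that are fetched and then discarded when the buffer is flushed, while raising it ahead of code whose jump targets fit in the buffer keeps more of those targets resident.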
A software development system that constructs programs for execution by the processor 100 analyzes jump and branch constructs during program development, and determines whether and/or how much pre-fetching will benefit the execution of the program. If pre-fetching will reduce pipeline stalls caused by the jump or branch instructions, then the software development system will insert, in the instruction set (i.e., the program) to be executed by the processor 100, instructions that set the pre-fetch threshold to allow pre-fetching of the jump destination instruction. If pre-fetching will not reduce pipeline stalls caused by particular jump or branch instructions, then the software development system will insert, in the instruction set to be executed by the processor 100, instructions that reduce the pre-fetch threshold to reduce energy consumed by pre-fetching instructions that will not be executed. FIG. 2 shows a block diagram of the pre-fetch unit 102 in accordance with various embodiments. The pre-fetch unit 102 includes instruction storage 202 and pre-fetch control logic 204. The instruction storage 202 includes an array of storage cells, such as registers and/or memory devices, that store instructions retrieved from the instruction memory 110. Instructions stored in the instruction storage 202 are provided to the decode unit 106 for execution by the execution unit 108. The instruction storage 202 may include storage for any number of instructions. For example, embodiments of the instruction storage 202 may store 16, 32, 64, 128, or another number of instruction words. Similarly, the storage cells of the instruction storage 202 may be of any width needed to store instructions executed by the processor 100. For example, the storage cells may be 16 bits in width if the processor 100 executes instructions that are 16 bits (or a multiple of 16 bits) in width. Similarly, the storage cells may be 32 bits in width if the processor 100 executes instructions that are 32 bits (or a multiple of 32 bits) in width, etc. As instructions are pre-fetched, the pre-fetched instructions may be sequentially stored in the instruction storage 202. The pre-fetch control logic 204 is coupled to the instruction storage 202, and controls pre-fetching of instructions from instruction memory 110, storing of pre-fetched instructions in the instruction storage 202, and reading of instructions from the instruction storage 202 for execution. The pre-fetch control logic 204 includes read-write control logic 208 and a pre-fetch threshold register 206 coupled to the read-write control logic 208. The read-write control logic 208 may include address and access control logic for reading and writing to the instruction storage 202. For example, the read-write control logic 208 may include logic to implement reading and writing of a circular buffer in the instruction storage 202. Storage cells of the circular buffer may be written/over-written when the contents of the storage cells are provided to the decode unit 106, when the circular buffer is flushed due to a flow redirection requiring instructions not already in the buffer, etc. The read-write control logic 208 may also include pre-fetch address and control logic for triggering fetch operations by the fetch unit 104 for fetching of instructions that are to be stored in the instruction storage 202 (i.e., pre-fetching instructions).
For example, when storage cells of a circular buffer formed in the instruction storage202are available to be written/over-written, the read-write control logic208may trigger the fetch unit104to fetch instructions to be written to the buffer. The pre-fetch threshold register206limits the number of instructions pre-fetched and stored in the instruction storage202in accordance with a pre-fetch threshold value stored in the pre-fetch threshold register206. For example, a pre-fetch threshold value stored in the pre-fetch threshold register206may control the number of instruction words that can be pre-fetched and stored in the instruction storage202in advance of execution. If the pre-fetch threshold value specifies that only a few instruction words ahead of an instruction currently being executed may be pre-fetched and stored in the instruction storage, the number of pre-fetch cycles wasted when a program discontinuity causes the buffer to be flushed is reduced. If the pre-fetch threshold value specifies pre-fetching of a greater number of instruction words, then stall cycles will be reduced if the instruction storage contains the pre-fetched destination instruction associated with an executed jump or branch instruction. Similarly, specifying pre-fetching of a greater number of instruction words can reduce stall cycles for linear code fetched from a slow instruction memory, which adds bus stall cycles at high clock frequencies. In some embodiments of the pre-fetch control logic204, the pre-fetch threshold value stored in the pre-fetch threshold register206controls the number of instruction words pre-fetched by setting a maximum offset between a read pointer that controls instructions read from the instruction storage202and a write pointer that controls instructions written to the instruction storage202. In other embodiments of the pre-fetch control logic204, the pre-fetch threshold value controls the number of instruction words pre-fetched by setting the number of storage cells of the instruction storage202included in a circular buffer that stores pre-fetched instruction words. The pre-fetch threshold value stored in the pre-fetch threshold register206is provided via an instruction executed by the processor100. A pipeline element (e.g., the decode unit106or execution unit108) identifies an instruction passing through the pipeline that sets the pre-fetch threshold value, extracts the pre-fetch threshold value from the instruction, and provides the pre-fetch threshold value to the pre-fetch unit for storage in the pre-fetch threshold register206. When the pre-fetch threshold value stored in the pre-fetch threshold register206changes, the number of instructions, sequentially following a currently executing instruction, that are pre-fetched changes. Some embodiments of the processor100can decode and execute instructions of various lengths. For example, the decode unit106may decode instructions that are 16 bits in length and instructions that are 32 bit in length. To reduce overhead associated with execution of instructions that set a pre-fetch threshold, the decode unit106may simultaneously process a pre-fetch threshold instruction and another instruction. For example, a 16 bit pre-fetch threshold instruction may be simultaneously decoded with another 16 bit instruction if the decode unit106can receive and decode 32 bit instructions. The decode unit106may provide the pre-fetch threshold value to the pre-fetch unit102. 
Thus, the processor100may provide instruction based pre-fetch threshold adjustment with little or no additional execution cycle overhead. FIG.3shows an exemplary instruction300for controlling pre-fetch threshold in accordance with various embodiments. In some embodiments, the instruction300may be dedicated to setting the pre-fetch threshold (i.e., a command code dedicated to setting pre-fetch threshold). In other embodiments, the instruction300may be a general-purpose instruction, such as a load or store instruction, that loads a value into a register (e.g., the pre-fetch threshold register), where the pre-fetch threshold register is, for example, memory mapped. In other embodiments, the instruction300may be any instruction executable by the processor100that includes a field that is used to transfer pre-fetch threshold information. The instruction300includes a THRES field302that specifies the pre-fetch threshold value to be applied in the pre-fetch unit102. The THRES field302may contain a coded value that indicates a maximum number of instruction words to be pre-fetched. For example, a single bit THRES field302may be used, where a “1” indicates that the maximum number of instruction words to be pre-fetched corresponds to the maximum number of instruction words storable in the instruction storage202(or any predetermined number of instruction words), and a “0” indicates that no (or any predetermined number of) instruction words are to be pre-fetched. In some embodiments, the THRES field302may contain a value that specifies a number of instruction words to be pre-fetched. In other embodiments, the pre-fetch threshold value may be encoded in the command code304of the instruction300or in another field of the instruction300. In some embodiments of the processor100, the execution unit108or other pipeline element may extract the value from the THRES field302and apply further processing to the value prior to providing the value to the pre-fetch unit102. For example, decoding may be applied to the value provided in the THRES field302, and the decoded value provided to the pre-fetch unit102. FIG.4shows an instruction sequence400that includes a pre-fetch threshold set to optimize performance in accordance with various embodiments. The instruction stream400includes a pre-fetch threshold instruction402, jump instructions404and408, and jump destination instructions406and410. Instruction406is the destination of jump instruction404, and instruction410is the destination of jump408. While the instruction sequence400is under development, the software development system analyzes the sequence and identifies instructions404,406,408, and410. The software development system computes the distances between the various jump and destination instructions, and determines whether the instruction storage202is large enough to store pre-fetched instructions encompassing jump instructions404,408and jump destination instructions406,410. If the instruction storage202is large enough to store, for example, 16 instruction words, and the jump instruction404through the destination instruction410includes 8 instruction words, then software development system may determine that the sequence from jump instruction404to destination instruction410can be pre-fetched to improve execution efficiency. 
Accordingly, the software development system can insert pre-fetch threshold instruction 402 in the instruction sequence ahead of the jump instruction 404, where the pre-fetch threshold instruction 402 specifies a pre-fetch threshold value large enough to allow the sequence from the jump instruction 404 through the destination instruction 410 to be pre-fetched and stored in the instruction storage 202. The pre-fetch threshold instruction 402 sets a pre-fetch threshold of 16 instruction words (e.g., the entire instruction storage 202). In other embodiments, the pre-fetch threshold instruction 402 may set the pre-fetch threshold to a different value (e.g., 8, 12, etc.). FIG. 5 shows an instruction sequence 500 that includes a pre-fetch threshold set to reduce pre-fetch energy use in accordance with various embodiments. The instruction sequence 500 includes a pre-fetch threshold instruction 502, jump instruction 504, and jump destination instruction 506. While the instruction sequence 500 is under development, the software development system analyzes the sequence and identifies instructions 504 and 506. The software development system computes the distance between instructions 504 and 506, and determines whether the instruction storage 202 is large enough to store pre-fetched instructions encompassing instructions 504 and 506. If the instruction storage 202 is large enough to store, for example, 16 instruction words, and the jump instruction 504 through the destination instruction 506 includes 200 instruction words, then the software development system may determine that the sequence from jump instruction 504 to destination instruction 506 cannot be pre-fetched to improve execution efficiency. Accordingly, the software development system can insert pre-fetch threshold instruction 502 in the instruction sequence ahead of the jump instruction 504, where the pre-fetch threshold instruction 502 specifies a pre-fetch threshold value small enough to reduce extraneous pre-fetching of instructions between instruction 504 and instruction 506 that may not be executed. Thus, the relatively small pre-fetch threshold specified by instruction 502 may save the energy associated with pre-fetching instructions that are not executed. The pre-fetch threshold instruction 502 sets a pre-fetch threshold of 4 instruction words. In other embodiments, the pre-fetch threshold instruction 502 may set the pre-fetch threshold to a different value (e.g., 2, 0, half the instruction storage, etc.). FIG. 6 shows a flow diagram for a method 600 for setting pre-fetch thresholds in accordance with various embodiments. Though depicted sequentially as a matter of convenience, at least some of the actions shown can be performed in a different order and/or performed in parallel. Additionally, some embodiments may perform only some of the actions shown. At least some of the operations of the method 600 may be performed by a processor executing instructions retrieved from a non-transitory computer-readable storage medium. In block 602, a software program executable by the processor 100 is under development. A tool of a software development system, e.g., a compiler, assembler, or other tool, analyzes instructions of the program and identifies jump or branch instructions and the destination instructions associated with a break in program flow caused by execution of the jump or branch instructions. In block 604, the tool determines the distance (offset or number of instruction words) between the identified jump or branch instructions and the associated destination instructions.
In some embodiments, where jump instructions are in close proximity, the tool may determine the distance between a jump instruction and a destination instruction of a subsequent jump instruction. In block 606, the tool determines whether the distance is greater than the number of instructions/instruction words that can be stored in the instruction storage 202 of the pre-fetch unit 102. If the distance exceeds the capacity of the instruction storage 202, then, in block 608, the tool inserts into the instruction sequence a pre-fetch threshold instruction that sets the pre-fetch threshold of the pre-fetch unit 102 to a relatively low value (e.g., 0, 2, 4, etc.). If the distance does not exceed the capacity of the instruction storage 202, then, in block 610, the tool inserts into the instruction sequence a pre-fetch threshold instruction that sets the pre-fetch threshold of the pre-fetch unit 102 to a relatively high value (e.g., a value large enough to allow storage of the instructions from the jump through the jump destination). In block 612, the tool identifies a set of successive (i.e., adjacent) instructions in the instruction stream generated by the tool. The set of successive instructions lacks flow redirection instructions (jump, call, etc.) and therefore will be sequentially executed by the processor 100. If the number of successive sequentially executed instructions is greater than a predetermined value, then, in block 614, the tool inserts into the instruction sequence a pre-fetch threshold instruction that sets the pre-fetch threshold of the pre-fetch unit 102 to a relatively high value (maximum pre-fetch). Setting the pre-fetch threshold to a high value may accelerate execution of the set of successive instructions by reducing pipeline stalls associated with retrieving the instructions from memory. The tool may analyze the entirety of the software program under development in accordance with the operations of blocks 602 to 614. For example, each program discontinuity (jump, call, etc.) in the software program may be processed in accordance with blocks 602-610, and each set of successive sequentially executed instructions of the software program may be processed in accordance with blocks 612-614. Because the analysis and control of the pre-fetch threshold is performed at program build time rather than program run time, the processor 100 need not include logic for determining whether the pre-fetch threshold should be increased or decreased to best accommodate conditional constructs. Accordingly, embodiments of the processor 100 may be less costly and more power efficient than processors that analyze instructions for setting the pre-fetch threshold at run time. In block 616, the processor 100 is executing the program. The processor 100 is pre-fetching instructions from the instruction memory 110, storing instructions in the instruction storage 202, reading instructions from the instruction storage 202, and providing the pre-fetched instructions read from the instruction storage 202 for execution. In block 618, a pipeline element (e.g., decode unit 106 or execution unit 108) of the processor 100 identifies a pre-fetch threshold instruction that is being executed. For example, the command code of the instruction is identified. The pipeline element extracts a pre-fetch threshold value from the identified instruction, and provides the pre-fetch threshold value to the pre-fetch unit 102. In block 620, the pre-fetch unit sets the pre-fetch threshold based on the pre-fetch threshold value. That is, the pre-fetch unit 102 sets the number of instruction words that can be pre-fetched from instruction memory 110 and stored in the instruction storage in accordance with the pre-fetch threshold value.
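The build-time portion of this flow (blocks 602 through 610) can be sketched as a small pass over the program. The sketch below is an assumption-laden illustration rather than the actual analysis and control tool: the instruction representation, the threshold constants, and the generic set-threshold instruction inserted before each jump are invented for the example, distances are measured on the original instruction list, and the handling of long linear runs (blocks 612-614) is omitted.

# Toy build-time pass following blocks 602-610 of FIG. 6 (illustrative only).
BUFFER_WORDS = 16   # assumed capacity of the instruction storage
LOW, HIGH = 4, 16   # example low/high pre-fetch threshold values

def insert_thresholds(program):
    # program: list of dicts, e.g. {"op": "jmp", "target": 12} or {"op": "add"}
    out = []
    for index, inst in enumerate(program):
        if inst["op"] in ("jmp", "branch"):
            # Distance in words between the jump and its destination.
            distance = abs(inst["target"] - index)
            value = HIGH if distance <= BUFFER_WORDS else LOW
            out.append({"op": "set_prefetch_threshold", "value": value})
        out.append(inst)
    return out

program = [{"op": "add"}, {"op": "jmp", "target": 6}] + [{"op": "add"}] * 6
for inst in insert_thresholds(program):
    print(inst)

Because the jump's destination in this toy program lies well within the assumed 16-word buffer, the pass inserts a high-threshold instruction ahead of it; a jump whose destination were hundreds of words away would instead receive the low threshold.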
That is, the pre-fetch unit102sets the number of instruction words that can be pre-fetched from instruction memory110and stored in the instruction storage in accordance with the pre-fetch threshold value. FIG.7shows a block diagram of a system700for setting pre-fetch thresholds in a set of instructions under development in accordance with various embodiments. The system700includes a processor702and storage704. The processor702is communicatively coupled to the storage704. The processor702may be a general-purpose microprocessor, a digital signal processor, a microcontroller, or other device capable of executing instructions retrieved from a computer-readable storage medium. Processor architectures generally include execution units (e.g., fixed point, floating point, integer, etc.), storage (e.g., registers, memory, etc.), instruction decoding, peripherals (e.g., interrupt controllers, timers, direct memory access controllers, etc.), input/output systems (e.g., serial ports, parallel ports, etc.) and various other components and sub-systems. The storage704is a non-transitory computer-readable storage medium suitable for storing instructions that are retrieved and executed by the processor702to perform the functions disclosed herein. The storage704may include volatile storage such as random access memory, non-volatile storage (e.g., a hard drive, an optical storage device (e.g., CD or DVD), FLASH storage, read-only-memory), or combinations thereof. The system700may include other components and subsystems (not shown) such as a display device, input devices, and various interfaces. The display device may produce images rendered by the processor702for viewing by a user of the system700. The display device may be a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, or any other type of display device suitable for producing images rendered by the processor702. An input device is an instrument that can be manipulated by a user to control the system700. The input device may be a keyboard, a touch panel integrated with the display device, a pointing device such as a mouse, a trackball, a touch pad, a camera-based input device, or any other instrument suitable for manipulation by a user to operate the system700. Interfaces suitable for use in the system700may include a network adapter that allows the system700to communicate with other devices via wired or wireless networking, multi-media interfaces such as sound generation systems, sound capture systems, video capture systems, etc. In some implementations, the system700may be embodied in a computer, such as a desktop computer, a workstation computer, a rack mount computer, a notebook computer, or other form of computer known in the art. The storage704includes a software development system706and a software program under development710. The program under development710is a sequence of instructions executable by the processor100. The software development system706includes tools for generating the program under development710, such as a compiler, an assembler, a linker, etc.
The software development system706also includes a pre-fetch threshold analysis and control tool708that analyzes the instructions of the program under development710, identifies conditional constructs including jump and branch instructions and the destinations of the jump and branch instructions, determines whether the pre-fetch unit102can be applied to accelerate execution of the conditional constructs, and inserts pre-fetch threshold instructions in the program under development710to set the pre-fetch threshold applied by the pre-fetch unit102as described herein. The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
11861368
DESCRIPTION OF EXAMPLES An apparatus has processing circuitry having two or more execution states for execution of instructions; branch history storage to store branch history information indicative of at least one branch property for a sequence of branches; and prediction circuitry to determine a prediction for controlling execution of at least one instruction by the processing circuitry, where the prediction circuitry is configured to determine a first type of prediction based at least on a first prediction table storing prediction information looked up based on at least a first portion of the branch history information corresponding to a first predetermined number of branches. Performing the prediction based on information from a table looked up based on at least a portion of the branch history information can be useful for performance reasons. The same instruction may behave in different ways depending on what other instructions have been executed before that instruction. The branch history information (which indicates branch properties for a sequence of branches) can be used as an indication of the path of program flow that was taken leading up to the instruction being predicted. Hence, by using the branch history information for selecting which entry of prediction information from the first prediction table is used to form the prediction, this enables more accurate prediction of the behaviour expected for the current scenario in which the instruction is encountered. However, while prediction mechanisms can be useful for performance, in recent years it has been discovered that, in the absence of a suitable defence mechanism, such prediction mechanisms can potentially introduce security vulnerabilities that can be exploited by attackers to gain access to sensitive information. If the attacker can maliciously train the prediction circuitry to generate an incorrect prediction for a given instruction, a number of subsequent instructions may speculatively be executed based on the incorrect prediction, and even if later the prediction is determined to be incorrect and so the architectural effects of the incorrectly executed instructions are then reversed, the incorrectly executed instructions may have caused changes in which addresses have data allocated to a cache, which could be probed by an attacker using cache-timing side-channel methods. It is possible to use this type of attack to leak information about sensitive information which is inaccessible to the attacker's program code but is accessible to victim code executing in a more privileged execution state. Recently, a new form of this attack has been described which is based on maliciously training the branch history information stored in the branch history storage before an execution state switch from a first execution state to a second execution state more privileged than the first execution state, in an attempt to cause the wrong entry of a prediction table to be selected based on the branch history information when making predictions influencing the execution of instructions of victim code executing in the second execution state after the execution state switch. 
If the attacker can find parts of the victim code that access secret information and cause those instructions to be incorrectly executed (possibly in a sequence not envisaged by the developer of the victim code) due to the incorrect prediction based on the maliciously trained branch history information, this may affect cache allocation and allow deductions about the secret information to be made based on cache timing side-channels. While relatively difficult to mount, this attack has been demonstrated in practice, even on processor hardware which has hardware-implemented defences against other forms of cache-timing side-channel attacks. This type of attack can be referred to as “branch history injection” (BHI) or “Spectre-BHB”. One approach for defending against a BHI attack could be to clear either the first prediction table or the branch history storage, or both, in response to an execution state switch to a more privileged execution state. However, this may have a negative effect on performance. Another approach can be to ensure that the lookup mechanism for looking up the first prediction table uses a more precise tagging mechanism to avoid prediction information allocated for one instruction being accessible when making a prediction for another instruction. However, such a more precise tagging mechanism may be more expensive to implement in terms of circuit area and power consumption (e.g. requiring wider prediction table entries and comparison circuit logic to support a larger number of bits), and so may be less preferred. In the examples discussed below, where a first prediction table is to be looked up based on a first portion of the branch history information corresponding to a first predetermined number of branches, prediction control circuitry is provided to:
in response to detecting the execution state switch of the processing circuitry from a first execution state to a second execution state more privileged than the first execution state, disable use of the first prediction table in determining the first type of prediction; and
in response to detecting that a number of branches for which an update has been made to the branch history storage since the execution state switch is greater than or equal to the first predetermined number, re-enable use of the first prediction table in determining the first type of prediction.
This can avoid the need to implement more precise tagging of prediction information in the first prediction table, but nevertheless defends against the BHI attack because the first prediction table is prevented from being used for determining the first type of prediction until the number of branches for which a branch property has been allocated to the branch history storage since the execution state switch has reached or exceeded the first predetermined number of branches corresponding to the portion of the branch history information that is actually used for looking up the first prediction table. Hence, it can be policed that the branch history information used for the prediction lookup is branch history information allocated since the execution state switch, which removes the opportunity for an attacker executing code in the first execution state to maliciously train branch history information to cause incorrect predictions in the second execution state. This approach to defending against BHI attacks can be better for performance than an implementation which clears the branch history information or first prediction table in response to the execution state switch.
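A minimal sketch of this control behaviour, assuming a hypothetical FirstTableGuard class and treating the "first predetermined number" as a plain integer, might look as follows; the disclosed circuitry is hardware, so this is only a software model of the policy.

```cpp
#include <cstdint>

// Minimal model (invented class and method names) of the control described
// above: the first prediction table is disabled when execution switches to a
// more privileged state, and re-enabled once enough branches have updated the
// branch history storage that the looked-up history portion is post-switch only.
class FirstTableGuard {
public:
    explicit FirstTableGuard(uint32_t firstPredeterminedNumber)
        : required_(firstPredeterminedNumber) {}

    // Called when the processing circuitry switches to a more privileged state.
    void onSwitchToMorePrivileged() {
        enabled_ = false;
        branchesSinceSwitch_ = 0;
    }

    // Called each time a branch causes an update to the branch history storage.
    void onBranchHistoryUpdate() {
        if (!enabled_ && ++branchesSinceSwitch_ >= required_) {
            enabled_ = true;  // the history used for the lookup is now safe
        }
    }

    bool firstTableEnabled() const { return enabled_; }

private:
    const uint32_t required_;            // first predetermined number of branches
    uint32_t branchesSinceSwitch_ = 0;
    bool enabled_ = true;
};
```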
For example, there may be occasions when the apparatus executes instructions in the second execution state for only a relatively short period of time before switching back to the first execution state, and so if the period of execution in the second execution state is short enough, the branch history storage may still be storing branch history information relating to instructions executed in the first execution state before the switch to the second execution state. This branch history information could be useful for selecting prediction information for the instructions executed after the return to the first execution state, and so can help improve performance by enabling more accurate predictions, but would be lost in an implementation which clears the branch history information in response to the execution state switch to the second execution state. In response to a return to the first execution state, which occurs after the execution state switch at a time when the number of branches causing an update to the branch history storage since the execution state switch is still less than the first predetermined number, the prediction control circuitry may re-enable use of the first prediction table in determining the first type of prediction. By not clearing the branch history information in response to the execution state switch, but instead temporarily disabling use of the first prediction table for forming the first type of prediction until either the number of branches whose property information is allocated to the branch history storage since the execution state switch reaches or exceeds the first predetermined number, or the processing circuitry returns to the first execution state, this helps to improve performance in comparison to fully clearing the branch history storage in response to the execution state switch. The first prediction table may not be the only table of prediction information used for generating the first type of prediction. In some examples, the prediction circuitry may determine the first type of prediction based on at least the first prediction table and a second prediction table storing prediction information looked up based on at least a second portion of the branch history information corresponding to a second predetermined number of branches, where the second predetermined number is greater than the first predetermined number. It will be appreciated that the number of tables used to form the first type of prediction may in fact be greater than two and so the first prediction table and the second prediction table can be any two of the prediction tables used to form the first type of prediction. Hence, describing presence of a first prediction table and a second prediction table does not exclude the possibility that there could also be a third prediction table, fourth prediction table, etc., even if those additional prediction tables are not explicitly described. The prediction control circuitry may, in response to detecting the execution state switch, disable use of the second prediction table in determining the first type of prediction. In response to detecting that the number of branches causing an update to the branch history storage since the execution state switch is greater than or equal to the second predetermined number, the prediction control circuitry may re-enable use of the second prediction table in determining the first type of prediction.
Hence, use of the first prediction table for determining the first type of prediction may be re-enabled earlier than re-enabling use of the second prediction table for determining the first type of prediction. This can be useful for performance because, compared to an approach which fully disables speculation based on the first type of prediction altogether until all prediction information used in the first type of prediction can be considered safe, performance recovery after the execution state switch can be more gradual and allows each type of prediction table to be used as soon as the portion of the branch history information used to look up that prediction table is determined to be safe (when that portion is based exclusively on properties of branches executed after the execution state switch). This avoids the performance penalty of unnecessarily preventing use of the first prediction table while waiting for the branch history information used by the second prediction table to become safe. In one example, the prediction circuitry determines the first type of prediction based on two or more tagged-geometric prediction tables, including the first prediction table, which are looked up based on respective portions of the branch history information corresponding to successively increasing numbers of branches. The prediction circuitry selects, as the first type of prediction, a prediction based on the tagged-geometric prediction table which, among the tagged-geometric prediction tables currently enabled for use and which detect a lookup hit, is looked up based on a portion of branch history information corresponding to the greatest number of branches. Following the execution state switch, the prediction control circuitry gradually re-enables use of the respective tagged-geometric prediction tables in ascending order of the number of branches corresponding to the respective portions of the branch history information used for looking up the respective tagged-geometric prediction tables. For some types of prediction, tagged-geometric predictors can be particularly good for performance in comparison to other types of predictor mechanisms. If a hit can be detected in a table looked up based on a longer sequence of branch history, the prediction made is more likely to be accurate than if the prediction is based on a hit detected in a table looked up based on a shorter sequence of branch history, because the longer branch history is less likely to encounter “aliasing” lookups where a hit is detected in the lookup although in fact the predicted behaviour represented by the hit entry does not match the actual behaviour of the instruction for which the prediction is made. However, a table looked up based on a long sequence of branch history is more likely to encounter a miss than a table looked up based on a shorter sequence of branch history, and there may be some instructions whose behaviour does not depend strongly on outcomes of branches executed a relatively long time ago, which might not be able to be predicted well using a table looked up based on a longer sequence of branch history. Therefore, a tagged-geometric predictor balances these competing factors by looking up multiple tables based on different lengths of branch history and forming the prediction based on the table which, out of those tables currently enabled for use and which detect a hit in the lookups, is the table looked up based on the longest portion of branch history.
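The selection among the tagged-geometric tables that are both enabled and hitting could be modelled as below; the TableResult structure and the function name are invented for illustration, with tables ordered by ascending history length.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Sketch of the selection policy described above (names are illustrative).
// Tables are ordered by ascending history length; the chosen prediction comes
// from the longest-history table that is both enabled and hit, else the base
// table, else no prediction is made by this predictor.
struct TableResult {
    bool enabled = false;
    bool hit = false;
    uint64_t targetAddress = 0;
};

std::optional<uint64_t> selectTargetAddress(
        const std::vector<TableResult>& taggedGeometricTables,  // index 0 = shortest history
        const TableResult& baseTable) {
    for (auto it = taggedGeometricTables.rbegin(); it != taggedGeometricTables.rend(); ++it) {
        if (it->enabled && it->hit) {
            return it->targetAddress;  // longest enabled and hitting history wins
        }
    }
    if (baseTable.hit) {
        return baseTable.targetAddress;  // fallback independent of branch history
    }
    return std::nullopt;  // no target address prediction from this predictor
}
```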
Tagged-geometric predictors can provide better performance than a predictor based on the lookup of a single table based on a single length of branch history. The technique discussed above for protecting against BHI attacks can be particularly useful for tagged-geometric predictors to allow progressive recovery of performance after the execution state switch. In a tagged-geometric example, the prediction circuitry may also form the first type of prediction based on a base prediction table looked up based on a value which does not depend on the branch history information in the branch history storage at all (e.g. the lookup value for the base prediction table could be derived from a program counter address). The base prediction table can be used as a fallback predictor in case none of the tagged-geometric prediction tables detect a hit. Use of predictions based on the base prediction table may remain enabled after the execution state switch, as they do not depend on the branch history information in the branch history storage and so are not vulnerable to the BHI attacks described above. The first type of prediction may not be the only type of prediction which depends on the branch history information stored in the branch history storage. The prediction circuitry may also determine a second type of prediction depending on at least a portion of the branch history information. Following the execution state switch, the prediction control circuitry may enable use of the branch history information for determining the second type of prediction, even when use of the first prediction table for determining the first type of prediction is disabled. For example, the second type of prediction may be a form of prediction which is much less likely to be exploited by an attacker to yield information about secret information accessible to the program code executing in the second execution state, and so may be considered safe to proceed after the execution state switch even if based on branch history information allocated based on branches executed in the first execution state before the execution state switch. Therefore, there may be no need to disable use of the branch history information for determining the second type of prediction. This may be another reason why the technique discussed above can be useful for performance in comparison to techniques which clear the branch history information in response to the execution state switch. Even if it helps address a BHI attack based on the first type of prediction, clearing the branch history information would negatively affect performance by also preventing use of the second type of prediction. In contrast, the technique discussed above of disabling use of the first prediction table for a time following the execution state switch allows the BHI attack based on the first type of prediction to be defended against without incurring the performance impact of clearing the branch history information which would reduce prediction accuracy for the second type of prediction. Following the execution state switch, the prediction control circuitry may enable use of said at least a portion of the branch history information for determining the second type of prediction, independent of the number of branches which have caused an update to the branch history storage since the execution state switch. 
Hence, it is not necessary to make enabling of the second type of prediction depend on the number of branches for which at least one branch property has been allocated to the branch history storage since the execution state switch. The second type of prediction may remain enabled following the execution state switch regardless of the number of branches encountered since the execution state switch. In general, the first type of prediction may be any form of prediction which may be considered to pose a vulnerability which could be exploited by a BHI attack, while the second type of prediction may be any form of prediction for which such vulnerability may be unlikely (at least when it is assumed that the first type of prediction is protected against BHI attacks by the technique discussed above). In one example, the first type of prediction comprises a prediction of a branch target address, and the second type of prediction comprises a prediction of whether a branch is taken or not-taken. Prediction of branch outcome (taken/not-taken) is much less likely to cause a vulnerability that could be exploited by a BHI attack, because, provided the target address prediction is protected against attack, incorrectly predicting a branch taken or not-taken when the actual outcome should have been the opposite would merely result in selecting the wrong path out of two valid options intended to be available for selection by the software developer of the code being executed. In contrast, branch target address mispredictions may be of greater concern as an incorrect branch target address prediction could lead to an entirely different instruction being executed which is not one of the instructions intended by the software developer as valid options to be executed after the branch. Therefore, while the first prediction table (and second or further prediction tables) for determining branch target address predictions may temporarily be disabled after the execution state switch for a time and re-enabled based on monitoring of the number of branches as discussed above, the taken/not-taken predictions may continue to be made based on branch history information allocated in the first execution state. This can be useful for performance reasons because sometimes behaviour of branches in the second execution state may depend on the path executed in the first execution state which caused a system call to the second execution state, and so the branch history information allocated in the first execution state may help to improve prediction accuracy for taken/not-taken branch prediction made in the second execution state. The apparatus may comprise a branch counter to count a number of branches causing an update to the branch history storage. The prediction control circuitry may reset the branch counter to a reset value (e.g. 0, or any other initial value) in response to detecting the execution state switch. Following the execution state switch, the prediction control circuitry determines, based on the branch counter, whether to re-enable use of the first prediction table in determining the first type of prediction. For example, the first prediction table may be re-enabled when the branch counter value has reached a certain threshold corresponding to the first predetermined number. Similarly, the second prediction table may be re-enabled when the branch counter has reached a certain threshold corresponding to the second predetermined number. 
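Extending the earlier sketch, the branch counter and the per-table enable signals might be modelled as follows; the per-table history lengths are illustrative values rather than values taken from the disclosure.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the branch counter described above, generalised to several tables
// looked up with successively longer history portions. Each table's enable
// signal is asserted once the counter reaches that table's history length, so
// tables come back online in ascending order after an execution state switch.
class PredictionEnableControl {
public:
    explicit PredictionEnableControl(std::vector<uint32_t> historyLengths)
        : historyLengths_(std::move(historyLengths)) {}

    // Reset the counter to its reset value on an execution state switch.
    void onExecutionStateSwitch() { branchCounter_ = 0; }

    // Count each branch that causes an update to the branch history storage.
    void onBranchHistoryUpdate() {
        if (branchCounter_ < UINT32_MAX) ++branchCounter_;  // saturating count
    }

    // Enable signal for table i (i = 0 is the shortest-history table).
    bool tableEnabled(size_t i) const {
        return branchCounter_ >= historyLengths_[i];
    }

private:
    std::vector<uint32_t> historyLengths_;  // e.g. {4, 8, 16, 32, 64} (illustrative)
    uint32_t branchCounter_ = UINT32_MAX;   // "fully enabled" before any switch occurs
};
```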
The prediction circuitry may look up the first prediction table based on a hash value derived from a program counter address (an address of an instruction representing a current point of program execution) and the first portion of the branch history information. The hash value may have fewer bits than the total number of bits in the program counter address and the first portion, so that it is possible for different combinations of program counter address and value of the first portion to alias to the same hash value. While this might sometimes lead to incorrect hits on an entry trained for a different instruction or a different sequence of branch history, such incorrect hits may be relatively rare and the hashing approach can greatly reduce the circuit area and power cost of looking up the table compared to a precise hashing approach which avoids any aliasing. However, implementations which use such a hash value could potentially be exploited by an attacker using a BHI attack, based on maliciously training the branch history information so that the hash value derived from a first program counter address and the attacker's trained value of the first portion of the branch history information is the same as the hash value previously derived from a second program counter address and a second value of the first portion of the branch history information when allocating an entry of the first predictor table. The attacker may be able to use this to cause an incorrect prediction to be made, which could allow the instructions in the program code executed in the second execution state to be strung together in ways which were not expected by the software developer of the program code, which could cause security vulnerabilities. By using the approach discussed above of temporary disablement of use of the first prediction table for a time after the execution state switch, but re-enabling use of the first prediction table when the number of branches allocated to the branch history storage since the execution state switch is or exceeds the first predetermined number, this makes it safe to continue using the imprecise hashing approach, so avoids the energy/circuit area penalty that would be incurred for a BHI defence based on fully-tagged prediction entries based on a precise lookup that does not permit aliasing. Hence, the technique discussed above can be particularly useful for implementations which look up the prediction table based on a hash value of the program counter address and the first portion of the branch history information. In some examples, each entry of the first prediction table is associated with a context identifier distinguishing entries allocated in different execution contexts, where execution contexts corresponding to the first execution state and the second execution state have different context identifiers, and in a lookup of the first prediction table performed for a first execution context, the prediction circuitry detects a miss for a given entry of the first prediction table when a mismatch is detected between the context identifier for the given entry and a context identifier associated with the first execution context. This can provide a further defence against other variants of speculation-based cache-timing attacks not based on branch history injection. Other defences to those variants of attacks are also possible (e.g. 
based on preventing use of speculatively allocated cached information for a time after a switch from a more privileged execution state back to a less privileged execution state). Hence, the selective disabling of prediction resources described above for addressing the BHI attack need not be the only form of defence provided. A wide variety of other defences are possible for dealing with other variants of speculation-based cache-timing attacks (including defences based purely in software and not requiring hardware protection). The branch history storage may update the branch history information for the sequence of branches based on a first-in-first-out (FIFO) policy. For example, the branch history storage may operate as a circular buffer, where (if there is no empty location available) the property for the latest branch allocated to the history storage overwrites the property for the branch least recently allocated to the branch history storage, with a pointer being used to track the location in the buffer to which the next piece of branch history information is to be allocated. Alternatively, allocation of new branch history information may be made to the same location in the buffer every time, but on each allocation the previous contents of the buffer are shifted up one position to evict the least recently allocated entry which is shifted out at the other end of the buffer from the end at which the new information is inserted. It is not necessary to update the branch history storage for all branches encountered. In some examples, only a subset of branches may cause an update to the branch history storage. The sequence of branches tracked by the branch history storage may therefore be the most recent sequence of branches which meet the criteria for allocating to the branch history storage, rather than the most recent sequence of branches per se. For example, the selection of whether to allocate a particular branch to the branch history storage may be based on branch type or branch alignment (the relative offset of the branch instruction address relative to an alignment boundary). Hence, in response to a newly encountered branch, the branch history storage updates a given location of the branch history storage based on the at least one branch property of the newly encountered branch, where that given location is selected independent of a program counter address of the newly encountered branch. Hence, the branch history information stored in the branch history storage may be considered an indication of “global” branch history—a property reflecting the overall behaviour of the program being executed as it traverses a path of program flow across multiple branches. This may differ from “local” branch history maintained in a prediction table looked up based on a value derived from a program counter address of the branch, where the program counter is used to distinguish which of several entries relates to the program counter address for the branch being looked up. The branch history information in the branch history storage may provide a branch history value which depends on the order in which the branches having the respective branch properties were encountered. The at least one branch property allocated to the branch history storage for each of the branches can vary between different implementations.
For example, the at least one branch property could be a taken/not-taken outcome for the given branch, or a branch target address for the given branch, or a combination of (or hash value derived from) the taken/not-taken outcome and branch target address, and/or another property of each branch. The techniques discussed above can be used for any type of prediction which is based on a prediction table looked up based on the branch history information, which could potentially be vulnerable to BHI attacks. For example, one particularly useful form of prediction that could be protected using the mechanisms discussed above can be where the first type of prediction comprises branch target address prediction. More particularly, the first type of prediction may comprise polymorphic branch target address prediction, where the first prediction table supports two or more entries being allocated to provide two or more different target addresses corresponding to the same branch instruction but different values of the first portion of the branch history information. Polymorphic branch target address prediction can be useful for more complex branches whose target address may be data-dependent and so one instance of executing the branch may calculate a different target address to another instance. A hash of the program counter address of the branch with a portion of branch history information can be a way of distinguishing the scenario in which a given branch is encountered in a given program, and so allow different entries for different target addresses to be distinguished, but this opens an opportunity for an attacker to modify the branch history information in such a way that an entry allocated for one program counter address may be used to provide a prediction for a different branch having a different program counter address due to aliasing of the hash values as discussed above—this can be exploited in a BHI attack. The technique of selecting when to re-enable use of the first prediction table based on the number of branches encountered since the execution state switch as discussed above can therefore be particularly useful for polymorphic branch target address predictions. However, the techniques discussed above can also be used for types of prediction other than branch predictions. Prediction of instruction behaviour for non-branch instructions can nevertheless depend on a lookup based on branch history information, so could potentially be vulnerable to branch history injection attacks. Such predictions could cause incorrect speculative execution which can cause cache allocations/evictions which can be probed with cache timing measurements, potentially leaking sensitive information. The defence mechanisms discussed above can therefore be useful for such other types of predictions. For example, the first type of prediction could comprise a prefetch prediction for determining data or instructions to be prefetched into a cache, or a value prediction to predict a value of data or instructions to be loaded from memory. Specific examples are now described with reference to the drawings. FIG.1schematically illustrates an example of a data processing apparatus2. The data processing apparatus has a processing pipeline4which includes a number of pipeline stages. 
In this example, the pipeline stages include a fetch stage6for fetching instructions from an instruction cache8; a decode stage10for decoding the fetched program instructions to generate micro-operations to be processed by remaining stages of the pipeline; an issue stage12for checking whether operands required for the micro-operations are available in a register file14and issuing micro-operations for execution once the required operands for a given micro-operation are available; an execute stage16for executing data processing operations corresponding to the micro-operations, by processing operands read from the register file14to generate result values; and a writeback stage18for writing the results of the processing back to the register file14. It will be appreciated that this is merely one example of possible pipeline architecture, and other systems may have additional stages or a different configuration of stages. For example, in an out-of-order processor a register renaming stage could be included for mapping architectural registers specified by program instructions or micro-operations to physical register specifiers identifying physical registers in the register file14. The execute stage16includes a number of processing units, for executing different classes of processing operation. For example the execution units may include a scalar arithmetic/logic unit (ALU)20for performing arithmetic or logical operations on scalar operands read from the registers14; a floating point unit22for performing operations on floating-point values; a branch unit24for evaluating the outcome of branch operations and adjusting the program counter which represents the current point of execution accordingly; and a load/store unit26for performing load/store operations to access data in a memory system8,30,32,34. In this example, the memory system includes a level one data cache30, the level one instruction cache8, a shared level two cache32and main system memory34. It will be appreciated that this is just one example of a possible memory hierarchy and other arrangements of caches can be provided. The specific types of processing unit20to26shown in the execute stage16are just one example, and other implementations may have a different set of processing units or could include multiple instances of the same type of processing unit so that multiple micro-operations of the same type can be handled in parallel. It will be appreciated thatFIG.1is merely a simplified representation of some components of a possible processor pipeline architecture, and the processor may include many other elements not illustrated for conciseness. As shown inFIG.1, the apparatus2includes a branch predictor40for predicting outcomes of branch instructions. The branch predictor is looked up based on addresses of instructions provided by the fetch stage6and provides a prediction of whether those instructions are predicted to include branch instructions, and for any predicted branch instructions, a prediction of their branch properties such as a branch type, branch target address and branch direction (predicted branch outcome, indicating whether the branch is predicted to be taken or not taken). The branch predictor40includes a branch target buffer (BTB)42for predicting properties of the branches other than branch direction, and a branch direction predictor (BDP)44for predicting the not taken/taken outcome (branch direction). 
The branch predictor40also includes a polymorphic branch target address predictor46for predicting the target address of certain more-complex-to-predict branches which can have different target addresses on different instances of executing the branch. In contrast, the BTB42may be a simpler structure which records a single predicted target address per branch. One of the branch properties predicted by the BTB42could include a prediction of whether the target address for a given branch is better predicted using the polymorphic branch target address predictor46or whether the BTB prediction of the target address is sufficient. It will be appreciated that the branch predictor could also include other prediction structures, such as a call-return stack for predicting return addresses of function calls, a loop direction predictor for predicting when a loop controlling instruction will terminate a loop, or other more specialised types of branch prediction structures for predicting behaviour of outcomes in specific scenarios. The various components42,44,46of the branch predictor maintain tables of branch prediction state used to generate their predictions. Table updating circuitry60may update these tables based on branch outcomes (e.g. taken/not-taken, and target address) determined by the branch unit24for executed branch instructions. The apparatus2could also have other types of prediction circuitry, such as a data prefetcher50for predicting addresses of data likely to be requested from the memory system30,32,34by the load/store unit26in response to instructions, and prefetching data into the caches30,32from memory34in advance of such requests to reduce access latency, and/or a load value predictor52which predicts the data value of data being loaded from the memory system30,32,34before the data is actually returned, so that subsequent instructions can be executed speculatively based on the predicted data value. Similarly, on the instruction side, an instruction prefetcher54and/or instruction value predictor56can be provided to predict the addresses and encodings of instructions to be fetched by the fetch stage6. For all of the prediction structures40,50,52,54,56shown inFIG.1, if a prediction is correct, this will tend to improve performance by allowing other operations performed speculatively based on the prediction to be performed earlier. If the prediction turns out to be incorrect, the pipeline can be flushed of instructions which are potentially affected by the misprediction (e.g. the pipeline can be flushed of instructions from a point of program order at or after the mispredicted instruction) and processing may resume from a safe point of execution. Provided the mispredictions are sufficiently rare, processing performance as a whole may be faster despite the occasional misprediction. The table updating circuitry60can learn from previous mispredictions to adjust the prediction state used by the prediction structures to improve the likelihood of predictions being correct in future. 
While table updating circuitry60is shown explicitly only for the branch predictor, it will be appreciated that the other prediction structures may have similar circuitry for updating the prediction state used to make the predictions (in the case of the data prefetcher50, based on data access addresses calculated by the load/store unit26for executed instructions; for the load value predictor52based on the data values returned for load operations; for the instruction prefetcher54based on the fetch addresses calculated for the instructions being fetched by the stage6; and for the instruction value predictor56based on the loaded encodings of the fetched instructions). FIG.2schematically illustrates an example of processes which can be executed by a data processing apparatus in a number of execution states EL0, EL1, EL2, EL3, S-EL0, S-EL1associated with different levels of privilege. A hypervisor62may manage a number of virtual machines (VMs, also known as guest operating systems or guest OS)64. Each VM64may manage one or more applications66. For example the hypervisor62may control which regions of an address space are allocated to each virtual machine64and control switching between the virtual machines64, e.g. scheduling interrupts to time share processing resource between the respective virtual machines64. Similarly, each VM64may control which regions of the address space are allocated to each application66executing under that VM64, and may control switching between the applications as required. As shown inFIG.2, each process is associated with a given privilege level as determined by the execution state EL0, EL1, EL2, EL3in which the process is executed. In this example higher numbered privilege levels are more privileged than lower numbered privilege levels, although the numbering scheme could be the other way round in other examples. In this example, the applications66execute at privilege level EL0, the VMs64execute at privilege level EL1 and the hypervisor62executes at privilege level EL2. Typically, a process executing at a higher privilege level has rights not available to a process executing at a lower privilege level. As shown inFIG.2, the hypervisor62, VMs64and applications66may operate in a normal domain. In addition, the apparatus may support a secure domain which is partitioned from the normal domain so that processes executing in the normal domain cannot access data or instructions associated with the secure domain. Hence, there may also be processes running in the secure domain, such as a secure operating system (OS)70and trusted applications72executing in the secure domain under control of the secure OS70. The secure OS70and trusted applications72execute at privilege levels S-EL1, S-EL0respectively. WhileFIG.2does not show it, some implementations may also provide a secure hypervisor running in a “secure EL2” execution state (others may manage the secure OS70from the secure monitor code74at EL3without an intervening secure hypervisor, so those systems may not have a secure EL2execution state). The secure monitor process74is also provided at privilege level EL3to manage transitions between the normal domain and the secure domain. The secure monitor process74may for example manage which regions of the address space are associated with the secure or non-secure domains, with some protection hardware being provided to prevent non-secure processes in the normal domain accessing data or instructions within the secure regions.
An example of a technique for partitioning the normal and secure domains is the Trustzone® technique provided by ARM® Limited of Cambridge, UK, although other examples could also be used. The provision of a secure domain as shown inFIG.2is optional and other embodiments may not support the execution states for supporting the secure monitor74, secure OS70and trusted applications72for example. Hence, the processing circuitry has a number of execution states (e.g. corresponding to the combination of the exception level (EL) and security state (normal/secure domain)), which affects a level of privilege granted to instructions executing in those states. For example, the execution state may determine which types of instructions can be executed, which registers are readable, which registers are writable, and which memory locations can be read/written. The secure domain can be regarded as more privileged than the normal domain, and higher exception levels can be regarded as more privileged than lower exception levels. In general, software executing in a more privileged state may have access to some data not accessible to a less privileged state, either due to an inherent hardware-implemented control mechanism (not programmable based on software) which is controlled based on the current execution state according to rules defined in an instruction set architecture (e.g. an architectural restriction that a certain register is inaccessible in a certain execution state), or based on software-controlled information, such as page table permissions set in page tables to deny access to a certain memory address space region to a process executing at a less privileged state, with the enforcement of the page table permissions set by software being controlled in hardware by a memory management unit for example. FIG.3illustrates a portion of the apparatus ofFIG.1in more detail, showing polymorphic branch target prediction circuitry46(which is an example of the prediction circuitry mentioned earlier), branch history storage100and prediction control circuitry102which controls the operation of the polymorphic branch target prediction circuitry46with reference to a branch counter104. The branch history storage100is a record of branch properties of the N most recently encountered branches meeting any conditions required for allocation to the branch history storage (where N is a certain integer). In some examples, all branches may be considered to meet those conditions, in which case the branch history storage100simply tracks the most recent N branches. However, in other examples some other allocation conditions may be applied—e.g. limiting which types of branches are allocated to the branch history storage, in which case the N branches in the branch history storage100may be the most recent N branches meeting the allocation conditions. Each time a branch meeting any conditions required for allocation is encountered, a value derived from one or more properties of the executed branch is written to the branch history storage in the next available entry. 
Although it is possible to update the branch history storage100based on actual branch outcomes derived by the branch unit24in the execute stage16of the pipeline4, in practice the branch predictor may be operating a number of cycles ahead of execution at the execute stage16, and so to more accurately predict branch properties of a given branch based on behaviour of preceding branches which may not yet have reached the execute stage16, the update of the branch history storage100can be based on predicted branch properties of recently encountered branches which are based on earlier predictions of the branch predictor40and which may not yet have been verified as correct. If a branch misprediction is detected, the branch history storage100can be flushed of information allocated for the mispredicted branches and younger branches. The branch history storage operates in a first in first out (FIFO) manner, and so if there is no invalid entry available for allocation, then the branch property value written for the latest branch causes eviction of the branch property value for the least recent branch tracked in the branch history storage100. For example, the branch history storage can be operated as a circular buffer where a pointer indicates the next entry to be updated and the pointer is advanced each time a new branch is encountered, so that writing of information for a new branch may overwrite the information for the least recently allocated branch. Alternatively, the branch property could always be written to a predetermined location and the previous contents of the branch history storage can be shifted up one position causing the information for the least recent branch to be shifted out and discarded. Hence, there can be a number of different ways of implementing the circuitry for tracking the branch history, but in general branch history information indicating at least one branch property per branch is maintained for a sequence of branches, in a manner such that a value derived from the at least one branch property is separately represented for each of those branches and maintained in an order which corresponds to the order in which those branches are encountered in the program flow. The particular branch property used as the information updated in the branch history storage for a given branch can vary. In some examples, the branch property is the taken/not-taken outcome of the branch. In other examples, the branch property is a target address of the branch (the address to which the branch causes program flow to be diverted when the branch is taken). In some cases, both of these properties may be combined to form a value to be written to the branch history storage for the branch. Other properties could also be considered. However, in a relatively simple implementation, the branch property could simply be the taken/not-taken outcome of the branch so that the branch history storage provides a series of bits of one and zero indicating the pattern of taken/not-taken outcomes for the most recent N branches meeting the requirements for allocation to the branch history storage. The branch history storage100can also be referred to as a global history register (GHR) because it provides a measure of the overall program flow through a program being executed, rather than attempting to track state for any particular branch at a given program counter address. 
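A software model of such a global history register, assuming a shift-register implementation updated with predicted taken/not-taken outcomes, is sketched below. The checkpoint/restore handling of mispredictions is one plausible realisation of the flush described above, not necessarily the disclosed one, and the class and method names are invented.

```cpp
#include <bitset>
#include <cstddef>

// Illustrative global history register of N taken/not-taken bits, operated as
// a shift register: the newest branch occupies bit 0 and the oldest bit is
// discarded on each update.
template <std::size_t N>
class GlobalHistoryRegister {
public:
    // Update based on the predicted outcome of a newly encountered branch
    // (predictions, not resolved outcomes, since the predictor runs ahead).
    void onPredictedBranch(bool predictedTaken) {
        history_ <<= 1;
        history_.set(0, predictedTaken);
    }

    // Checkpoint taken when a branch is predicted, so the history can later be
    // restored if that branch turns out to be mispredicted (one way to realise
    // the "flush" of the mispredicted branch and younger branches).
    std::bitset<N> snapshot() const { return history_; }
    void restore(const std::bitset<N>& checkpoint) { history_ = checkpoint; }

    // Portion of history covering the most recent `bits` branches (bits < 64
    // assumed), e.g. for hashing into a prediction table lookup.
    unsigned long long recent(std::size_t bits) const {
        const std::bitset<N> mask((1ULL << bits) - 1);
        return (history_ & mask).to_ullong();
    }

private:
    std::bitset<N> history_;
};
```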
Hence, which location of the branch history storage is updated for the latest branch encountered in program flow may be independent of the program counter address of that latest branch. This differs from other tables of branch prediction state which may be maintained for the branch prediction components such as the BTB42, branch direction predictor44and polymorphic branch target prediction circuitry46, since such tables may typically be looked up based on a value derived from the program counter address, and when the table is updated by the table updating circuitry60based on the branch outcome derived for a particular executed branch, the table updating circuitry60will select a particular entry to update based on the program counter address of that branch. Hence, the prediction tables used by components42,44,46may be tables of local branch history comprising entries which each relate to behaviour for a specific branch having a specific target address (or a block of branches in a certain region of the address space), in contrast to the global history in branch history storage100which is tracking a history of branches for the program as a whole, regardless of which particular branches (at any particular program counter addresses) were executed. The branch history information maintained in the branch history storage100can be useful as information for deriving a value for looking up the local history tables maintained by the branch predictor components42,44,46, so that an entry specific to a recent pattern of branch outcomes can be selected and so different paths of program flow to the same branch can be distinguished to make different predictions depending on the particular route taken through the program to arrive at the branch being predicted. This can be particularly useful for the polymorphic branch target prediction circuitry46and the branch direction predictor44, in comparison to the BTB42, since the BTB42may be used for predicting static properties of simpler branches (e.g. branch type, which depends solely on the branch instruction encoding, or target addresses of simpler branches which always jump to the same target address). In this example, the polymorphic branch target prediction circuitry46is a tagged-geometric (ITTAGE) predictor, which forms a prediction of the branch target address for an instruction at a given program counter address (or, in some implementations, predicts the branch target address of the first taken branch in a region of addresses corresponding to the given program counter address, if lookups are grouped by instruction address region). The prediction is based on multiple tagged-geometric prediction tables110,112, . . .114and a base prediction table120. There are M tagged-geometric prediction tables in total (where M is any integer greater than 1)—FIG.3only shows 3 of the tagged-geometric prediction tables for conciseness. In each table120,110-114, there are a number of prediction entries, each specifying a tag value124, a context identifier126and a predicted target address128(other information not shown inFIG.3could also be specified by each entry). The tables120,110-114are looked up based on different lookup values130,132,134,136respectively, each lookup value130-136being derived from a different combination of information. For the base table120, the lookup value130is based on a hash of the program counter address of the instruction or instruction block for which the prediction is being made. For each of the tagged-geometric prediction tables110,112, . . .
,114, the lookup value132,134,136is based on a hash of the program counter address with respective portions GHR0, GHR1, . . . , GHR(M−1) of the branch history information stored in the branch history storage100. The respective portions GHR0to GHR(M−1) of branch history information are of successively increasing length (corresponding to successively greater numbers of recent branches). Hence, portion GHR0used for table T0110corresponds to a certain number X1of branches, portion GHR1used for table T1112corresponds to a certain number X2of branches (where X2>X1), and so on until portion GHR(M−1) used for table T(M−1)114corresponds to a number X(M−1) (greater than X1, X2, etc.) of branches (typically the number of branches represented by the entire contents of the branch history storage100). If the branch history storage100is operated as a circular buffer, the start point for reading each portion of branch history is the point indicated by the buffer pointer as representing the location storing the information for the least recently allocated branch, and the portions of branch history read out may wrap around the beginning of the buffer if the required portion of branch history is longer than the portion between the pointer-indicated location and the end of the buffer. Alternatively, it may be simpler to operate the branch history storage100as a shift register which shifts all previously allocated branch history information up one position when new information is inserted into the storage100—in the case of using a shift register, the portion to be read out for hashing in the lookup of each tagged-geometric table110,112,114can start from the same location in the buffer each cycle, rather than needing to read a pointer value. Nevertheless, both implementations are possible. Hence, each of the tables120,110,112,114is looked up based on its corresponding lookup value130,132,134,136. The lookup of each table110,112,114,120depends on both a context identifier comparison and a tag comparison, with the comparisons performed on one or more entries of each table. The number of entries looked up in a given one of the tables depends on the lookup scheme used for that table. For a direct-mapped scheme only a single entry of the given table needs to be looked up, with the entry to use selected based on a portion of the lookup value and the tag124compared with a remaining portion of the lookup value. For a set-associative scheme, a set of two or more entries of the given table (not all the entries) is selected based on a portion of the lookup value and the tag124of those entries is compared with a remaining portion of the lookup value. For a fully-associative scheme, all the entries of the given table are looked up, and the lookup value130,132,134,136is compared with the tag value of all entries of the given table. The context identifier comparison compares a current context identifier identifying the current execution context with the context field126of each looked up entry. The context field126is set based on the current context in which instructions are being executed at the time the entry is allocated by table updating circuitry60. The current context identifier used for the lookup is based on the current context at the time of the lookup. These context identifiers could for example be an indication of the exception level EL, or a context identifier (e.g. 
thread identifier, address space identifier or virtual machine identifier, or a combination of more than one identifier) identifying a specific execution context such as one of the hypervisor62, secure monitor74, VMs64, secure OS70, or applications66,72. Hence, for a given table lookup, a hit is detected when one of the looked up entries encounters both a tag match in the tag comparison and a context match in the context comparison. An entry that encounters only one of the tag match and the context match but does not have a match for the other of the tag and context comparisons is detected as missing against the lookup. Filtering lookups based on the context comparison can be useful to protect against some variants of speculative side-channel attacks such as Spectre, by preventing entries allocated for one context being used to provide predictions for another (possibly more privileged) context. For each table120,110,112,114, the predicted target address128specified by an entry for which a hit was detected (if any) is provided to prediction selection circuitry140, together with a hit indication142indicating whether any hit was detected in the lookup of the corresponding table. The prediction selection circuitry140also receives enable signals144,146, . . . ,148corresponding to each tagged-geometric prediction table110,112, . . . ,114, indicating whether predictions based on the corresponding tagged-geometric prediction table are enabled. The generation of these enable signals144,146,148is described further below. The base prediction table120can be considered to be always enabled, so there is no corresponding enable signal for the base prediction table120. The prediction selection circuitry140selects a target address from among the predicted target addresses128output by the tables120,110,112, . . .114. Any tables which did not generate a hit or which are currently disabled are discounted from the selection, so only target addresses output by enabled tables which generated a hit in the lookup can be selected as the target address prediction150output by the polymorphic branch target prediction circuitry46. Among those tables which are both enabled and encountered a hit, the selection circuitry140selects the target address output by the one of the enabled/hit tagged-geometric tables that was looked up based on the longest sequence of branch history, and if none of the enabled tagged-geometric tables detect a hit, and the base prediction table120provided a hit, then the target address128output by the base prediction table120is selected. Hence, the order of preference for selecting the prediction is:select the target address128predicted by the longest-history-sequence tagged-geometric table114, T(M−1), if tagged-geometric table T(M−1) is enabled and detected a hit;if tagged-geometric table T(M−1)114did not detect a hit or was disabled, select the target address128predicted by the next longest-history-sequence tagged-geometric table T(M−2), if tagged-geometric table T(M−2) is enabled and detected a hit;and so on for each successive table looked up based on the next shortest sequence of history . . 
.if tagged-geometric table T2did not detect a hit or was disabled, select the target address128predicted by the second-shortest-history-sequence tagged-geometric table T1112, if tagged-geometric table T1is enabled and detected a hit;if tagged-geometric table T1did not detect a hit or was disabled, select the target address128predicted by the shortest-history-sequence tagged-geometric table T0110, if table T0is enabled and detected a hit;if tagged-geometric table T0did not detect a hit or was disabled, and the base prediction table120detects a hit, select the target address128predicted by the base prediction table120.if none of the tagged-geometric tables T0. . . T(M−1) are both enabled and output a hit, and the base prediction table120did not detect a hit, then no target address prediction is possible using predictor46. The branch predictor40can either fall back on a target address prediction made by the BTB42, or if no target address prediction is available at all, can predict that any branch, if present, would be not-taken and so allow the fetch stage6to continue to fetch instructions sequentially. The tagged-geometric approach is useful because a table looked up based on a single branch history would have to compromise on the length of branch history100used for the lookup. If the length of branch history is too short, the predictor may not be able to distinguish different outcomes for the same branch which follow different patterns of branch history preceding the branch which share the same pattern for the shorter sequence of immediate branch history corresponding to the length of the history portion used for the lookup, but which differ in branch properties for branches further away in time which could have been distinguished using a portion of the branch history100not used in the short branch history portion used for the lookup. If the length of branch history used for the lookup is too long, while occasionally the longer branch history sequence can help to more accurately predict branches whose outcome depends on branches a longer time ago, other branches which depend only on more recent branches may fail to be predicted accurately because of irrelevant differences in branch properties recorded in the portion of the branch history storage100used for the lookup relating to branches which are less recent. By providing tables looked up based on branch history portions of different lengths, and choosing the prediction corresponding to the longest sequence of branch history that causes a hit to be generated in the tagged-geometric table110-114, and falling back to the base prediction table120if none of the tagged-geometric tables generates a hit, then this enables much greater prediction accuracy as it enables both branches which depend only on very recent branch history and branches which depend on less recent branch history to be predicted based on the different prediction tables110-114,120. As discussed above, the execution states of the processing circuitry4may be assigned different privileges and the privilege-based control mechanism may be used to restrict access to certain secure resources (e.g. program code or data in memory) to prevent, for example, user code executing at EL0from accessing kernel resources associated with an operating system executing at EL1.
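Purely as an illustrative, non-limiting software sketch of the lookup and selection behaviour described above (this code is not part of the original disclosure), the fragment below models the branch history storage100as a shift register, derives a lookup value for each tagged-geometric table by folding the program counter address with a successively longer portion of that history, requires both a tag match and a context match for a hit, and then applies the order of preference in which the enabled, hitting table looked up with the longest history wins and the base table is the fallback. The history lengths, the one-bit-per-branch history encoding, the XOR-fold hash and the field widths are assumptions chosen only for this sketch.

#include <stdint.h>
#include <stdbool.h>

#define NUM_TAGE_TABLES 3                 /* M tagged-geometric tables (illustrative) */

/* Assumed lengths, in branches, of the history portions GHR0..GHR(M-1). */
static const int history_len[NUM_TAGE_TABLES] = { 5, 10, 20 };

typedef struct {
    uint64_t bits;   /* one taken/not-taken bit per branch; newest branch in bit 0 */
} global_history_t;

/* First-in-first-out update: the location written for the newest branch does not
   depend on that branch's program counter address. */
static void history_insert(global_history_t *gh, int taken)
{
    gh->bits = (gh->bits << 1) | (uint64_t)(taken & 1);
}

/* Lookup value for tagged-geometric table t: fold the PC with the youngest
   history_len[t] bits of global history (an illustrative stand-in for the hash
   used to form lookup values 132, 134, 136). */
static uint32_t lookup_hash(const global_history_t *gh, uint64_t pc, int t)
{
    uint64_t mask = (history_len[t] >= 64) ? ~0ULL : ((1ULL << history_len[t]) - 1);
    uint64_t portion = gh->bits & mask;
    uint64_t h = pc ^ (pc >> 13) ^ portion ^ (portion << 7);
    return (uint32_t)(h ^ (h >> 32));
}

typedef struct {
    uint32_t tag;        /* tag value 124 */
    uint32_t context;    /* context identifier 126 */
    uint64_t target;     /* predicted target address 128 */
    bool     valid;
} pred_entry_t;

/* A hit requires both a tag match and a context match; one without the other is a miss. */
static bool entry_hits(const pred_entry_t *e, uint32_t tag, uint32_t current_ctx)
{
    return e->valid && e->tag == tag && e->context == current_ctx;
}

typedef struct {
    bool     hit;        /* hit indication 142 from the table lookup */
    bool     enabled;    /* enable signal 144, 146, 148 for this table */
    uint64_t target;
} table_result_t;

/* Order of preference: the longest-history enabled and hitting tagged-geometric
   table, then the always-enabled base table, otherwise no target prediction
   (the caller may fall back on the BTB or predict not-taken). */
static bool select_target(const table_result_t tage[NUM_TAGE_TABLES],
                          const table_result_t *base, uint64_t *out_target)
{
    for (int t = NUM_TAGE_TABLES - 1; t >= 0; t--) {
        if (tage[t].enabled && tage[t].hit) {
            *out_target = tage[t].target;
            return true;
        }
    }
    if (base->hit) {
        *out_target = base->target;
        return true;
    }
    return false;
}

In this sketch the selection loop runs from the longest-history table downwards, so the first enabled hit found is automatically the preferred one.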
In recent years, a type of security attack (commonly known as Spectre) has been described which attempts to gain access to the kernel resources from user code operated by an attacker, by exploiting the property that the effects of speculatively executed instructions (e.g. instructions executed speculatively after a branch prediction) may persist in the cache even after any architectural effects of the speculatively executed instructions have been reversed following a misspeculation. A number of variants of such attacks have been described. Such attacks may train branch predictors or other speculation mechanisms to trick more privileged code into speculatively executing a sequence of instructions designed to make the privileged code access a pattern of memory addresses dependent on sensitive information, so that less privileged code which does not have access to that sensitive information can use cache timing side-channels (measurements of the time taken to access data/instructions for various memory addresses) to probe which addresses have been allocated to, or evicted from, the cache by the more privileged code, to give some information which could allow the sensitive information to be deduced. Some initially proposed variants of the Spectre attack were based on the fact that many branch predictors share prediction state entries between less privileged and more privileged execution contexts, so that a branch in a more privileged execution context may have its target address predicted based on a prediction state entry trained based on branches executed in a less privileged execution context, so that the more privileged branch is mispredicted and causes instructions to be executed from an incorrect branch target address causing an attacker-controlled “gadget”—code designed to expose the sensitive information—to be executed in the more privileged execution context to cause information with an address dependent on the sensitive information to be allocated into the cache. A number of hardware and software mitigations against such attacks are possible, but one defence is as shown inFIG.3, to tag prediction state entries with the context identifier126and to perform a context identifier comparison between a context identifier of the current execution state of the processing circuitry4and the context identifier tagged for a given branch prediction state entry, so that a hit is detected only when the context identifiers match. This avoids branch prediction state trained by the attacker's user-level program code at EL0being used to predict target addresses for branches in kernel-level program code at EL1. However, a new variant of the Spectre attack has recently been published, referred to as Spectre-BHB or “branch history injection” (BHI), which exploits the branch history register100to influence the indirect prediction of target addresses of polymorphic branches in kernel-level program code (code at EL1), to cause one branch in the EL1 program code to be incorrectly predicted as using the target address of another branch of the EL1 program code, which while a legitimate target for that other branch would not be a safe target for the first branch. FIG.4schematically illustrates an example of this attack. The kernel code160includes a number of branches including branch X (BR_X)162and branch Y (BR_Y)164.
Branch X is a branch expected to be executed relatively shortly after an entry point into kernel code160from user-level code operating at EL0, and so is protected by surrounding the branch with some other instructions designed to reduce the likelihood of attacks like Spectre (e.g. as branch X162is considered relatively vulnerable given its proximity to the entry point from user-level code, the branch X162may be associated with a speculative barrier instruction to prevent subsequent instructions being speculatively executed based on the branch outcome, to prevent cache allocation of information following the branch until the correct branch outcome has been resolved). However, the performance cost of providing such protections for every branch of the kernel-level code160may be too high and so other branches, such as branch Y, which are not expected to be executed shortly after the entry point from user-level code, may be unprotected. Hence, the branch X162may have a number of legitimate safe target addresses, T_X0and T_X1, which the polymorphic branch target prediction circuitry46can learn to predict through training based on previous outcomes of executing branch X162, but the legitimate target addresses T_Y0and T_Y1of branch Y164may be considered legitimate unsafe target addresses as branch Y is not associated with the same protections as branch X. As shown at the top ofFIG.4, the polymorphic branch target prediction circuitry46may have been trained, based on legitimate execution of instructions from the kernel-level code160, to allocate a prediction entry for branch Y with a certain value, e.g. 0xBC, of the tag124(computed based on the hash of the PC of branch Y and a pattern of branch history from register100). This entry is tagged with the EL1 context identifier and specifies a predicted target address128of T_Y1, which is one of the legitimate targets of branch Y. Similarly, the legitimate training of the polymorphic branch target prediction circuitry46causes another entry to be allocated for branch X, tagged with the EL1 context identifier, a tag value, e.g. 0xF4 (derived from the PC of branch X and a pattern of branch history from register100that was seen preceding branch X) and the predicted target address of T_X1, which is one of the legitimate targets of branch X. However, the attacker controls the user-level code operating at EL0to execute a software routine designed to cause a sequence of branches with a certain pattern of branch properties (e.g. pattern of taken/not-taken outcomes and/or target addresses) to be executed, which causes the history register100to be filled with the corresponding sequence of branch properties, so that when the attacker code at EL0makes a supervisor call to trigger a switch to the kernel-level code operating at EL1, the lookup of prediction state performed for branch X162of the kernel level code executed soon after the supervisor call is based on a hash value132,134,136derived from a portion of branch history, a significant portion of which is based on outcomes of branches executed in the attacker's code at EL0.
If the attacker can carefully control the sequence of branch properties allocated to the history register, the attacker can cause the hash value132,134or136generated based on the PC of branch X and the EL0-allocated sequence of branch history in register100to match the tag value 0xBC in the entry168allocated in the prediction tables110-114for branch Y, causing the unsafe target address T_Y1to incorrectly be predicted as the target address of branch X (even if a few of the branches used in that portion of branch history are branches executed in EL1 after the execution state switch, if those branches tend to have relatively consistent outcomes then the lookup will be more influenced by the behaviour in EL0than in EL1 around the execution state switch). By causing the kernel-level code to execute in a sequence not expected by the developer of the kernel-level code, the kernel-level code's own instructions could be used as a gadget by the attacker to cause sensitive information not directly accessible to the attacker to be accessed based on the kernel's level of privilege. This may leak information to the attacker if addresses dependent on that sensitive information are allocated to the cache and the addresses allocated to the cache can subsequently be probed by cache timing measurements. As this misprediction is based on a lookup for one branch hitting against an entry allocated for another branch in the same execution state, the context identifier comparison using context tag126would not detect any mismatch. While this may be a more sophisticated attack which is harder to mount by an attacker than the originally disclosed Spectre variants, because it relies on the attacker finding existing vulnerable code within the kernel-level program code which is a valid branch target for some branches of the kernel code but could incorrectly be executed following a branch misprediction of another branch of the kernel code (rather than the attacker being able to force execution of arbitrary attacker supplied code), and on the gadget at the incorrect target address being such that it is exploitable to leak sensitive information, this attack has been demonstrated in practice. One approach to defending against this attack could be to use full tagging of the entries in the prediction circuitry46based on the PC address of the looked up branch, rather than using a hash132,134,136of the PC with fewer bits which permits aliasing where different PCs can map to the same hash/tag values. However, more precise tagging would incur a significant circuit area penalty because each entry of the prediction tables would have to be much larger (as well as having wider comparison logic for the tag comparisons). Another approach can be to remove the global history input into the hash used to generate the lookup value (effectively predicting the target address based only on the base predictor120). However, this would again incur a significant performance penalty, because the global history value is useful for distinguishing different program flow paths to the same branch which may cause different target addresses to be calculated depending on data arising from those earlier program flow paths, and so use of the global history value in the hash132,134,136calculated for looking up prediction state can be extremely beneficial for improving prediction accuracy. Another approach can be to completely flush the contents of the local branch prediction tables110,112,114, when switching from a less privileged state (e.g.
EL0) to a more privileged state (e.g. EL1). However, this would have a drastic effect on performance, causing a great slowdown because all the information learned from previous branches will be lost on a supervisor call, causing branches to be mispredicted for a long period afterwards. There is also a performance overhead because invalidation of table entries takes some time. Therefore, this would be undesirable. Another approach can be to clear the contents of the branch history storage100when the supervisor call is made from the less privileged execution state (e.g. EL0) to the more privileged state (e.g. EL1). However, again this would have an effect of reducing performance because, firstly, many supervisor calls only cause the EL1 code to be executed for a relatively short time before switching back to EL0, and following the return to EL0, the information on previous branch history associated with the earlier period of execution of EL0may still be in the history register and may be useful for predicting outcomes of subsequent branches in EL0. Also, even while executing branches in EL1 following the supervisor call, in some scenarios branch predictor accuracy may be higher if information allocated by EL0can be considered, because the behaviour of a branch in EL1 executed shortly after an entry point from EL0may depend on the location in the EL0code from which the supervisor call was made, which could be distinguished based on branch history of previous branches executed by EL0. Also, there are aspects of branch prediction which can safely be predicted based on branch history allocated by EL0, without risk of the Spectre-BHB attack. For example, the taken/not-taken outcome prediction made by the branch direction predictor44may (provided the branch target address prediction is not successfully attacked) not be at risk of causing a vulnerable gadget to be executed because it merely controls whether the next instruction executed after branch X166is the sequential instruction following branch X166or one of the legitimate safe targets T_X0, T_X1. If the contents of the branch history storage100were flushed on each supervisor call, this would reduce the accuracy of the branch direction prediction of a branch following the supervisor call. From analysis of typical software workloads, it has been identified that supervisor calls may occur relatively frequently in some workloads (e.g. every few thousand processing cycles) and so flushing the global history100on each supervisor call would have a negative impact on performance. Instead, the prediction control circuitry102protects against the Spectre-BHB attack in a different way. On a transition from a less privileged execution state (e.g. EL0) to a more privileged execution state (e.g. EL1), the contents of the branch history storage100are not changed, and so the global branch history is left as it is (including any branch property information which may have been maliciously trained by an attacker). Instead, the prediction control circuitry102uses the branch counter104to count how many branches have had branch properties allocated into the branch history storage since the change of execution state. The prediction control circuitry102resets the branch counter104to an initial value (reset value) in response to the execution state switch, and then the branch counter104is advanced (e.g. incremented or decremented) each time a subsequent branch causes an update to the branch history storage.
The prediction control circuitry102then controls generation of the enable signals144,146,148for the tagged-geometric tables110,112,114so that these prediction resources are disabled in response to the execution state switch, but subsequently re-enabled selectively once the used portion GHR0, GHR1, . . . , GHR(M−1) for the respective tables has become “safe”, that is when the counter104indicates that a sufficient number of branches have been encountered since the execution state switch that the corresponding portion of branch history used for looking up that table represents only outcomes of branches executed since the execution state switch. Hence, as shown inFIG.5, for an implementation with three tagged-geometric tables T0, T1, T2(110,112,114) looked up based on portions of branch history corresponding to 5 branches, 10 branches and 20 branches respectively, all of these tables T0, T1, T2can initially be disabled in response to the execution state switch, but table T0110can be re-enabled when the counter104indicates that the number of branches seen since the execution state switch is 5 or more, table T1112can be re-enabled when the counter104indicates that the number of branches seen since the execution state switch is 10 or more, and table T2114can be re-enabled when the counter104indicates that the number of branches seen since the execution state switch is 20 or more. Also, all of the tables can be re-enabled if there is a subsequent switch back to the less privileged execution state EL0. Hence, in the scenario shown inFIG.5, where the number of branches indicated by the counter104is 6, then table T0110is currently enabled but tables T1and T2112,114are currently disabled. Hence, as shown inFIG.6, following the execution state switch from EL0to EL1, all of the tagged-geometric prediction tables T0, T1, T2, etc. which are looked up based on the global branch history in the branch history storage100are temporarily disabled. Gradually, as the number of branches executed in EL1 increases, each of the tagged-geometric tables T0, T1, T2is successively re-enabled in ascending order of the length of history used for lookup. Hence, performance recovery is gradual and allows each prediction resource to be re-enabled as soon as it is safe to do so. Meanwhile, the use of the base prediction table120(which does not depend on branch history information from the branch history storage100) remains enabled following the execution state switch. Also, the branch direction predictor44(which, given that branch target prediction has now been made safe, can safely make predictions based on the global branch history of branch history storage100even when an attacker maliciously trains that history) remains enabled following the execution state switch and so does not need to suffer in terms of performance, as would be the case for the alternative approaches discussed above where the global branch history100or prediction tables used by the branch direction predictor44are flushed in response to the execution state switch.
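As a further non-limiting sketch of the enable-signal generation just described (the table count and the thresholds of 5, 10 and 20 branches are taken from the worked example above rather than being required values, and the structure and function names are invented for illustration), the prediction control could be modelled as follows: the counter is reset on the switch to the more privileged state, advanced on every branch that updates the history storage, and each table is enabled only once the counter covers the history portion that table is looked up with.

#include <stdbool.h>

#define NUM_TAGE_TABLES 3
static const int table_history_len[NUM_TAGE_TABLES] = { 5, 10, 20 };

typedef struct {
    int  branches_since_switch;  /* branch counter 104 */
    bool in_privileged_state;    /* true between the EL0-to-EL1 switch and the return */
} pred_ctrl_t;

/* Switch to the more privileged state: reset the counter; the branch history
   storage itself is left unchanged (no flush). */
static void on_switch_to_privileged(pred_ctrl_t *c)
{
    c->branches_since_switch = 0;
    c->in_privileged_state = true;
}

/* Return to the less privileged state: all tables may be used again. */
static void on_return_to_unprivileged(pred_ctrl_t *c)
{
    c->in_privileged_state = false;
}

/* Each branch whose properties are allocated into the history storage advances the counter. */
static void on_history_update(pred_ctrl_t *c)
{
    c->branches_since_switch++;
}

/* Enable signal for table t: the table becomes usable once enough post-switch
   branches cover the history portion it is looked up with, so its lookup no
   longer depends on history allocated before the switch. */
static bool table_enable(const pred_ctrl_t *c, int t)
{
    return !c->in_privileged_state ||
           c->branches_since_switch >= table_history_len[t];
}

With this arrangement, re-enabling in ascending order of history length falls out naturally, since the shorter thresholds are crossed first.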
For example, other prediction structures, such as the data prefetcher50, load value predictor52, instruction prefetcher54and instruction value predictor56, could also use a portion of branch history read from the branch history storage100to look up prediction state and so could be vulnerable to similar attacks to the Spectre-BHB attack discussed above. For example, a TAGE predictor (a tagged-geometric predictor used to predict branch direction—taken or not-taken outcome) or a VTAGE predictor (a tagged-geometric predictor used as the load value predictor52or instruction value predictor56) could make use of these techniques. A tagged-geometric predictor could be any of TAGE, VTAGE or ITTAGE for example. In any of these examples, those prediction circuits could also be provided with prediction control circuitry102to selectively disable/enable use of predictions based on prediction state looked up based on branch history information from the global history register100, with the disable/enable control based on whether the number of branches executed since the execution state switch to a more privileged execution state has exceeded the number of branches corresponding to the size of the portion of branch history information used for the lookup. Also, while the technique is particularly useful for tagged-geometric predictors with a number of tagged-geometric tables looked up based on successively longer portions of branch history, the technique can also be used for a predictor which only has one prediction table looked up based on a single fixed size portion of branch history, with the branch counter104being used to determine when the number of branches encountered since the execution state switch reaches the number of branches represented by that fixed size portion of branch history, at which point the use of that prediction table can be re-enabled. Hence,FIG.7illustrates a method for a data processing system which has prediction circuitry which determines a first type of prediction (e.g. branch target address prediction by polymorphic branch target address predictor46, prefetch prediction by data/instruction prefetcher50or54, or value prediction by data/instruction value predictor52,56) based at least on a first prediction table (e.g. one of tagged-geometric tables110,112,114) storing prediction information looked up based on at least a first portion of the branch history information corresponding to a first predetermined number of branches. At step200, instructions are executed by the processing circuitry4of the data processing system2. At step202, the prediction control circuitry102detects whether an execution state switch has occurred from a first execution state (e.g. EL0) to a second execution state (e.g. EL1) having greater privilege than the first execution state. If no such execution state switch is detected then instruction execution and use of prediction resources continues as normal. If an execution state switch to an execution state with greater privilege is detected, then at step204the prediction control circuitry102disables use of the first prediction table for generating the first type of prediction. A second prediction table (or further prediction table) whose lookup is based on branch history information from storage100may also be disabled. A second type of prediction (e.g. branch direction prediction44) may remain enabled despite being looked up based on global branch history100allocated before the execution state switch.
There is no need for the prediction control circuitry102to trigger any flushing or invalidation of global branch history allocated in the branch history storage100before the execution state switch. The prediction control circuitry102resets the branch counter104and the branch counter104starts to count branches executed following the execution state switch which have caused an update to the branch history storage100. At step206, the prediction control circuitry102determines whether the number of branches for which at least one branch property was allocated to the branch history storage100is greater than or equal to the first predetermined number of branches corresponding to the size of the portion of branch history used for the lookup of the first prediction table. If not, then the prediction control circuitry102continues to wait for the number of branches to reach the first predetermined number. Once the number of branches causing an update to the branch history storage100since the execution state switch reaches the first predetermined number, then at step208use of the first prediction table for generating the first type of prediction is re-enabled. If there is more than one prediction table which is looked up based on different sized portions of branch history from storage100, then those tables are re-enabled in response to the number of branches counted by branch counter104reaching different thresholds corresponding to the size of the respective portions of branch history used for the lookup. Hence, use of a second prediction table for generating the first type of prediction may be re-enabled when the branch counter104indicates that the number of branches causing an update to the branch history storage100since the execution state switch exceeds a second predetermined number (which may be greater than the first predetermined number used for the first prediction table). Also, while not shown inFIG.7, if there is a return to the first execution state while any of the prediction tables are still disabled because the number of branches counted by branch counter104has not yet reached the corresponding threshold for that table to be re-enabled, then use of that prediction table can be re-enabled in response to the return to the first execution state. Concepts described herein may be embodied in computer-readable code for fabrication of an apparatus that embodies the described concepts. For example, the computer-readable code can be used at one or more stages of a semiconductor design and fabrication process, including an electronic design automation (EDA) stage, to fabricate an integrated circuit comprising the apparatus embodying the concepts. The above computer-readable code may additionally or alternatively enable the definition, modelling, simulation, verification and/or testing of an apparatus embodying the concepts described herein. For example, the computer-readable code for fabrication of an apparatus embodying the concepts described herein can be embodied in code defining a hardware description language (HDL) representation of the concepts. For example, the code may define a register-transfer-level (RTL) abstraction of one or more logic circuits for defining an apparatus embodying the concepts. The code may define a HDL representation of the one or more logic circuits embodying the apparatus in Verilog, SystemVerilog, Chisel, or VHDL (Very High-Speed Integrated Circuit Hardware Description Language) as well as intermediate representations such as FIRRTL.
Computer-readable code may provide definitions embodying the concept using system-level modelling languages such as SystemC and SystemVerilog or other behavioural representations of the concepts that can be interpreted by a computer to enable simulation, functional and/or formal verification, and testing of the concepts. Additionally or alternatively, the computer-readable code may define a low-level description of integrated circuit components that embody concepts described herein, such as one or more netlists or integrated circuit layout definitions, including representations such as GDSII. The one or more netlists or other computer-readable representation of integrated circuit components may be generated by applying one or more logic synthesis processes to an RTL representation to generate definitions for use in fabrication of an apparatus embodying the invention. Alternatively or additionally, the one or more logic synthesis processes can generate from the computer-readable code a bitstream to be loaded into a field programmable gate array (FPGA) to configure the FPGA to embody the described concepts. The FPGA may be deployed for the purposes of verification and test of the concepts prior to fabrication in an integrated circuit or the FPGA may be deployed in a product directly. The computer-readable code may comprise a mix of code representations for fabrication of an apparatus, for example including a mix of one or more of an RTL representation, a netlist representation, or another computer-readable definition to be used in a semiconductor design and fabrication process to fabricate an apparatus embodying the invention. Alternatively or additionally, the concept may be defined in a combination of a computer-readable definition to be used in a semiconductor design and fabrication process to fabricate an apparatus and computer-readable code defining instructions which are to be executed by the defined apparatus once fabricated. Such computer-readable code can be disposed in any known transitory computer-readable medium (such as wired or wireless transmission of code over a network) or non-transitory computer-readable medium such as semiconductor, magnetic disk, or optical disc. An integrated circuit fabricated using the computer-readable code may comprise components such as one or more of a central processing unit, graphics processing unit, neural processing unit, digital signal processor or other components that individually or collectively embody the concept. Various examples are set out in the clauses below:1. 
An apparatus comprising:processing circuitry having a plurality of execution states for execution of instructions;branch history storage to store branch history information indicative of at least one branch property for a sequence of branches;prediction circuitry to determine a prediction for controlling execution of at least one instruction by the processing circuitry, where the prediction circuitry is configured to determine a first type of prediction based at least on a first prediction table storing prediction information looked up based on at least a first portion of the branch history information corresponding to a first predetermined number of branches; andprediction control circuitry to:in response to detecting an execution state switch of the processing circuitry from a first execution state to a second execution state more privileged than the first execution state, disable use of the first prediction table in determining the first type of prediction; andin response to detecting that a number of branches causing an update to the branch history storage since the execution state switch is greater than or equal to the first predetermined number, re-enable use of the first prediction table in determining the first type of prediction.2. The apparatus according to clause 1, in which in response to a return to the first execution state occurring after the execution state switch when the number of branches causing an update to the branch history storage since the execution state switch is still less than the first predetermined number, the prediction control circuitry is configured to re-enable use of the first prediction table in determining the first type of prediction.3. The apparatus according to any of clauses 1 and 2, in which the prediction circuitry is configured to determine the first type of prediction based on at least the first prediction table and a second prediction table storing prediction information looked up based on at least a second portion of the branch history information corresponding to a second predetermined number of branches, where the second predetermined number is greater than the first predetermined number.4. The apparatus according to clause 3, in which the prediction control circuitry is configured to: in response to detecting the execution state switch, disable use of the second prediction table in determining the first type of prediction; and in response to detecting that the number of branches causing an update to the branch history storage since the execution state switch is greater than or equal to the second predetermined number, re-enable use of the second prediction table in determining the first type of prediction.5. The apparatus according to any of clauses 3 and 4, in which the prediction control circuitry is configured to: in response to detecting the execution state switch, disable use of the second prediction table in determining the first type of prediction; and re-enable use of the first prediction table for determining the first type of prediction earlier than re-enabling use of the second prediction table for determining the first type of prediction.6. 
The apparatus according to any preceding clause, in which the prediction circuitry is configured to determine the first type of prediction based on a plurality of tagged-geometric prediction tables, including the first prediction table, looked up based on respective portions of the branch history information corresponding to successively increasing numbers of branches, wherein the prediction circuitry is configured to select, as the first type of prediction, a prediction based on the tagged-geometric prediction table which, among the tagged-geometric prediction tables currently enabled for use and which detect a lookup hit, is looked up based on a portion of branch history information corresponding to the greatest number of branches; and following the execution state switch, the prediction control circuitry is configured to gradually re-enable use of the respective tagged-geometric prediction tables in ascending order of the number of branches corresponding to the respective portions of the branch history information used for looking up the respective tagged-geometric prediction tables.7. The apparatus according to any preceding clause, in which the prediction circuitry is configured to determine a second type of prediction depending on at least a portion of the branch history information; and following the execution state switch, the prediction control circuitry is configured to enable use of said at least a portion of the branch history information for determining the second type of prediction, even when use of the first prediction table for determining the first type of prediction is disabled.8. The apparatus according to clause 7, in which, following the execution state switch, the prediction control circuitry is configured to enable use of said at least a portion of the branch history information for determining the second type of prediction, independent of the number of branches causing an update to the branch history storage since the execution state switch.9. The apparatus according to any of clauses 7 and 8, in which: the first type of prediction comprises a prediction of a branch target address; and the second type of prediction comprises a prediction of whether a branch is taken or not-taken.10. The apparatus according to any preceding clause, comprising a branch counter to count the number of branches causing an update to the branch history storage; in which: the prediction control circuitry is configured to reset the branch counter to a reset value in response to detecting the execution state switch; and following the execution state switch, the prediction control circuitry is configured to determine, based on the branch counter, whether to re-enable use of the first prediction table in determining the first type of prediction.11. The apparatus according to any preceding clause, in which the prediction circuitry is configured to look up the first prediction table based on a hash value derived from a program counter address and the first portion of the branch history information.12. 
The apparatus according to any preceding clause, in which each entry of the first prediction table is associated with a context identifier distinguishing entries allocated in different execution contexts, where execution contexts corresponding to the first execution state and the second execution state have different context identifiers; and in a lookup of the first prediction table performed for a first execution context, the prediction circuitry is configured to detect a miss for a given entry of the first prediction table when a mismatch is detected between the context identifier for the given entry and a context identifier associated with the first execution context.13. The apparatus according to any preceding clause, in which the branch history storage is configured to update the branch history information for the sequence of branches based on a first-in-first-out policy.14. The apparatus according to any preceding clause, in which, in response to a newly encountered branch, the branch history storage is configured to update a given location of the branch history storage based on the at least one branch property of the newly encountered branch, said given location being selected independent of a program counter address of the newly encountered branch.15. The apparatus according to any preceding clause, in which, for a given branch in the sequence of branches, the at least one branch property comprises information dependent on at least one of: a taken/not-taken outcome for the given branch; and a branch target address for the given branch.16. The apparatus according to any preceding clause, in which the first type of prediction comprises branch target address prediction.17. The apparatus according to any preceding clause, in which the first type of prediction comprises polymorphic branch target address prediction, and the first prediction table supports two or more entries being allocated to provide two or more different target addresses corresponding to the same branch instruction but different values of the first portion of the branch history information.18. The apparatus according to any of clauses 1 to 15, in which the first type of prediction comprises a prefetch prediction for determining data or instructions to be prefetched into a cache.19. The apparatus according to any of clauses 1 to 15, in which the first type of prediction comprises a value prediction to predict a value of data or instructions to be loaded from memory.20. 
A method comprising:executing instructions using an apparatus comprising processing circuitry having a plurality of execution states for execution of instructions, branch history storage to store branch history information indicative of at least one branch property for a sequence of branches, and prediction circuitry to determine a prediction for controlling execution of at least one instruction by the processing circuitry, where the prediction circuitry is configured to determine a first type of prediction based at least on a first prediction table storing prediction information looked up based on at least a first portion of the branch history information corresponding to a first predetermined number of branches;in response to detecting an execution state switch of the processing circuitry from a first execution state to a second execution state more privileged than the first execution state, disabling use of the first prediction table in determining the first type of prediction; andin response to detecting that a number of branches causing an update to the branch history storage since the execution state switch is greater than or equal to the first predetermined number, re-enabling use of the first prediction table in determining the first type of prediction.21. A computer-readable medium to store computer-readable code for fabrication of an apparatus comprising:processing circuitry having a plurality of execution states for execution of instructions;branch history storage to store branch history information indicative of at least one branch property for a sequence of branches;prediction circuitry to determine a prediction for controlling execution of at least one instruction by the processing circuitry, where the prediction circuitry is configured to determine a first type of prediction based at least on a first prediction table storing prediction information looked up based on at least a first portion of the branch history information corresponding to a first predetermined number of branches; andprediction control circuitry to:in response to detecting an execution state switch of the processing circuitry from a first execution state to a second execution state more privileged than the first execution state, disable use of the first prediction table in determining the first type of prediction; andin response to detecting that a number of branches causing an update to the branch history storage since the execution state switch is greater than or equal to the first predetermined number, re-enable use of the first prediction table in determining the first type of prediction. In the present application, the words “configured to . . . ” are used to mean that an element of an apparatus has a configuration able to carry out the defined operation. In this context, a “configuration” means an arrangement or manner of interconnection of hardware or software. For example, the apparatus may have dedicated hardware which provides the defined operation, or a processor or other processing device may be programmed to perform the function. “Configured to” does not imply that the apparatus element needs to be changed in any way in order to provide the defined operation. 
Although illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope of the invention as defined by the appended claims.
99,218
11861369
DETAILED DESCRIPTION OF THE EMBODIMENTS In the following description of embodiments, it will be understood that the terms “first” and “second” are intended to identify elements, but not used to define a particular number or sequence of elements. In addition, when an element is referred to as being located “on,” “over,” “above,” “under,” or “beneath” another element, it is intended to mean relative positional relationship, but not used to limit certain cases for which the element directly contacts the other element, or at least one intervening element is present between the two elements. Accordingly, the terms such as “on,” “over,” “above,” “under,” “beneath,” “below,” and the like that are used herein are for the purpose of describing particular embodiments only and are not intended to limit the scope of the present disclosure. Further, when an element is referred to as being “connected” or “coupled” to another element, the element may be electrically or mechanically connected or coupled to the other element directly, or may be electrically or mechanically connected or coupled to the other element indirectly with one or more additional elements between the two elements. Moreover, when a parameter is referred to as being “predetermined,” it may be intended to mean that a value of the parameter is determined in advance of when the parameter is used in a process or an algorithm. The value of the parameter may be set when the process or the algorithm starts or may be set during a period in which the process or the algorithm is executed. A logic “high” level and a logic “low” level may be used to describe logic levels of electric signals. A signal with a logic “high” level may be distinguished from a signal with a logic “low” level. For example, when a signal with a first voltage corresponds to a signal with a logic “high” level, a signal with a second voltage may correspond to a signal with a logic “low” level. In an embodiment, the logic “high” level may be set as a voltage level which is higher than a voltage level of the logic “low” level. Meanwhile, logic levels of signals may be set to be different or opposite according to embodiment. For example, a certain signal with a logic “high” level in one embodiment may be set to have a logic “low” level in another embodiment. Various embodiments of the present disclosure will be described hereinafter in detail with reference to the accompanying drawings. However, the embodiments described herein are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Various embodiments are directed to processing-in-memory (PIM) devices which are capable of performing a deterministic arithmetic operation at a high speed. FIG.1is a block diagram illustrating a PIM device according to an embodiment of the present disclosure. As illustrated inFIG.1, the PIM device10may include a data storage region11, an arithmetic circuit12, an interface (I/F)13-1, and a data (DQ) input/output (I/O) pad13-2. The data storage region11may include a first storage region and a second storage region. In an embodiment, the first storage region and the second storage region may be a first memory bank and a second memory bank, respectively. In another embodiment, the first data storage region and the second storage region may be a memory bank and buffer memory, respectively. The data storage region11may include a volatile memory element or a non-volatile memory element. 
For an embodiment, the data storage region11may include both a volatile memory element and a non-volatile memory element. The arithmetic circuit12may perform an arithmetic operation on the data transferred from the data storage region11. In an embodiment, the arithmetic circuit12may include a multiplying-and-accumulating (MAC) operator. The MAC operator may perform a multiplying calculation on the data transferred from the data storage region11and perform an accumulating calculation on a multiplication result data. After MAC operations, the MAC operator may output MAC result data. The MAC result data may be stored in the data storage region11or output from the PIM device10through the data I/O pad13-2. In an embodiment, the arithmetic circuit12may perform additional operations, for example a bias addition operation and an active function operation, for a neural network calculation, for example, an arithmetic operation in a deep learning process. In another embodiment, the PIM device10may include a bias addition circuit and active function circuit separated from the arithmetic circuit12. The interface13-1of the PIM device10may receive an external command E_CMD and an input address I_ADDR from an external device. The external device may denote a host or a PIM controller coupled to the PIM device10. Hereinafter, it may be assumed that the external command E_CMD transmitted to the PIM device10is a command requesting the MAC arithmetic operation. That is, the PIM device10may perform a MAC arithmetic operation in response to the external command E_CMD. The data I/O pad13-2of the PIM device10may function as a data communication terminal between a device external to the PIM device10, for example the PIM controller or a host located outside the PIM system1. Accordingly, data that is output from the host or the PIM controller may be input into the PIM device10through the data I/O pad13-2. Also, data that is output from the PIM device10may be input to the host or the PIM controller through the data I/O pad13-2. In an embodiment, the PIM device10may operate in a memory mode or a MAC arithmetic mode. In the event that the PIM device10operates in the memory mode, the PIM device10may perform a data read operation or a data write operation for the data storage region11. In the event that the PIM device10operates in the MAC arithmetic mode, the arithmetic circuit12of the PIM device10may receive first data and second data from the data storage region11to perform the MAC arithmetic operation. In the event that the PIM device10operates in the MAC arithmetic mode, the PIM device10may also perform the data write operation for the data storage region11to execute the MAC arithmetic operation. The MAC arithmetic operation may be a deterministic arithmetic operation that is performed during a predetermined fixed time. The word “predetermined” as used herein with respect to a parameter, such as a predetermined fixed time or time period, means that a value for the parameter is determined prior to the parameter being used in a process or algorithm. For some embodiments, the value for the parameter is determined before the process or algorithm begins. In other embodiments, the value for the parameter is determined during the process or algorithm but before the parameter is used in the process or algorithm. FIG.2illustrates a disposal structure indicating placement of memory banks BK0, . . . , and BK15and MAC operators MAC0, . . .
, and MAC7included in a PIM device100according to an embodiment of the present disclosure. In an embodiment, the memory banks BK0, . . . , and BK15and the MAC operators MAC0, . . . , and MAC7may be included in the data storage region and the arithmetic circuit of the PIM device10ofFIG.1, respectively. Referring toFIG.2, the PIM device100may include a data storage region and an arithmetic circuit. In an embodiment, the data storage region may include the memory banks BK0, . . . , and BK15. Although the present embodiment illustrates an example in which the data storage region includes the memory banks BK0, . . . , and BK15, the memory banks BK0, . . . , and BK15are merely examples which are suitable for the data storage region. In some embodiments, the memory banks BK0, . . . , and BK15may be a memory region corresponding to a volatile memory device, for example, a DRAM device. In an embodiment, each of the memory banks BK0, . . . , and BK15may be a component unit which is independently activated and may be configured to have the same data bus width as data I/O lines in the PIM device100. In an embodiment, the memory banks BK0, . . . , and BK15may operate through interleaving such that an active operation of any one of the memory banks is performed in parallel while another memory bank is selected. Although the present embodiment illustrates an example in which the PIM device100includes the memory banks BK0, . . . , and BK15, the number of the memory banks is not limited to 16 and may be different in different embodiments. Each of the memory banks BK0, . . . , and BK15may include at least one cell array which includes memory unit cells located at cross points of a plurality of rows and a plurality of columns. The memory banks BK0, . . . , and BK15may include a first group of memory banks (e.g., odd-numbered memory banks BK0, BK2, . . . , and BK14) and a second group of memory banks (e.g., even-numbered memory banks BK1, BK3, . . . , and BK15). A core circuit may be disposed to be adjacent to the memory banks BK0, . . . , and BK15. The core circuit may include X-decoders XDECs and Y-decoders/IO circuits YDEC/IOs. An X-decoder XDEC may also be referred to as a word line decoder or a row decoder. In an embodiment, two odd-numbered memory banks arrayed to be adjacent to each other in one row among the odd-numbered memory banks BK0, BK2, . . . , and BK14may share one of the X-decoders XDECs with each other. For example, the first memory bank BK0and the third memory bank BK2adjacent to each other in a first row may share one of the X-decoders XDECs, and the fifth memory bank BK4and the seventh memory bank BK6adjacent to each other in the first row may also share one of the X-decoders XDECs. Similarly, two even-numbered memory banks arrayed to be adjacent to each other in one row among the even-numbered memory banks BK1, BK3, . . . , and BK15may share one of the X-decoders XDECs with each other. For example, the second memory bank BK1and the fourth memory bank BK3adjacent to each other in a second row may share one of the X-decoders XDECs, and the sixth memory bank BK5and the eighth memory bank BK7adjacent to each other in the second row may also share one of the X-decoders XDECs. The X-decoder XDEC may receive a row address from an address latch included in a peripheral circuit PERI and may decode the row address to select and enable one of rows (i.e., word lines) coupled to the memory banks adjacent to the X-decoder XDEC. 
The Y-decoders/IO circuits YDEC/IOs may be disposed to be allocated to the memory banks BK0, . . . , and BK15, respectively. For example, the first memory bank BK0may be allocated to one of the Y-decoders/IO circuits YDEC/IOs, and the second memory bank BK1may be allocated to another one of the Y-decoders/IO circuits YDEC/IOs. Each of the Y-decoders/IO circuits YDEC/IOs may include a Y-decoder YDEC and an I/O circuit IO. The Y-decoder YDEC may also be referred to as a bit line decoder or a column decoder. The Y-decoder YDEC may receive a column address from an address latch included in the peripheral circuit PERI and may decode the column address to select and enable at least one of columns (i.e., bit lines) coupled to the selected memory bank. Each of the I/O circuits may include an I/O sense amplifier for sensing and amplifying a level of a read datum output from the corresponding memory bank during a read operation and a write driver for driving a write datum during a write operation for the corresponding memory bank. In an embodiment, the arithmetic circuit may include MAC operators MAC0, . . . , and MAC7. Although the present embodiment illustrates an example in which the MAC operators MAC0, . . . , and MAC7are employed as the arithmetic circuit, the present embodiment may be merely an example of the present disclosure. For example, in some other embodiments, processors other than the MAC operators MAC0, . . . , and MAC7may be employed as the arithmetic circuit. The MAC operators MAC0, . . . , and MAC7may be disposed such that one of the odd-numbered memory banks BK0, BK2, . . . , and BK14and one of the even-numbered memory banks BK1, BK3, . . . , and BK15share any one of the MAC operators MAC0, . . . , and MAC7with each other. Specifically, one odd-numbered memory bank and one even-numbered memory bank arrayed in one column to be adjacent to each other may constitute a pair of memory banks sharing one of the MAC operators MAC0, . . . , and MAC7with each other. One of the MAC operators MAC0, . . . , and MAC7and a pair of memory banks sharing the one MAC operator with each other will be referred to as ‘a MAC unit’ hereinafter. In an embodiment, the number of the MAC operators MAC0, . . . , and MAC7may be equal to the number of the odd-numbered memory banks BK0, BK2, . . . , and BK14or the number of the even-numbered memory banks BK1, BK3, . . . , and BK15. The first memory bank BK0, the second memory bank BK1, and the first MAC operator MAC0between the first memory bank BK0and the second memory bank BK1may constitute a first MAC unit. In addition, the third memory bank BK2, the fourth memory bank BK3, and the second MAC operator MAC1between the third memory bank BK2and the fourth memory bank BK3may constitute a second MAC unit. The first MAC operator MAC0included in the first MAC unit may receive first data DA1output from the first memory bank BK0included in the first MAC unit and second data DA2that are output from the second memory bank BK1included in the first MAC unit. In addition, the first MAC operator MAC0may perform a MAC arithmetic operation of the first data DA1and the second data DA2. In the event that the PIM device100performs a neural network calculation, for example, an arithmetic operation in a deep learning process, one of the first data DA1and the second data DA2may be weight data and the other may be vector data. A configuration of any one of the MAC operators MAC0˜MAC7will be described in more detail hereinafter. 
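As an illustrative, non-limiting software model of the multiplying-and-accumulating behaviour described for a MAC unit (the element width, the signed 8-bit data type and the vector length are assumptions made only for this sketch, not properties defined by the disclosure), one MAC operator combining first data DA1 and second data DA2 could be modelled as follows.

#include <stdint.h>
#include <stddef.h>

/* One MAC operator: multiply weight data DA1 (read from the odd-numbered bank)
   by vector data DA2 (read from the even-numbered bank) element by element and
   accumulate the products into a single MAC result. */
static int32_t mac_operate(const int8_t *da1_weight, const int8_t *da2_vector, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        acc += (int32_t)da1_weight[i] * (int32_t)da2_vector[i];
    }
    return acc;   /* MAC result data: written back to a bank or driven out via the data I/O */
}

In a deep learning workload of the kind mentioned above, da1_weight would hold one row of weight data and da2_vector the corresponding vector data, with one such accumulation produced per MAC unit.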
In the PIM device100, the peripheral circuit PERI may be disposed in a region other than an area in which the memory banks BK0, BK1, . . . , and BK15, the MAC operators MAC0, . . . , and MAC7, and the core circuit are disposed. The peripheral circuit PERI may include a control circuit and a transmission path for a command/address signal, a control circuit and a transmission path for input/output of data, and a power supply circuit. The control circuit for the command/address signal may include a command decoder for decoding a command included in the command/address signal to generate an internal command signal, an address latch for converting an input address into a row address and a column address, a control circuit for controlling various functions of row/column operations, and a control circuit for controlling a delay locked loop (DLL) circuit. The control circuit for the input/output of data in the peripheral circuit PERI may include a control circuit for controlling a read/write operation, a read/write buffer, and an output driver. The power supply circuit in the peripheral circuit PERI may include a reference power voltage generation circuit for generating an internal reference power voltage and an internal power voltage generation circuit for generating an internal power voltage from an external power voltage. The PIM device100according to the present embodiment may operate in any one mode of a memory mode and a MAC arithmetic mode. In the memory mode, the PIM device100may operate to perform the same operations as general memory devices. The memory mode may include a memory read operation mode and a memory write operation mode. In the memory read operation mode, the PIM device100may perform a read operation for reading out data from the memory banks BK0, BK1, . . . , and BK15to output the read data, in response to an external request. In the memory write operation mode, the PIM device100may perform a write operation for storing data provided by an external device into the memory banks BK0, BK1, . . . , and BK15, in response to an external request. In the MAC arithmetic mode, the PIM device100may perform the MAC arithmetic operation using the MAC operators MAC0, . . . , and MAC7. Specifically, the PIM device100may perform the read operation of the first data DA1for each of the odd-numbered memory banks BK0, BK2, . . . , and BK14and the read operation of the second data DA2for each of the even-numbered memory banks BK1, BK3, . . . , and BK15, for the MAC arithmetic operation in the MAC arithmetic mode. In addition, each of the MAC operators MAC0, . . . , and MAC7may perform the MAC arithmetic operation of the first data DA1and the second data DA2which are read out of the memory banks to store a result of the MAC arithmetic operation into the memory bank or to output the result of the MAC arithmetic operation. In some cases, the PIM device100may perform a data write operation for storing data to be used for the MAC arithmetic operation into the memory banks before the data read operation for the MAC arithmetic operation is performed in the MAC arithmetic mode. The operation mode of the PIM device100according to the present embodiment may be determined by a command which is transmitted from a host or a controller to the PIM device100. In an embodiment, if a first external command requesting a read operation or a write operation for the memory banks BK0, BK1, . . .
, and BK15is input to the PIM device100, the PIM device100may perform the data read operation or the data write operation in the memory mode. Meanwhile, if a second external command requesting a MAC calculation corresponding to the MAC arithmetic operation is input to the PIM device100, the PIM device100may perform the MAC arithmetic operation. The PIM device100may perform a deterministic MAC arithmetic operation. The term “deterministic MAC arithmetic operation” used in the present disclosure may be defined as the MAC arithmetic operation that is performed in the PIM device100during a predetermined fixed time. Thus, the host or the controller may always predict a point in time (or a clock) when the MAC arithmetic operation terminates in the PIM device100at a point in time when an external command requesting the MAC arithmetic operation is transmitted from the host or the controller to the PIM device100. No operation for informing the host or the controller of a status of the MAC arithmetic operation is required while the PIM device100performs the deterministic MAC arithmetic operation. In an embodiment, a latency during which the MAC arithmetic operation is performed in the PIM device100may be fixed for the deterministic MAC arithmetic operation. FIG.3is a block diagram illustrating a configuration of a PIM device200corresponding to the PIM device100illustrated inFIG.2, andFIG.4illustrates an internal command signal I_CMD that is output from a command decoder250and a MAC command signal MAC_CMD that is output from a MAC command generator270included in the PIM device200ofFIG.3.FIG.3illustrates only the first memory bank (BK0)211, the second memory bank (BK1)212, and the first MAC operator (MAC0)220constituting the first MAC unit among the plurality of MAC units. However,FIG.3illustrates merely an example for simplification of the drawing. Accordingly, the following description for the first MAC unit may be equally applicable to the remaining MAC units. Referring toFIG.3, the PIM device200may include a global I/O line (hereinafter, referred to as a ‘GIO line’)290. The first memory bank (BK0)211, the second memory bank (BK1)212, and the first MAC operator (MAC0)220may communicate with each other through the GIO line290. In an embodiment, the GIO line290may be disposed in the peripheral circuit PERI ofFIG.2. The PIM device200may include a receiving driver (RX)230, a data I/O circuit (DQ)240, a command decoder250, an address latch260, a MAC command generator270, and a serializer/deserializer (SER/DES)280. The command decoder250, the address latch260, the MAC command generator270, and the serializer/deserializer280may be disposed in the peripheral circuit PERI of the PIM device100illustrated inFIG.2. The receiving driver230may receive an external command E_CMD and an input address I_ADDR from an external device. The external device may denote a host or a controller coupled to the PIM device200. Hereinafter, it may be assumed that the external command E_CMD transmitted to the PIM device200is a command requesting the MAC arithmetic operation. That is, the PIM device200may perform the deterministic MAC arithmetic operation in response to the external command E_CMD. The data I/O circuit240may include an I/O pad. The data I/O circuit240may be coupled to a data I/O line. The PIM device200may communicate with the external device through the data I/O circuit240. The receiving driver230may separately output the external command E_CMD and the input address I_ADDR received from the external device.
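Because the deterministic MAC arithmetic operation described above completes within a predetermined fixed time, the host or the controller can compute the completion point once the external command is issued, without any status reporting from the PIM device. The following sketch is illustrative only; the latency values are placeholders and are not taken from the disclosure.

```python
# Illustrative sketch of deterministic completion-time prediction: every phase
# of the MAC arithmetic operation takes a fixed, known number of clocks, so the
# host needs no status reporting from the PIM device. The values below are
# placeholders, not figures from the disclosure.
FIXED_MAC_LATENCY_CLOCKS = {
    "activate_banks": 4,      # activate the paired memory banks
    "read_operands": 6,       # read DA1 and DA2 out of the banks
    "mac_arithmetic": 5,      # multiply and accumulate
    "latch_result": 3,        # latch and output the MAC result data
}

def predicted_completion_clock(issue_clock: int) -> int:
    # The host simply adds the fixed latencies to the clock at which the
    # external command requesting the MAC arithmetic operation was issued.
    return issue_clock + sum(FIXED_MAC_LATENCY_CLOCKS.values())

print(predicted_completion_clock(100))   # -> 118 with the placeholder values
```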
Data DA that is input to the PIM device200through the data I/O circuit240may be processed by the serializer/deserializer280and may be transmitted to the first memory bank (BK0)211and the second memory bank (BK1)212through the GIO line290of the PIM device200. The data DA that is output from the first memory bank (BK0)211, the second memory bank (BK1)212, and the first MAC operator (MAC0)220through the GIO line290may be processed by the serializer/deserializer280and may be output to the external device through the data I/O circuit240. The serializer/deserializer280may convert the data DA into parallel data if the data DA are serial data or may convert the data DA into serial data if the data DA are parallel data. For the data conversion, the serializer/deserializer280may include a serializer converting parallel data into serial data and a deserializer converting serial data into parallel data. The command decoder250may decode the external command E_CMD that is output from the receiving driver230to generate and output the internal command signal I_CMD. As illustrated inFIG.4, the internal command signal I_CMD that is output from the command decoder250may include first to fourth internal command signals. In an embodiment, the first internal command signal may be a memory active signal ACT_M, the second internal command signal may be a memory read signal READ_M, the third internal command signal may be a MAC arithmetic signal MAC, and the fourth internal command signal may be a result read signal READ_RST. The first to fourth internal command signals that are output from the command decoder250may be sequentially input to the MAC command generator270. In order to perform the deterministic MAC arithmetic operation of the PIM device200, the memory active signal ACT_M, the memory read signal READ_M, the MAC arithmetic signal MAC, and the result read signal READ_RST that is output from the command decoder250may be sequentially generated at predetermined points in time (or clocks). In an embodiment, the memory active signal ACT_M, the memory read signal READ_M, the MAC arithmetic signal MAC, and the result read signal READ_RST may have predetermined latencies, respectively. For example, the memory read signal READ_M may be generated after a first latency elapses from a point in time when the memory active signal ACT_M is generated, the MAC arithmetic signal MAC may be generated after a second latency elapses from a point in time when the memory read signal READ_M is generated, and the result read signal READ_RST may be generated after a third latency elapses from a point in time when the MAC arithmetic signal MAC is generated. No signal is generated by the command decoder250until a fourth latency elapses from a point in time when the result read signal READ_RST is generated. The first to fourth latencies may be predetermined and fixed. Thus, the host or the controller outputting the external command E_CMD may predict the points in time when the first to fourth internal command signals constituting the internal command signal I_CMD are generated by the command decoder250in advance at a point in time when the external command E_CMD is output from the host or the controller. The address latch260may convert the input address I_ADDR that is output from the receiving driver230into a bank selection signal BK_S and a row/column address ADDR_R/ADDR_C to output the bank selection signal BK_S and the row/column address ADDR_R/ADDR_C. The bank selection signal BK_S may be input to the MAC command generator270. 
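Before continuing with the address path, the fixed spacing of the first to fourth internal command signals described above can be pictured with a small scheduling sketch. Only the signal names and their ordering come from the description; the latency values below are illustrative placeholders.

```python
# Illustrative schedule of the four internal command signals generated by the
# command decoder; l1, l2, and l3 are placeholder clock counts, fixed by design.
def internal_command_schedule(t0, l1=4, l2=6, l3=5):
    yield (t0, "ACT_M")                     # memory active signal
    yield (t0 + l1, "READ_M")               # memory read signal, L1 after ACT_M
    yield (t0 + l1 + l2, "MAC")             # MAC arithmetic signal, L2 after READ_M
    yield (t0 + l1 + l2 + l3, "READ_RST")   # result read signal, L3 after MAC

for clock, signal in internal_command_schedule(0):
    print(clock, signal)
# 0 ACT_M / 4 READ_M / 10 MAC / 15 READ_RST
```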
The row/column address ADDR_R/ADDR_C may be transmitted to the first and second memory banks211and212. One of the first and second memory banks211and212may be selected by the bank selection signal BK_S. One of rows included in the selected memory bank and one of columns included in the selected memory bank may be selected by the row/column address ADDR_R/ADDR_C. In an embodiment, a point in time when the bank selection signal BK_S is input to the MAC command generator270may be the same moment as a point in time when the row/column address ADDR_R/ADDR_C is input to the first and second memory banks211and212. In an embodiment, the point in time when the bank selection signal BK_S is input to the MAC command generator270and the point in time when the row/column address ADDR_R/ADDR_C is input to the first and second memory banks211and212may be a point in time when the MAC command is generated to read out data from the first and second memory banks211and212for the MAC arithmetic operation. The MAC command generator270may output the MAC command signal MAC_CMD in response to the internal command signal I_CMD that is output from the command decoder250and the bank selection signal BK_S output from the address latch260. As illustrated inFIG.4, the MAC command signal MAC_CMD that is output from the MAC command generator270may include first to seventh MAC command signals. In an embodiment, the first MAC command signal may be a MAC active signal RACTV, the second MAC command signal may be a first MAC read signal MAC_RD_BK0, the third MAC command signal may be a second MAC read signal MAC_RD_BK1, the fourth MAC command signal may be a first MAC input latch signal MAC_L1, the fifth MAC command signal may be a second MAC input latch signal MAC_L2, the sixth MAC command signal may be a MAC output latch signal MAC_L3, and the seventh MAC command signal may be a MAC result latch signal MAC_L_RST. The MAC active signal RACTV may be generated based on the memory active signal ACT_M that is output from the command decoder250. The first MAC read signal MAC_RD_BK0may be generated in response to the memory read signal READ_M output from the command decoder250and the bank selection signal BK_S with a first level (e.g., a logic “low” level) output from the address latch260. The first MAC input latch signal MAC_L1may be generated at a point in time when a certain time elapses from a point in time when the first MAC read signal MAC_RD_BK0is generated. For various embodiments, a certain time means a fixed time duration. The second MAC read signal MAC_RD_BK1may be generated in response to the memory read signal READ_M output from the command decoder250and the bank selection signal BK_S with a second level (e.g., a logic “high” level) output from the address latch260. The second MAC input latch signal MAC_L2may be generated at a point in time when a certain time elapses from a point in time when the second MAC read signal MAC_RD_BK1is generated. The MAC output latch signal MAC_L3may be generated in response to the MAC arithmetic signal MAC that is output from the command decoder250. Finally, the MAC result latch signal MAC_L_RST may be generated in response to the result read signal READ_RST that is output from the command decoder250. The MAC active signal RACTV that is output from the MAC command generator270may control an activation operation for the first and second memory banks211and212. 
The first MAC read signal MAC_RD_BK0output from the MAC command generator270may control a data read operation for the first memory bank211. The second MAC read signal MAC_RD_BK1output from the MAC command generator270may control a data read operation for the second memory bank212. The first MAC input latch signal MAC_L1and the second MAC input latch signal MAC_L2output from the MAC command generator270may control an input data latch operation of the first MAC operator (MAC0)220. The MAC output latch signal MAC_L3that is output from the MAC command generator270may control an output data latch operation of the first MAC operator (MAC0)220. The MAC result latch signal MAC_L_RST that is output from the MAC command generator270may control a reset operation of the first MAC operator (MAC0)220. As described above, in order to perform the deterministic MAC arithmetic operation of the PIM device200, the memory active signal ACT_M, the memory read signal READ_M, the MAC arithmetic signal MAC, and the result read signal READ_RST that are output from the command decoder250may be sequentially generated at predetermined points in time (or clocks), respectively. Thus, the MAC active signal RACTV, the first MAC read signal MAC_RD_BK0, the second MAC read signal MAC_RD_BK1, the first MAC input latch signal MAC_L1, the second MAC input latch signal MAC_L2, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may also be generated and output from the MAC command generator270at predetermined points in time after the external command E_CMD is input to the PIM device200, respectively. That is, a time period from a point in time when the first and second memory banks211and212are activated by the MAC active signal RACTV until a point in time when the first MAC operator (MAC0)220is reset by the MAC result latch signal MAC_L_RST may be predetermined, and thus the PIM device200may perform the deterministic MAC arithmetic operation. FIG.5illustrates an example of a configuration of the MAC command generator270included in the PIM device200illustrated inFIG.3. Referring toFIG.5, the MAC command generator270may sequentially receive the memory active signal ACT_M, the memory read signal READ_M, the MAC arithmetic signal MAC, and the result read signal READ_RST from the command decoder250. In addition, the MAC command generator270may also receive the bank selection signal BK_S from the address latch260. The MAC command generator270may output the MAC active signal RACTV, the first MAC read signal MAC_RD_BK0, the second MAC read signal MAC_RD_BK1, the first MAC input latch signal MAC_L1, the second MAC input latch signal MAC_L2, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST in series with certain time intervals. For an embodiment, a certain time interval is a time interval with a fixed duration. In an embodiment, the MAC command generator270may be configured to include an active signal generator271, a delay circuit272, an inverter273, and first to fourth AND gates274,275,276, and277. The active signal generator271may receive the memory active signal ACT_M to generate and output the MAC active signal RACTV. The MAC active signal RACTV that is output from the active signal generator271may be transmitted to the first and second memory banks211and212to activate the first and second memory banks211and212.
The delay circuit272may receive the memory read signal READ_M and may delay the memory read signal READ_M by a delay time DELAY_T to output the delayed signal of the memory read signal READ_M. The inverter273may receive the bank selection signal BK_S and may invert a logic level of the bank selection signal BK_S to output the inverted signal of the bank selection signal BK_S. The first AND gate274may receive the memory read signal READ_M and an output signal of the inverter273and may perform a logical AND operation of the memory read signal READ_M and an output signal of the inverter273to generate and output the first MAC read signal MAC_RD_BK0. The second AND gate275may receive the memory read signal READ_M and the bank selection signal BK_S and may perform a logical AND operation of the memory read signal READ_M and the bank selection signal BK_S to generate and output the second MAC read signal MAC_RD_BK1. The third AND gate276may receive an output signal of the delay circuit272and an output signal of the inverter273and may perform a logical AND operation of the output signals of the delay circuit272and the inverter273to generate and output the first MAC input latch signal MAC_L1. The fourth AND gate277may receive an output signal of the delay circuit272and the bank selection signal BK_S and may perform a logical AND operation of the output signal of the delay circuit272and the bank selection signal BK_S to generate and output the second MAC input latch signal MAC_L2. It may be assumed that the memory read signal READ_M that is input to the MAC command generator270has a logic “high” level and the bank selection signal BK_S that is input to the MAC command generator270has a logic “low” level. A level of the bank selection signal BK_S may change from a logic “low” level into a logic “high” level after a certain time elapses. When the memory read signal READ_M has a logic “high” level and the bank selection signal BK_S has a logic “low” level, the first AND gate274may output the first MAC read signal MAC_RD_BK0with a logic “high” level and the second AND gate275may output the second MAC read signal MAC_RD_BK1with a logic “low” level. The first memory bank211may transmit the first data DA1to the first MAC operator220according to a control operation based on the first MAC read signal MAC_RD_BK0with a logic “high” level. If a level transition of the bank selection signal BK_S occurs so that both of the memory read signal READ_M and the bank selection signal BK_S have a logic “high” level, the first AND gate274may output the first MAC read signal MAC_RD_BK0with a logic “low” level and the second AND gate275may output the second MAC read signal MAC_RD_BK1with a logic “high” level. The second memory bank212may transmit the second data DA2to the first MAC operator220according to a control operation based on the second MAC read signal MAC_RD_BK1with a logic “high” level. Due to the delay time of the delay circuit272, the output signals of the third and fourth AND gates276and277may be generated after the first and second MAC read signals MAC_RD_BK0and MAC_RD_BK1are generated. Thus, after the second MAC read signal MAC_RD_BK1is generated, the third AND gate276may output the first MAC input latch signal MAC_L1with a logic “high” level. The first MAC operator220may latch the first data DA1in response to the first MAC input latch signal MAC_L1with a logic “high” level. 
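Before continuing with the latch operation of the second data DA2, the combinational behavior of the FIG.5 logic described above can be summarized with the following illustrative sketch. It is a simplified behavioral model, not an implementation of the disclosed circuit; the delayed READ_M argument stands for the output of the delay circuit272.

```python
# Simplified behavioral model of the FIG. 5 logic: the inverter and the four
# AND gates combine READ_M, BK_S, and the delayed READ_M (output of the delay
# circuit) into the two MAC read signals and the two MAC input latch signals.
def mac_command_gates(read_m, bk_s, read_m_delayed):
    mac_rd_bk0 = read_m and (not bk_s)        # first AND gate with the inverter
    mac_rd_bk1 = read_m and bk_s              # second AND gate
    mac_l1 = read_m_delayed and (not bk_s)    # third AND gate (delayed READ_M)
    mac_l2 = read_m_delayed and bk_s          # fourth AND gate (delayed READ_M)
    return mac_rd_bk0, mac_rd_bk1, mac_l1, mac_l2

# BK_S low selects the first bank's read path, BK_S high selects the second.
assert mac_command_gates(True, False, False) == (True, False, False, False)
assert mac_command_gates(True, True, False) == (False, True, False, False)
```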
After a certain time elapses from a point in time when the first data DA1are latched by the first MAC operator220, the fourth AND gate277may output the second MAC input latch signal MAC_L2with a logic “high” level. The first MAC operator220may latch the second data DA2in response to the second MAC input latch signal MAC_L2with a logic “high” level. The first MAC operator220may start to perform the MAC arithmetic operation after the first and second data DA1and DA2are latched. The MAC command generator270may generate the MAC output latch signal MAC_L3in response to the MAC arithmetic signal MAC that is output from the command decoder250. The MAC output latch signal MAC_L3may have the same logic level as the MAC arithmetic signal MAC. For example, if the MAC arithmetic signal MAC with a logic “high” level is input to the MAC command generator270, the MAC command generator270may generate the MAC output latch signal MAC_L3with a logic “high” level. The MAC command generator270may generate the MAC result latch signal MAC_L_RST in response to the result read signal READ_RST that is output from the command decoder250. The MAC result latch signal MAC_L_RST may have the same logic level as the result read signal READ_RST. For example, if the result read signal READ_RST with a logic “high” level is input to the MAC command generator270, the MAC command generator270may generate the MAC result latch signal MAC_L_RST with a logic “high” level. FIG.6illustrates input signals and output signals of the MAC command generator270illustrated inFIG.5along a timeline. InFIG.6, signals transmitted from the command decoder250to the MAC command generator270are illustrated in an upper dotted line box, and signals that are output from the MAC command generator270are illustrated in a lower dotted line box. Referring toFIGS.5and6, at a first point in time “T1” of the timeline, the memory active signal ACT_M may be input to the MAC command generator270and the MAC command generator270may output the MAC active signal RACTV. At a second point in time “T2” when a certain time, for example, a first latency L1elapses from the first point in time “T1”, the memory read signal READ_M with a logic “high” level and the bank selection signal BK_S with a logic “low” level may be input to the MAC command generator270. In response to the memory read signal READ_M with a logic “high” level and the bank selection signal BK_S with a logic “low” level, the MAC command generator270may output the first MAC read signal MAC_RD_BK0with a logic “high” level and the second MAC read signal MAC_RD_BK1with a logic “low” level, as described with reference toFIG.5. At a third point in time “T3” when a certain time elapses from the second point in time “T2”, a logic level of the bank selection signal BK_S may change from a logic “low” level into a logic “high” level. In such a case, the MAC command generator270may output the first MAC read signal MAC_RD_BK0with a logic “low” level and the second MAC read signal MAC_RD_BK1with a logic “high” level, as described with reference toFIG.5. At a fourth point in time “T4” when the delay time DELAY_T elapses from the second point in time “T2”, the MAC command generator270may output the first MAC input latch signal MAC_L1with a logic “high” level and the second MAC input latch signal MAC_L2with a logic “low” level. The delay time DELAY_T may be set by the delay circuit272.
The delay time DELAY_T may be set to be different according to a logic design scheme of the delay circuit272and may be fixed once the logic design scheme of the delay circuit272is determined. In an embodiment, the delay time DELAY_T may be set to be equal to or greater than a second latency L2. At a fifth point in time “T5” when a certain time elapses from the fourth point in time “T4”, the MAC command generator270may output the first MAC input latch signal MAC_L1with a logic “low” level and the second MAC input latch signal MAC_L2with a logic “high” level. The fifth point in time “T5” may be a moment when the delay time DELAY_T elapses from the third point in time “T3”. At a sixth point in time “T6” when a certain time, for example, a third latency L3elapses from the fourth point in time “T4”, the MAC arithmetic signal MAC with a logic “high” level may be input to the MAC command generator270. In response to the MAC arithmetic signal MAC with a logic “high” level, the MAC command generator270may output the MAC output latch signal MAC_L3with a logic “high” level, as described with reference toFIG.5. Subsequently, at a seventh point in time “T7” when a certain time, for example, a fourth latency L4elapses from the sixth point in time “T6”, the result read signal READ_RST with a logic “high” level may be input to the MAC command generator270. In response to the result read signal READ_RST with a logic “high” level, the MAC command generator270may output the MAC result latch signal MAC_L_RST with a logic “high” level, as described with reference toFIG.5. In order to perform the deterministic MAC arithmetic operation, moments when the internal command signals ACT_M, READ_M, MAC, and READ_RST generated by the command decoder250are input to the MAC command generator270may be fixed and moments when the MAC command signals RACTV, MAC_RD_BK0, MAC_RD_BK1, MAC_L1, MAC_L2, MAC_L3, and MAC_L_RST are output from the MAC command generator270in response to the internal command signals ACT_M, READ_M, MAC, and READ_RST may also be fixed. Thus, all of the first latency L1between the first point in time “T1” and the second point in time “T2”, the second latency L2between the second point in time “T2” and the fourth point in time “T4”, the third latency L3between the fourth point in time “T4” and the sixth point in time “T6”, and the fourth latency L4between the sixth point in time “T6” and the seventh point in time “T7” may have fixed values. In an embodiment, the first latency L1may be defined as a time it takes to activate both of the first and second memory banks based on the MAC active signal RACTV. The second latency L2may be defined as a time it takes to read the first and second data out of the first and second memory banks BK0and BK1based on the first and second MAC read signals MAC_RD_BK0and MAC_RD_BK1and to input the first and second data DA1and DA2into the first MAC operator (MAC0)220. The third latency L3may be defined as a time it takes to latch the first and second data DA1and DA2in the first MAC operator (MAC0)220based on the first and second MAC input latch signals MAC_L1and MAC_L2and it takes the first MAC operator (MAC0)220to perform the MAC arithmetic operation of the first and second data. The fourth latency L4may be defined as a time it takes to latch the output data in the first MAC operator (MAC0)220based on the MAC output latch signal MAC_L3. FIG.7illustrates an example of a configuration of the first MAC operator (MAC0)220included in the PIM device200illustrated inFIG.3.
Referring toFIG.7, the first MAC operator (MAC0)220may be configured to include a data input circuit221, a MAC circuit222, and a data output circuit223. The data input circuit221may be configured to include a first input latch221-1and a second input latch221-2. The MAC circuit222may be configured to include a multiplication logic circuit222-1and an addition logic circuit222-2. The data output circuit223may be configured to include an output latch223-1, a transfer gate223-2, a delay circuit223-3, and an inverter223-4. In an embodiment, the first input latch221-1, the second input latch221-2, and the output latch223-1may be realized using flip-flops. The data input circuit221of the first MAC operator (MAC0)220may be synchronized with the first and second MAC input latch signals MAC_L1and MAC_L2to receive and output the first and second data DA1and DA2that are input through the GIO line290to the MAC circuit222. Specifically, the first data DA1may be transmitted from the first memory bank BK0(211ofFIG.3) to the first input latch221-1of the data input circuit221through the GIO line290, in response to the first MAC read signal MAC_RD_BK0with a logic “high” level that is output from the MAC command generator (270ofFIG.3). The second data DA2may be transmitted from the second memory bank BK1(212ofFIG.3) to the second input latch221-2of the data input circuit221through the GIO line290, in response to the second MAC read signal MAC_RD_BK1with a logic “high” level that is output from the MAC command generator270. The first input latch221-1may output the first data DA1to the MAC circuit222in synchronization with the first MAC input latch signal MAC_L1with a logic “high” level that is output from the MAC command generator (270ofFIG.3). The second input latch221-2may output the second data DA2to the MAC circuit222in synchronization with the second MAC input latch signal MAC_L2with a logic “high” level that is output from the MAC command generator (270ofFIG.3). As described with reference toFIG.5, the second MAC input latch signal MAC_L2may be generated at a moment (corresponding to the fifth point in time “T5” ofFIG.6) when a certain time elapses from a moment (corresponding to the fourth point in time “T4” ofFIG.6) when the first MAC input latch signal MAC_L1is generated. Thus, after the first data DA1is input to the MAC circuit222, the second data DA2may then be input to the MAC circuit222. The MAC circuit222may perform a multiplying calculation and an accumulative adding calculation for the first and second data DA1and DA2. The multiplication logic circuit222-1of the MAC circuit222may include a plurality of multipliers222-11. Each of the plurality of multipliers222-11may perform a multiplying calculation of the first data DA1output from the first input latch221-1and the second data DA2that are output from the second input latch221-2and may output the result of the multiplying calculation. Bit values constituting the first data DA1may be separately input to the multipliers222-11. Similarly, bit values constituting the second data DA2may also be separately input to the multipliers222-11. For example, if each of the first and second data DA1and DA2is comprised of an ‘N’-bit binary stream and the number of the multipliers222-11is ‘M’, the first data DA1with ‘N/M’ bits and the second data DA2with ‘N/M’ bits may be input to each of the multipliers222-11. That is, each of the multipliers222-11may be configured to perform a multiplying calculation of first ‘N/M’-bit data and second ‘N/M’-bit data.
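For illustration only, the following sketch models the data path of the MAC circuit222: the chunked multiplications described above feed the adder tree and the accumulative addition elaborated in the following paragraph. The chunk values are illustrative, and a power-of-two number of multipliers is assumed.

```python
# Behavioral sketch of the MAC circuit: M multipliers each multiply an N/M-bit
# chunk of DA1 with an N/M-bit chunk of DA2; an adder tree then reduces the
# products, and the sum is accumulated with the previously latched MAC result.
def mac_step(da1_chunks, da2_chunks, previous_result=0):
    # Multiplication logic circuit: one product per multiplier.
    products = [a * b for a, b in zip(da1_chunks, da2_chunks)]

    # Addition logic circuit: pairwise adder tree (power-of-two width assumed).
    level = products
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]

    # Accumulative adder: add the previous MAC result held in the output latch.
    return level[0] + previous_result

# Example with eight multipliers (M = 8) and illustrative chunk values.
result = mac_step([1, 2, 3, 4, 5, 6, 7, 8], [1, 1, 1, 1, 1, 1, 1, 1])
print(result)   # 36
```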
The multiplication result data that is output from each of the multipliers222-11may have ‘2N/M’ bits. The addition logic circuit222-2of the MAC circuit222may include a plurality of adders222-21. Although not shown in the drawings, the plurality of adders222-21may be disposed to provide a tree structure including a plurality of stages. Each of the adders222-21disposed at a first stage may receive two sets of the multiplication result data from two of the multipliers222-11included in the multiplication logic circuit222-1and may perform an adding calculation of the two sets of the multiplication result data to output an addition result data. Each of the adders222-21disposed at a second stage may receive two sets of the addition result data from two of the adders222-21disposed at the first stage and may perform an adding calculation of the two sets of the addition result data to output the addition result data. The adders222-21disposed at a last stage may receive two sets of the addition result data from two adders222-21disposed at the previous stage and may perform an adding calculation of the two sets of the addition result data to output the addition result data. The adders222-21constituting the addition logic circuit222-2may include an adder for performing an accumulative adding calculation of the addition result data that is output from the adder222-21disposed at the last stage and the previous MAC result data that is stored in the output latch223-1of the data output circuit223. The data output circuit223may output the MAC result data DA_MAC that is output from the MAC circuit222to the GIO line290. Specifically, the output latch223-1of the data output circuit223may latch the MAC result data DA_MAC that is output from the MAC circuit222and may output the latched data of the MAC result data DA_MAC in synchronization with the MAC output latch signal MAC_L3with a logic “high” level that is output from the MAC command generator (270ofFIG.3). The MAC result data DA_MAC that is output from the output latch223-1may be fed back to the MAC circuit222for the accumulative adding calculation. In addition, the MAC result data DA_MAC may be input to the transfer gate223-2, and the transfer gate223-2may output the MAC result data DA_MAC to the GIO line290. The output latch223-1may be initialized if a latch reset signal LATCH_RST is input to the output latch223-1. In such a case, all of data latched by the output latch223-1may be removed. In an embodiment, the latch reset signal LATCH_RST may be activated by generation of the MAC result latch signal MAC_L_RST with a logic “high” level and may be input to the output latch223-1. The MAC result latch signal MAC_L_RST that is output from the MAC command generator270may be input to the transfer gate223-2, the delay circuit223-3, and the inverter223-4. The inverter223-4may inversely buffer the MAC result latch signal MAC_L_RST to output the inversely buffered signal of the MAC result latch signal MAC_L_RST to the transfer gate223-2. The transfer gate223-2may transfer the MAC result data DA_MAC from the output latch223-1to the GIO line290in response to the MAC result latch signal MAC_L_RST with a logic “high” level. The delay circuit223-3may delay the MAC result latch signal MAC_L_RST by a certain time to generate and output a latch control signal PINSTB. FIGS.8to14are block diagrams illustrating operations of the PIM device200illustrated inFIG.3. InFIGS.8to14, the same reference numerals or the same reference symbols as used inFIG.3denote the same elements. 
First, referring toFIG.8, if the external command E_CMD requesting the MAC arithmetic operation and the input address I_ADDR are transmitted from an external device to the receiving driver230, the receiving driver230may output the external command E_CMD and the input address I_ADDR to the command decoder250and the address latch260, respectively. The command decoder250may decode the external command E_CMD to generate and transmit the memory active signal ACT_M to the MAC command generator270. The address latch260receiving the input address I_ADDR may generate and transmit the bank selection signal BK_S to the MAC command generator270. The MAC command generator270may generate and output the MAC active signal RACTV in response to the memory active signal ACT_M and the bank selection signal BK_S. The MAC active signal RACTV may be transmitted to the first memory bank (BK0)211and the second memory bank (BK1)212. The first memory bank (BK0)211and the second memory bank (BK1)212may be activated by the MAC active signal RACTV. Next, referring toFIG.9, the command decoder250may generate and output the memory read signal READ_M with a logic “high(H)” level to the MAC command generator270. In addition, the address latch260may generate and output the bank selection signal BK_S with a logic “low(L)” level to the MAC command generator270. In response to the memory read signal READ_M with a logic “high(H)” level and the bank selection signal BK_S with a logic “low(L)” level, the MAC command generator270may generate and output the first MAC read signal MAC_RD_BK0with a logic “high(H)” level and the second MAC read signal MAC_RD_BK1with a logic “low(L)” level, as described with reference toFIG.4. The first MAC read signal MAC_RD_BK0with a logic “high(H)” level, together with the row/column address ADDR_R/ADDR_C, may be transmitted to the first memory bank (BK0)211. The second MAC read signal MAC_RD_BK1with a logic “low(L)” level, together with the row/column address ADDR_R/ADDR_C, may be transmitted to the second memory bank (BK1)212. The first data DA1may be read out of the first memory bank (BK0)211by the first MAC read signal MAC_RD_BK0with a logic “high(H)” level and may be transmitted to the first MAC operator (MAC0)220through the GIO line290. Next, referring toFIG.10, a logic level of the bank selection signal BK_S may change from a logic “low(L)” level into a logic “high(H)” level while the memory read signal READ_M maintains a logic “high(H)” level. In such a case, as described with reference toFIG.5, the MAC command generator270may generate and output the first MAC read signal MAC_RD_BK0with a logic “low(L)” level and the second MAC read signal MAC_RD_BK1with a logic “high(H)” level. The first MAC read signal MAC_RD_BK0with a logic “low(L)” level, together with the row/column address ADDR_R/ADDR_C, may be transmitted to the first memory bank (BK0)211. The second MAC read signal MAC_RD_BK1with a logic “high(H)” level, together with the row/column address ADDR_R/ADDR_C, may be transmitted to the second memory bank (BK1)212. The second data DA2may be read out of the second memory bank (BK1)212by the second MAC read signal MAC_RD_BK1with a logic “high(H)” level and may be transmitted to the first MAC operator (MAC0)220through the GIO line290. Next, referring toFIG.11, a logic level of the memory read signal READ_M transmitted from the command decoder250to the MAC command generator270may change from a logic “high(H)” level into a logic “low(L)” level. 
In addition, a logic level of the bank selection signal BK_S transmitted from the address latch260to the MAC command generator270may change from a logic “high(H)” level into a logic “low(L)” level. In such a case, the MAC command generator270may generate and output the first MAC input latch signal MAC_L1with a logic “high(H)” level and the second MAC input latch signal MAC_L2with a logic “low(L)” level. A point in time when the first MAC input latch signal MAC_L1with a logic “high(H)” level and the second MAC input latch signal MAC_L2with a logic “low(L)” level are output from the MAC command generator270may be determined by a delay time of the delay circuit (272ofFIG.5), as described with reference toFIG.5. The first MAC input latch signal MAC_L1with a logic “high(H)” level and the second MAC input latch signal MAC_L2with a logic “low(L)” level that are output from the MAC command generator270may be transmitted to the first MAC operator (MAC0)220. As described with reference toFIG.7, the first MAC operator (MAC0)220may perform a latch operation of the first data DA1. Next, referring toFIG.12, a logic level of the bank selection signal BK_S transmitted from the address latch260to the MAC command generator270may change from a logic “low(L)” level into a logic “high(H)” level while the memory read signal READ_M maintains a logic “low(L)” level. In such a case, the MAC command generator270may generate and output the first MAC input latch signal MAC_L1with a logic “low(L)” level and the second MAC input latch signal MAC_L2with a logic “high(H)” level. A point in time when the first MAC input latch signal MAC_L1with a logic “low(L)” level and the second MAC input latch signal MAC_L2with a logic “high(H)” level are output from the MAC command generator270may be determined by a delay time of the delay circuit (272ofFIG.5), as described with reference toFIG.5. The first MAC input latch signal MAC_L1with a logic “low(L)” level and the second MAC input latch signal MAC_L2with a logic “high(H)” level that are output from the MAC command generator270may be transmitted to the first MAC operator (MAC0)220. As described with reference toFIG.7, the first MAC operator (MAC0)220may perform a latch operation of the second data DA2. After the latch operations of the first and second data DA1and DA2terminate, the first MAC operator (MAC0)220may perform the MAC arithmetic operation and may generate the MAC result data DA_MAC. The MAC result data DA_MAC generated by the first MAC operator (MAC0)220may be input to the output latch223-1included in the first MAC operator (MAC0)220. Next, referring toFIG.13, the command decoder250may output and transmit the MAC arithmetic signal MAC with a logic “high(H)” level to the MAC command generator270. The MAC command generator270may generate and output the MAC output latch signal MAC_L3with a logic “high” level in response to the MAC arithmetic signal MAC with a logic “high(H)” level. The MAC output latch signal MAC_L3with a logic “high” level may be transmitted to the first MAC operator (MAC0)220. As described with reference toFIG.7, the output latch (223-1ofFIG.7) of the first MAC operator (MAC0)220may be synchronized with the MAC output latch signal MAC_L3with a logic “high” level to transfer the MAC result data DA_MAC that is output from the MAC circuit222of the first MAC operator (MAC0)220to the transfer gate (223-2ofFIG.7) of the first MAC operator (MAC0)220.
The MAC result data DA_MAC that is output from the output latch (223-1ofFIG.7) may be fed back to the addition logic circuit (222-2ofFIG.7) for the accumulative adding calculation. Next, referring toFIG.14, the command decoder250may output and transmit the result read signal READ_RST with a logic “high(H)” level to the MAC command generator270. The MAC command generator270may generate and output the MAC result latch signal MAC_L_RST with a logic “high” level in response to the result read signal READ_RST with a logic “high(H)” level. The MAC result latch signal MAC_L_RST with a logic “high” level may be transmitted to the first MAC operator (MAC0)220. As described with reference toFIG.7, the first MAC operator (MAC0)220may output the MAC result data DA_MAC to the GIO line290in response to the MAC result latch signal MAC_L_RST with a logic “high” level and may also reset the output latch (223-1ofFIG.7) included in the first MAC operator (MAC0)220in response to the MAC result latch signal MAC_L_RST with a logic “high” level. The MAC result data DA_MAC transmitted to the GIO line290may be output to an external device through the serializer/deserializer280and the data I/O circuit240. FIG.15is a timing diagram illustrating an operation of the PIM device200illustrated inFIG.3. Referring toFIG.15, at a first point in time “T1”, the MAC command generator270may be synchronized with a falling edge of a clock signal CLK to generate and output the first MAC read signal MAC_RD_BK0(R1) with a logic “high(H)” level. The first memory bank (BK0)211may be selected by the first MAC read signal MAC_RD_BK0(R1) with a logic “high(H)” level so that the first data DA1are read out of the first memory bank (BK0)211. At a second point in time “T2”, the MAC command generator270may be synchronized with a falling edge of the clock signal CLK to generate and output the second MAC read signal MAC_RD_BK1(R2) with a logic “high(H)” level. The second memory bank (BK1)212may be selected by the second MAC read signal MAC_RD_BK1(R2) with a logic “high(H)” level so that the second data DA2are read out of the second memory bank (BK1)212. At a third point in time “T3”, the MAC command generator270may be synchronized with a falling edge of the clock signal CLK to generate and output the MAC arithmetic signal MAC with a logic “high(H)” level. The first MAC operator (MAC0)220may perform the multiplying calculations and the adding calculations of the first and second data DA1and DA2to generate the MAC result data DA_MAC, in response to the MAC arithmetic signal MAC with a logic “high(H)” level. At a fourth point in time “T4”, the MAC command generator270may be synchronized with a falling edge of the clock signal CLK to generate and output the MAC result latch signal MAC_L_RST (RST) with a logic “high” level. The MAC result data DA_MAC generated by the first MAC operator (MAC0)220may be transmitted to the GIO line290by the MAC result latch signal MAC_L_RST (RST) with a logic “high” level. FIG.16is a block diagram illustrating another configuration of a PIM device300according to an embodiment of the present disclosure, andFIG.17illustrates an internal command signal I_CMD that is output from a command decoder350of the PIM device300and a MAC command signal MAC_CMD that is output from a MAC command generator370of the PIM device300.FIG.16illustrates only a first memory bank (BK0)311, a second memory bank (BK1)312, and a first MAC operator (MAC0)320constituting a first MAC unit among the plurality of MAC units.
However,FIG.16illustrates merely an example for simplification of the drawing. Accordingly, the following description for the first MAC unit may be equally applicable to the remaining MAC units. Referring toFIG.16, the PIM device300may be configured to include the first memory bank (BK0)311, the second memory bank (BK1)312, and the first MAC operator (MAC0)320. The PIM device300according to the present embodiment may include a GIO line390, a first bank input/output (BIO) line391, and a second BIO line392acting as data transmission lines. Data communication of the first memory bank (BK0)311, the second memory bank (BK1)312, and the first MAC operator (MAC0)320may be achieved through the GIO line390. Only the data transmission between the first memory bank (BK0)311and the first MAC operator (MAC0)320may be achieved through the first BIO line391, and only the data transmission between the second memory bank (BK1)312and the first MAC operator (MAC0)320may be achieved through the second BIO line392. Thus, the first MAC operator (MAC0)320may directly receive first data and second data from the first and second memory banks (BK0and BK1)311and312through the first BIO line391and the second BIO line392without using the GIO line390. The PIM device300may further include a receiving driver (RX)330, a data I/O circuit (DQ)340, the command decoder350, an address latch360, the MAC command generator370, and a serializer/deserializer (SER/DES)380. The command decoder350, the address latch360, the MAC command generator370, and the serializer/deserializer380may be disposed in the peripheral circuit PERI of the PIM device100illustrated inFIG.2. The receiving driver330may receive an external command E_CMD and an input address I_ADDR from an external device. The external device may denote a host or a controller coupled to the PIM device300. Hereinafter, it may be assumed that the external command E_CMD transmitted to the PIM device300is a command requesting the MAC arithmetic operation. That is, the PIM device300may perform the deterministic MAC arithmetic operation in response to the external command E_CMD. The data I/O circuit340may include a data I/O pad. The data I/O pad may be coupled with an data I/O line. The PIM device300communicates with the external device through the data I/O circuit340. The receiving driver330may separately output the external command E_CMD and the input address I_ADDR received from the external device. Data DA that is input to the PIM device300through the data I/O circuit340may be processed by the serializer/deserializer380and may be transmitted to the first memory bank (BK0)311and the second memory bank (BK1)312through the GIO line390of the PIM device300. The data DA that is output from the first memory bank (BK0)311, the second memory bank (BK1)312, and the first MAC operator (MAC0)320through the GIO line390may be processed by the serializer/deserializer380and may be output to the external device through the data I/O circuit340. The serializer/deserializer380may convert the data DA into parallel data if the data DA are serial data or may convert the data DA into serial data if the data DA are parallel data. For the data conversion, the serializer/deserializer380may include a serializer for converting parallel data into serial data and a deserializer for converting serial data into parallel data. The command decoder350may decode the external command E_CMD that is output from the receiving driver330to generate and output the internal command signal I_CMD. 
As illustrated inFIG.17, the internal command signal I_CMD that is output from the command decoder350may include first to third internal command signals. In an embodiment, the first internal command signal may be a memory active signal ACT_M, the second internal command signal may be a MAC arithmetic signal MAC, and the third internal command signal may be a result read signal READ_RST. The first to third internal command signals that are output from the command decoder350may be sequentially input to the MAC command generator370. In order to perform the deterministic MAC arithmetic operation of the PIM device300, the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST that are output from the command decoder350may be sequentially generated at predetermined points in time (or clocks). In an embodiment, the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST may have predetermined latencies, respectively. For example, the MAC arithmetic signal MAC may be generated after a first latency elapses from a point in time when the memory active signal ACT_M is generated, and the result read signal READ_RST may be generated after a third latency elapses from a point in time when the MAC arithmetic signal MAC is generated. No signal is generated by the command decoder350until a fourth latency elapses from a point in time when the result read signal READ_RST is generated. The first to fourth latencies may be predetermined and fixed. Thus, the host or the controller outputting the external command E_CMD may predict the points in time when the first to third internal command signals constituting the internal command signal I_CMD are generated by the command decoder350in advance at a point in time when the external command E_CMD is output from the host or the controller. That is, the host or the controller may predict a point in time (or a clock) when the MAC arithmetic operation terminates in the PIM device300after the external command E_CMD requesting the MAC arithmetic operation is transmitted from the host or the controller to the PIM device300, even without receiving any signals from the PIM device300. The address latch360may convert the input address I_ADDR that is output from the receiving driver330into a row/column address ADDR_R/ADDR_C to output the row/column address ADDR_R/ADDR_C. The row/column address ADDR_R/ADDR_C that is output from the address latch360may be transmitted to the first and second memory banks311and312. According to the present embodiment, the first data and the second data to be used for the MAC arithmetic operation may be simultaneously read out of the first and second memory banks (BK0and BK1)311and312, respectively. Thus, it may be unnecessary to generate a bank selection signal for selecting any one of the first and second memory banks311and312. In an embodiment, a point in time when the row/column address ADDR_R/ADDR_C is input to the first and second memory banks311and312may be a point in time when a MAC command (i.e., the MAC arithmetic signal MAC) requesting a data read operation for the first and second memory banks311and312for the MAC arithmetic operation is generated. The MAC command generator370may output the MAC command signal MAC_CMD in response to the internal command signal I_CMD that is output from the command decoder350.
As illustrated inFIG.17, the MAC command signal MAC_CMD that is output from the MAC command generator370may include first to fifth MAC command signals. In an embodiment, the first MAC command signal may be a MAC active signal RACTV, the second MAC command signal may be a MAC read signal MAC_RD_BK, the third MAC command signal may be a MAC input latch signal MAC_L1, the fourth MAC command signal may be a MAC output latch signal MAC_L3, and the fifth MAC command signal may be a MAC result latch signal MAC_L_RST. The MAC active signal RACTV may be generated based on the memory active signal ACT_M that is output from the command decoder350. The MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may be sequentially generated based on the MAC arithmetic signal MAC that is output from the command decoder350. That is, the MAC input latch signal MAC_L1may be generated at a point in time when a certain time elapses from a point in time when the MAC read signal MAC_RD_BK is generated. The MAC output latch signal MAC_L3may be generated at a point in time when a certain time elapses from a point in time when the MAC input latch signal MAC_L1is generated. Finally, the MAC result latch signal MAC_L_RST may be generated based on the result read signal READ_RST that is output from the command decoder350. The MAC active signal RACTV that is output from the MAC command generator370may control an activation operation for the first and second memory banks311and312. The MAC read signal MAC_RD_BK that is output from the MAC command generator370may control a data read operation for the first and second memory banks311and312. The MAC input latch signal MAC_L1that is output from the MAC command generator370may control an input data latch operation of the first MAC operator (MAC0)320. The MAC output latch signal MAC_L3that is output from the MAC command generator370may control an output data latch operation of the first MAC operator (MAC0)320. The MAC result latch signal MAC_L_RST that is output from the MAC command generator370may control an output operation of MAC result data of the first MAC operator (MAC0)320and a reset operation of the first MAC operator (MAC0)320. As described above, in order to perform the deterministic MAC arithmetic operation of the PIM device300, the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST that are output from the command decoder350may be sequentially generated at predetermined points in time (or clocks), respectively. Thus, the MAC active signal RACTV, the MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may also be generated and output from the MAC command generator370at predetermined points in time after the external command E_CMD is input to the PIM device300, respectively. That is, a time period from a point in time when the first and second memory banks311and312are activated by the MAC active signal RACTV until a point in time when the first MAC operator (MAC0)320is reset by the MAC result latch signal MAC_L_RST may be predetermined. FIG.18illustrates an example of a configuration of the MAC command generator370included in the PIM device300illustrated inFIG.16.
Referring toFIG.18, the MAC command generator370may sequentially receive the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST from the command decoder350. In addition, the MAC command generator370may sequentially generate and output the MAC active signal RACTV, the MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST. The MAC active signal RACTV, the MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may be output in series with certain time intervals. In an embodiment, the MAC command generator370may be configured to include an active signal generator371, a first delay circuit372, and a second delay circuit373. The active signal generator371may receive the memory active signal ACT_M to generate and output the MAC active signal RACTV. The MAC active signal RACTV that is output from the active signal generator371may be transmitted to the first and second memory banks311and312to activate the first and second memory banks311and312. The MAC command generator370may receive the MAC arithmetic signal MAC that is output from the command decoder350to output the MAC arithmetic signal MAC as the MAC read signal MAC_RD_BK. The first delay circuit372may receive the MAC arithmetic signal MAC and may delay the MAC arithmetic signal MAC by a first delay time DELAY_T1to generate and output the MAC input latch signal MAC_L1. The second delay circuit373may receive an output signal of the first delay circuit372and may delay the output signal of the first delay circuit372by a second delay time DELAY_T2to generate and output the MAC output latch signal MAC_L3. The MAC command generator370may generate the MAC result latch signal MAC_L_RST in response to the result read signal READ_RST that is output from the command decoder350. The MAC command generator370may generate and output the MAC active signal RACTV in response to the memory active signal ACT_M that is output from the command decoder350. Subsequently, the MAC command generator370may generate and output the MAC read signal MAC_RD_BK in response to the MAC arithmetic signal MAC that is output from the command decoder350. The MAC arithmetic signal MAC may be input to the first delay circuit372. The MAC command generator370may delay the MAC arithmetic signal MAC by a certain time determined by the first delay circuit372to generate and output an output signal of the first delay circuit372as the MAC input latch signal MAC_L1. The output signal of the first delay circuit372may be input to the second delay circuit373. The MAC command generator370may delay the MAC input latch signal MAC_L1by a certain time determined by the second delay circuit373to generate and output an output signal of the second delay circuit373as the MAC output latch signal MAC_L3. Subsequently, the MAC command generator370may generate and output the MAC result latch signal MAC_L_RST in response to the result read signal READ_RST that is output from the command decoder350. FIG.19illustrates input signals and output signals of the MAC command generator370illustrated inFIG.18with a timeline. InFIG.19, signals transmitted from the command decoder350to the MAC command generator370are illustrated in an upper dotted line box, and signals that are output from the MAC command generator370are illustrated in a lower dotted line box. 
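Before walking through the FIG.19 timeline, the chained-delay behavior of the FIG.18 generator described above can be sketched as follows. The delay values are illustrative placeholders; only the signal names and their ordering come from the description above.

```python
# Illustrative sketch of the FIG. 18 MAC command generator: the MAC arithmetic
# signal is passed through as MAC_RD_BK, the first delay circuit produces
# MAC_L1 after DELAY_T1, and the second delay circuit produces MAC_L3 after a
# further DELAY_T2. delay_t1 and delay_t2 are placeholder values.
def mac_command_schedule_370(t_mac, delay_t1=6, delay_t2=5):
    yield (t_mac, "MAC_RD_BK")                       # MAC read signal
    yield (t_mac + delay_t1, "MAC_L1")               # MAC input latch signal
    yield (t_mac + delay_t1 + delay_t2, "MAC_L3")    # MAC output latch signal

for clock, signal in mac_command_schedule_370(10):
    print(clock, signal)
# 10 MAC_RD_BK / 16 MAC_L1 / 21 MAC_L3
```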
Referring to FIGS. 18 and 19, at a first point in time “T1” of the timeline, the memory active signal ACT_M may be input to the MAC command generator 370 and the MAC command generator 370 may output the MAC active signal RACTV. At a second point in time “T2” when a certain time, for example, a first latency L1 elapses from the first point in time “T1”, the MAC arithmetic signal MAC with a logic “high” level may be input to the MAC command generator 370. In response to the MAC arithmetic signal MAC with a logic “high” level, the MAC command generator 370 may output the MAC read signal MAC_RD_BK with a logic “high” level. At a third point in time “T3” when a certain time elapses from the second point in time “T2”, a logic level of the MAC arithmetic signal MAC may change from a logic “high” level into a logic “low” level. At the third point in time “T3” when the first delay time DELAY_T1 elapses from the second point in time “T2”, the MAC command generator 370 may output the MAC input latch signal MAC_L1 with a logic “high” level. The first delay time DELAY_T1 may correspond to a delay time determined by the first delay circuit 372 illustrated in FIG. 18. The first delay time DELAY_T1 may be set to be different according to a logic design scheme of the first delay circuit 372. In an embodiment, the first delay time DELAY_T1 may be set to be equal to or greater than a second latency L2. At a fourth point in time “T4” when a certain time elapses from the third point in time “T3”, the MAC command generator 370 may output the MAC output latch signal MAC_L3 with a logic “high” level. The fourth point in time “T4” may be a moment when the second delay time DELAY_T2 elapses from the third point in time “T3”. The second delay time DELAY_T2 may correspond to a delay time determined by the second delay circuit 373 illustrated in FIG. 18. The second delay time DELAY_T2 may be set to be different according to a logic design scheme of the second delay circuit 373. In an embodiment, the second delay time DELAY_T2 may be set to be equal to or greater than a third latency L3. At a fifth point in time “T5” when a certain time, for example, a fourth latency L4 elapses from the fourth point in time “T4”, the result read signal READ_RST with a logic “high” level may be input to the MAC command generator 370. In response to the result read signal READ_RST with a logic “high” level, the MAC command generator 370 may output the MAC result latch signal MAC_L_RST with a logic “high” level, as described with reference to FIG. 18. In order to perform the deterministic MAC arithmetic operation, moments when the internal command signals ACT_M, MAC, and READ_RST generated by the command decoder 350 are input to the MAC command generator 370 may be fixed, and moments when the MAC command signals RACTV, MAC_RD_BK, MAC_L1, MAC_L3, and MAC_L_RST are output from the MAC command generator 370 in response to the internal command signals ACT_M, MAC, and READ_RST may also be fixed. Thus, all of the first latency L1 between the first point in time “T1” and the second point in time “T2”, the second latency L2 between the second point in time “T2” and the third point in time “T3”, the third latency L3 between the third point in time “T3” and the fourth point in time “T4”, and the fourth latency L4 between the fourth point in time “T4” and the fifth point in time “T5” may have fixed values. In an embodiment, the first latency L1 may be defined as a time it takes to activate both of the first and second memory banks based on the MAC active signal RACTV.
The second latency L2may be defined as a time it takes to read the first and second data out of the first and second memory banks (BK0and BK1)311and312based on the MAC read signals MAC_RD_BK and to input the first and second data DA1and DA2into the first MAC operator (MAC0)320. The third latency L3may be defined as a time it takes to latch the first and second data DA1and DA2in the first MAC operator (MAC0)320based on the MAC input latch signals MAC_L1and it takes the first MAC operator (MAC0)320to perform the MAC arithmetic operation of the first and second data. The fourth latency L4may be defined as a time it takes to latch the output data in the first MAC operator (MAC0)320based on the MAC output latch signal MAC_L3. FIG.20illustrates an example of a configuration of the first MAC operator (MAC0)320included in the PIM device300ofFIG.16. The first MAC operator (MAC0)320included in the PIM device300may have the same configuration as the first MAC operator (MAC0)220described with reference toFIG.7except for a signal applied to clock terminals of first and second input latches321-1and321-2constituting a data input circuit321. Thus, inFIG.20, the same reference numerals or the same reference symbols as used inFIG.7denote the same elements, and descriptions of the same elements as set forth with reference toFIG.7will be omitted hereinafter. Describing in detail the differences between the first MAC operator (MAC0)220and the first MAC operator (MAC0)320, in case of the first MAC operator (MAC0)220illustrated inFIG.7, the first input latch (221-1ofFIG.7) and the second input latch (221-2ofFIG.7) of the data input circuit (221ofFIG.7) may be synchronized with the first and second MAC input latch signals MAC_L1and MAC_L2, respectively, sequentially generated with a certain time interval to output the first data DA1and the second data DA2. In contrast, in case of the first MAC operator (MAC0)320, the MAC input latch signal MAC_L1may be input to both of the clock terminals of the first and second input latches321-1and321-2constituting a data input circuit321. Thus, both of the first and second input latches321-1and321-2may be synchronized with the MAC input latch signal MAC_L1to output the first data DA1and the second data DA2, respectively. Accordingly, the first MAC operator (MAC0)320may transmit the first and second data DA1and DA2to the MAC circuit222in parallel without any time interval between the first and second data DA1and DA2. As a result, the MAC arithmetic operation of the MAC circuit222may be quickly performed without any delay of data input time. FIGS.21to25are block diagrams illustrating operations of the PIM device300illustrated inFIG.16. InFIGS.21to25, the same reference numerals or the same reference symbols as used inFIG.16denote the same elements. First, referring toFIG.21, if the external command E_CMD requesting the MAC arithmetic operation and the input address I_ADDR are transmitted from an external device to the receiving driver330, the receiving driver330may output the external command E_CMD and the input address I_ADDR to the command decoder350and the address latch360, respectively. The command decoder350may decode the external command E_CMD to generate and transmit the memory active signal ACT_M to the MAC command generator370. The MAC command generator370may generate and output the MAC active signal RACTV in response to the memory active signal ACT_M. The MAC active signal RACTV may be transmitted to the first memory bank (BK0)311and the second memory bank (BK1)312. 
Both of the first memory bank (BK0)311and the second memory bank (BK1)312may be activated by the MAC active signal RACTV. Next, referring toFIG.22, the command decoder350may generate and output the MAC arithmetic signal MAC with a logic “high(H)” level to the MAC command generator370. In response to the MAC arithmetic signal MAC with a logic “high(H)” level, the MAC command generator370may generate and output the MAC read signal MAC_RD_BK with a logic “high(H)” level. The MAC read signal MAC_RD_BK with a logic “high(H)” level, together with the row/column address ADDR_R/ADDR_C, may be transmitted to the first memory bank (BK0)311and the second memory bank (BK1)312. The first data DA1may be read out of the first memory bank (BK0)311by the MAC read signal MAC_RD_BK with a logic “high(H)” level and may be transmitted to the first MAC operator (MAC0)320through the first BIO line391. In addition, the second data DA2may be read out of the second memory bank (BK1)312by the MAC read signal MAC_RD_BK with a logic “high(H)” level and may be transmitted to the first MAC operator (MAC0)320through the second BIO line392. Next, referring toFIG.23, a logic level of the MAC arithmetic signal MAC that is output from the command decoder350may change from a logic “high(H)” level into a logic “low(L)” level at a point in time when the first delay time DELAY_T1determined by the first delay circuit (372ofFIG.18) elapses from a point in time when the MAC read signal MAC_RD_BK is output from the MAC command generator370. The MAC command generator370may generate and output the MAC input latch signal MAC_L1with a logic “high(H)” level in response to the MAC arithmetic signal MAC with a logic “low(L)” level. The MAC input latch signal MAC_L1with a logic “high(H)” level may be transmitted to the first MAC operator (MAC0)320. The first MAC operator (MAC0)320may be synchronized with the MAC input latch signal MAC_L1with a logic “high(H)” level to perform a latch operation of the first and second data DA1and DA2that are output from the first and second memory banks (BK0and BK1)311and312. If the latch operation of the first and second data DA1and DA2terminates, the first MAC operator (MAC0)320may perform the MAC arithmetic operation and may generate the MAC result data DA_MAC. The MAC result data DA_MAC generated by the first MAC operator (MAC0)320may be input to the output latch (223-1ofFIG.20) included in the first MAC operator (MAC0)320. Next, referring toFIG.24, a logic level of the MAC arithmetic signal MAC that is output from the command decoder350may change from a logic “low(L)” level into a logic “high(H)” level at a point in time when the second delay time DELAY_T2determined by the second delay circuit (373ofFIG.18) elapses from a point in time when the MAC input latch signal MAC_L1with a logic “high(H)” level is output from the MAC command generator370. The MAC command generator370may generate and output the MAC output latch signal MAC_L3with a logic “high(H)” level in response to the MAC arithmetic signal MAC with a logic “high(H)” level. The MAC output latch signal MAC_L3with a logic “high(H)” level may be transmitted to the first MAC operator (MAC0)320. The output latch (223-1ofFIG.20) included in the first MAC operator (MAC0)320may be synchronized with the MAC output latch signal MAC_L3with a logic “high(H)” level to transfer the MAC result data DA_MAC generated by the MAC circuit (222ofFIG.20) to the transfer gate (223-2ofFIG.20) included in the first MAC operator (MAC0)320. 
The MAC result data DA_MAC that is output from the output latch (223-1ofFIG.20) may be fed back to the addition logic circuit (222-2ofFIG.20) for the accumulative adding calculation executed by the MAC circuit (222ofFIG.20). Next, referring toFIG.25, the command decoder350may output and transmit the result read signal READ_RST with a logic “high(H)” level to the MAC command generator370. The MAC command generator370may generate and output the MAC result latch signal MAC_L_RST with a logic “high” level in response to the result read signal READ_RST with a logic “high(H)” level. The MAC result latch signal MAC_L_RST with a logic “high” level may be transmitted to the first MAC operator (MAC0)320. As described with reference toFIG.20, the first MAC operator (MAC0)320may output the MAC result data DA_MAC to the GIO line390in response to the MAC result latch signal MAC_L_RST with a logic “high” level and may also reset the output latch (223-1ofFIG.20) included in the first MAC operator (MAC0)320in response to the MAC result latch signal MAC_L_RST with a logic “high” level. The MAC result data DA_MAC transmitted to the GIO line390may be output to an external device through the serializer/deserializer380and the data I/O line340. Although not shown in the drawings, the MAC result data DA_MAC that is output from the first MAC operator (MAC0)320may be written into the first memory bank (BK0)311through the first BIO line391without using the GIO line390or may be written into the second memory bank (BK1)312through the second BIO line392without using the GIO line390. FIG.26is a timing diagram illustrating an operation of the PIM device300illustrated inFIG.16. Referring toFIG.26, at a first point in time “T1”, the MAC command generator370may be synchronized with a falling edge of a clock signal CLK to generate and output the MAC read signal MAC_RD_BK (R) with a logic “high(H)” level. The first and second memory banks (BK0and BK1)311and312may be selected by the MAC read signal MAC_RD_BK (R) with a logic “high(H)” level so that the first data DA1and the second data DA2are read out of the first and second memory banks (BK0and BK1)311and312. If a certain time elapses from a point in time when first data DA1and the second data DA2are read out, the first MAC operator (MAC0)320may perform the MAC arithmetic operation of the first and second data DA1and DA2to generate the MAC result data DA_MAC. At a second point in time “T2”, the MAC command generator370may be synchronized with a falling edge of the clock signal CLK to generate and output the MAC result latch signal MAC_L_RST (RST) with a logic “high” level. The MAC result data DA_MAC may be transmitted to the GIO line390by the MAC result latch signal MAC_L_RST (RST) with a logic “high” level. FIG.27illustrates a disposal structure indicating placement of memory banks and MAC operators included in a PIM device400according to another embodiment of the present disclosure. Referring toFIG.27, the PIM device400may include memory devices such as a plurality of memory banks (e.g., first to sixteenth memory banks BK0, . . . , and BK15), processing devices such as a plurality of MAC operators (e.g., first to sixteenth MAC operators MAC0, . . . , and MAC15), and a global buffer GB. A core circuit may be disposed to be adjacent to the memory banks BK0, . . . , and BK15. The core circuit may include X-decoders XDECs and Y-decoders/IO circuits YDEC/IOs. The memory banks BK0, . . . 
, and BK15and the core circuit may have the same configuration as described with reference toFIG.2. Thus, descriptions of the memory banks BK0, . . . , and BK15and the core circuit will be omitted hereinafter. The MAC operators MAC0, . . . , and MAC15may be disposed to be allocated to the memory banks BK0, . . . , and BK15, respectively. That is, in the PIM device400, two or more memory banks do not share one MAC operator with each other. Thus, the number of the MAC operators MAC0, . . . , and MAC15included in the PIM device400may be equal to the number of the memory banks BK0, . . . , and BK15included in the PIM device400. One of the memory banks BK0, . . . , and BK15together with one of the MAC operators MAC0, . . . , and MAC15may constitute one MAC unit. For example, the first memory bank BK0and the first MAC operator MAC0may constitute a first MAC unit, and the second memory bank BK1and the second MAC operator MAC1may constitute a second MAC unit. Similarly, the sixteenth memory bank BK15and the sixteenth MAC operator MAC15may constitute a sixteenth MAC unit. In each of the first to sixteenth MAC units, the MAC operator may receive first data DA1to be used for the MAC arithmetic operation from the respective memory bank. The PIM device400may further include a peripheral circuit PERI. The peripheral circuit PERI may be disposed in a region other than an area in which the memory banks BK0, BK1, . . . , and BK15; the MAC operators MAC0, . . . , and MAC15; and the core circuit are disposed. The peripheral circuit PERI may be configured to include a control circuit relating to a command/address signal, a control circuit relating to input/output of data, and a power supply circuit. The peripheral circuit PERI of the PIM device400may have substantially the same configuration as the peripheral circuit PERI of the PIM device100illustrated inFIG.2. A difference between the peripheral circuit PERI of the PIM device400and the peripheral circuit PERI of the PIM device100is that the global buffer GB is disposed in the peripheral circuit PERI of the PIM device400. The global buffer GB may receive second data DA2to be used for the MAC operation from an external device and may store the second data DA2. The global buffer GB may output the second data DA2to each of the MAC operators MAC0, . . . , and MAC15through a GIO line. In the event that the PIM device400performs neural network calculation, for example, an arithmetic operation in a deep learning process, the first data DA1may be weight data and the second data DA2may be vector data. The PIM device400according to the present embodiment may operate in a memory mode or a MAC arithmetic mode. In the memory mode, the PIM device400may operate to perform the same operations as general memory devices. The memory mode may include a memory read operation mode and a memory write operation mode. In the memory read operation mode, the PIM device400may perform a read operation for reading out data from the memory banks BK0, BK1, . . . , and BK15to output the read data, in response to an external request. In the memory write operation mode, the PIM device400may perform a write operation for storing data provided by an external device into the memory banks BK0, BK1, . . . , and BK15, in response to an external request. In the MAC arithmetic mode, the PIM device400may perform the MAC arithmetic operation using the MAC operators MAC0, . . . , and MAC15. 
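The data flow of this arrangement, in which each MAC operator multiplies weight data held in its own memory bank by vector data broadcast from the global buffer GB, can be pictured with the following illustrative sketch. The function name, the bank contents, and the vector values are assumptions chosen only to show the flow, not values or interfaces disclosed for the PIM device 400.

# Illustrative sketch of the bank-per-MAC arrangement of FIG. 27: every MAC
# operator combines weight data (first data DA1) from its own bank with
# vector data (second data DA2) broadcast from the global buffer.
def mac_arithmetic_mode(bank_weights, global_buffer_vector):
    results = []
    for weights in bank_weights:              # one MAC operator per memory bank
        acc = 0
        for da1, da2 in zip(weights, global_buffer_vector):
            acc += da1 * da2                  # multiply-accumulate inside the MAC operator
        results.append(acc)                   # MAC result data for this bank
    return results

banks = [[1, 2], [3, 4]]                      # assumed weight data in BK0 and BK1
vector = [10, 20]                             # assumed vector data in the global buffer
print(mac_arithmetic_mode(banks, vector))     # [50, 110]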
In the PIM device400, the MAC arithmetic operation may be performed in a deterministic way, and the deterministic MAC arithmetic operation of the PIM device400will be described more fully hereinafter. Specifically, the PIM device400may perform the read operation of the first data DA1for each of the memory banks BK0, . . . , and BK15and the read operation of the second data DA2for the global buffer GB, for the MAC arithmetic operation in the MAC arithmetic mode. In addition, each of the MAC operators MAC0, . . . , and MAC15may perform the MAC arithmetic operation of the first data DA1and the second data DA2to store a result of the MAC arithmetic operation into the memory bank or to output the result of the MAC arithmetic operation to an external device. In some cases, the PIM device400may perform a data write operation for storing data to be used for the MAC arithmetic operation into the memory banks before the data that is read operation for the MAC arithmetic operation is performed in the MAC arithmetic mode. The operation mode of the PIM device400according to the present embodiment may be determined by a command which is transmitted from a host or a controller to the PIM device400. In an embodiment, if a first external command requesting a read operation or a write operation for the memory banks BK0, BK1, . . . , and BK15is transmitted from the host or the controller to the PIM device400, the PIM device400may perform the data that is read operation or the data write operation in the memory mode. Alternatively, if a second external command requesting the MAC arithmetic operation is transmitted from the host or the controller to the PIM device400, the PIM device400may perform the data that is read operation and the MAC arithmetic operation. The PIM device400may perform the deterministic MAC arithmetic operation. Thus, the host or the controller may always predict a point in time (or a clock) when the MAC arithmetic operation terminates in the PIM device400from a point in time when an external command requesting the MAC arithmetic operation is transmitted from the host or the controller to the PIM device400. Because the timing is predictable, no operation for informing the host or the controller of a status of the MAC arithmetic operation is required while the PIM device400performs the deterministic MAC arithmetic operation. In an embodiment, a latency during which the MAC arithmetic operation is performed in the PIM device400may be set to a fixed value for the deterministic MAC arithmetic operation. FIG.28is a block diagram illustrating an example of a detailed configuration of a PIM device500corresponding to the PIM device400illustrated inFIG.27.FIG.28illustrates only a first memory bank (BK0)511and a first MAC operator (MAC0)520constituting a first MAC unit among a plurality of MAC units. However,FIG.28illustrates merely an example for simplification of the drawing. Accordingly, the following description for the first MAC unit may be equally applicable to the remaining MAC units. Referring toFIG.28, the PIM device500may be configured to include the first memory bank (BK0)511and the first MAC operator (MAC0)520constituting the first MAC unit as well as a global buffer595. The PIM device500may further include a GIO line590and a BIO line591used as data transmission lines. The first memory bank (BK0)511and the first MAC operator (MAC0)520may communicate with the global buffer595through the GIO line590. 
Only the data transmission between the first memory bank (BK0)511and the first MAC operator (MAC0)520may be achieved through the BIO line591. The BIO line591is dedicated specifically for data transmission between the first memory bank (BK0)511and the first MAC operator (MAC0)520. Thus, the first MAC operator (MAC0)520may receive the first data DA1to be used for the MAC arithmetic operation from the first memory bank (BK0)511through the BIO line591and may receive the second data DA2to be used for the MAC arithmetic operation from the global buffer595through the GIO line590. The PIM device500may include a receiving driver (RX)530, a data I/O circuit (DQ)540, a command decoder550, an address latch560, a MAC command generator570, and a serializer/deserializer (SER/DES)580. The command decoder550, the address latch560, the MAC command generator570, and the serializer/deserializer580may be disposed in the peripheral circuit PERI of the PIM device400illustrated inFIG.27. The receiving driver530may receive an external command E_CMD and an input address I_ADDR from an external device. The external device may denote a host or a controller coupled to the PIM device500. Hereinafter, it may be assumed that the external command E_CMD transmitted to the PIM device500is a command requesting the MAC arithmetic operation. That is, the PIM device500may perform the deterministic MAC arithmetic operation in response to the external command E_CMD. The data I/O circuit540may provide a means through which the PIM device500communicates with the external device. The receiving driver530may separately output the external command E_CMD and the input address I_ADDR received from the external device. Data DA that is input to the PIM device500through the data I/O circuit540may be processed by the serializer/deserializer580and may be transmitted to the first memory bank (BK0)511and the global buffer595through the GIO line590of the PIM device500. The data DA that is output from the first memory bank (BK0)511and the first MAC operator (MAC0)520through the GIO line590may be processed by the serializer/deserializer580and may be output to the external device through the data I/O circuit540. The serializer/deserializer580may convert the data DA into parallel data if the data DA are serial data or may convert the data DA into serial data if the data DA are parallel data. For the data conversion, the serializer/deserializer580may include a serializer converting parallel data into serial data and a deserializer converting serial data into parallel data. The command decoder550may decode the external command E_CMD that is output from the receiving driver530to generate and output the internal command signal I_CMD. The internal command signal I_CMD that is output from the command decoder550may be the same as the internal command signal I_CMD described with reference toFIG.17. That is, the internal command signal I_CMD may include a first internal command signal corresponding to the memory active signal ACT_M, a second internal command signal corresponding to the MAC arithmetic signal MAC, and a third internal command signal corresponding to the result read signal READ_RST. The first to third internal command signals that are output from the command decoder550may be sequentially input to the MAC command generator570. 
As described with reference toFIG.17, the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST that is output from the command decoder550may be sequentially generated at predetermined points in time (or clocks) in order to perform the deterministic MAC arithmetic operation of the PIM device500. Thus, the host or the controller outputting the external command E_CMD may predict the points in time when the first to third internal command signals constituting the internal command signal I_CMD are generated by the command decoder550in advance at a point in time when the external command E_CMD is output from the host or the controller. That is, the host or the controller may predict a point in time (or a clock) when the MAC arithmetic operation terminates in the PIM device500after the external command E_CMD requesting the MAC arithmetic operation is transmitted from the host or the controller to the PIM device500, even without receiving any signals from the PIM device500. The address latch560may convert the input address I_ADDR that is output from the receiving driver530into a row/column address ADDR_R/ADDR_C to output the row/column address ADDR_R/ADDR_C. The row/column address ADDR_R/ADDR_C that is output from the address latch560may be transmitted to the first memory bank (BK0)511. According to the present embodiment, the first data and the second data to be used for the MAC arithmetic operation may be simultaneously read out of the first memory bank (BK0)511and the global buffer595, respectively. Thus, it may be unnecessary to generate a bank selection signal for selecting the first memory bank511. A point in time when the row/column address ADDR_R/ADDR_C is input to the first memory bank511may be a point in time when a MAC command (i.e., the MAC arithmetic signal MAC) requesting a data that is read operation for the first memory bank511for the MAC arithmetic operation is generated. The MAC command generator570may output the MAC command signal MAC_CMD in response to the internal command signal I_CMD that is output from the command decoder550. The MAC command signal MAC_CMD that is output from the MAC command generator570may be the same as the MAC command signal MAC_CMD described with reference toFIG.17. That is, the MAC command signal MAC_CMD that is output from the MAC command generator570may include the MAC active signal RACTV corresponding to the first MAC command signal, the MAC read signal MAC_RD_BK corresponding to the second MAC command signal, the MAC input latch signal MAC_L1corresponding to the third MAC command signal, the MAC output latch signal MAC_L3corresponding to the fourth MAC command signal, and the MAC result latch signal MAC_L_RST corresponding to the fifth MAC command signal. The MAC active signal RACTV may be generated based on the memory active signal ACT_M that is output from the command decoder550. The MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may be sequentially generated based on the MAC arithmetic signal MAC that is output from the command decoder550. That is, the MAC input latch signal MAC_L1may be generated at a point in time when a certain time elapses from a point in time when the MAC read signal MAC_RD_BK is generated. The MAC output latch signal MAC_L3may be generated at a point in time when a certain time elapses from a point in time when the MAC input latch signal MAC_L1is generated. 
Finally, the MAC result latch signal MAC_L_RST may be generated based on the result read signal READ_RST that is output from the command decoder 550. The MAC active signal RACTV that is output from the MAC command generator 570 may control an activation operation for the first memory bank 511. The MAC read signal MAC_RD_BK that is output from the MAC command generator 570 may control a data read operation for the first memory bank 511 and the global buffer 595. The MAC input latch signal MAC_L1 that is output from the MAC command generator 570 may control an input data latch operation of the first MAC operator (MAC0) 520. The MAC output latch signal MAC_L3 that is output from the MAC command generator 570 may control an output data latch operation of the first MAC operator (MAC0) 520. The MAC result latch signal MAC_L_RST that is output from the MAC command generator 570 may control an output operation of MAC result data of the first MAC operator (MAC0) 520 and a reset operation of the first MAC operator (MAC0) 520. As described above, in order to perform the deterministic MAC arithmetic operation of the PIM device 500, the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST that are output from the command decoder 550 may be sequentially generated at predetermined points in time (or clocks), respectively. Thus, the MAC active signal RACTV, the MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may also be generated and output from the MAC command generator 570 at predetermined points in time after the external command E_CMD is input to the PIM device 500, respectively. That is, a time period from a point in time when the first memory bank 511 is activated by the MAC active signal RACTV until a point in time when the first MAC operator (MAC0) 520 is reset by the MAC result latch signal MAC_L_RST may be predetermined. The MAC command generator 570 of the PIM device 500 according to the present embodiment may have the same configuration as described with reference to FIG. 18. In addition, the input signals and the output signals of the MAC command generator 570 may be input to and output from the MAC command generator 570 at the same points in time as described with reference to FIG. 19. As described with reference to FIGS. 18 and 19, the MAC command generator 570 may sequentially receive the memory active signal ACT_M, the MAC arithmetic signal MAC, and the result read signal READ_RST from the command decoder 550. In addition, the MAC command generator 570 may sequentially generate and output the MAC active signal RACTV, the MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST. The MAC active signal RACTV, the MAC read signal MAC_RD_BK, the MAC input latch signal MAC_L1, the MAC output latch signal MAC_L3, and the MAC result latch signal MAC_L_RST may be output from the MAC command generator 570 in series with certain time intervals. The MAC command generator 570 may generate and output the MAC active signal RACTV in response to the memory active signal ACT_M that is output from the command decoder 550. Subsequently, the MAC command generator 570 may generate and output the MAC read signal MAC_RD_BK in response to the MAC arithmetic signal MAC that is output from the command decoder 550.
The MAC command generator570may delay the MAC arithmetic signal MAC by a certain time determined by the first delay circuit (372ofFIG.18) to generate and output the MAC input latch signal MAC_L1. The MAC command generator570may delay the MAC input latch signal MAC_L1by a certain time determined by the second delay circuit (373ofFIG.18) to generate and output the MAC output latch signal MAC_L3. Subsequently, the MAC command generator570may generate and output the MAC result latch signal MAC_L_RST in response to the result read signal READ_RST that is output from the command decoder550. FIG.29is a block diagram illustrating an operation of the PIM device500illustrated inFIG.28. InFIG.29, the same reference numerals or the same reference symbols as used inFIG.16denote the same elements. The operation of the PIM device500according to the present embodiment may be similar to the operation of the PIM device300described with reference toFIG.16except a transmission process of the first and second data DA1and DA2that are input to the first MAC operator (MAC0)520. Thus, the operation of the PIM device500executed before the first and second data DA1and DA2are transmitted to the first MAC operator (MAC0)520may be the same as the operation of the PIM device300described with reference toFIG.21. As illustrated inFIG.29, when the MAC arithmetic signal MAC with a logic “high(H)” level is transmitted from the command decoder550to the MAC command generator570, the MAC command generator570may generate and output the MAC read signal MAC_RD_BK with a logic “high(H)” level. The MAC read signal MAC_RD_BK with a logic “high(H)” level, together with the row/column address ADDR_R/ADDR_C, may be transmitted to the first memory bank (BK0)511. In such a case, a global buffer read signal B_R may also be transmitted to the global buffer595. The first data DA1may be read out of the first memory bank (BK0)511by the MAC read signal MAC_RD_BK with a logic “high(H)” level and may be transmitted to the first MAC operator (MAC0)520through the BIO line591. In addition, the second data DA2may be read out of the global buffer595by the global buffer read signal B_R and may be transmitted to the first MAC operator (MAC0)520through the GIO line590. The operation of the PIM device500executed after the first and second data DA1and DA2are transmitted to the first MAC operator (MAC0)520may be the same as the operation of the PIM device300described with reference toFIGS.23to25. FIG.30is a timing diagram illustrating an operation of the PIM device500illustrate inFIG.28. Referring toFIG.30, at a first point in time “T1”, the MAC command generator570may be synchronized with a falling edge of a clock signal CLK to generate and output the MAC read signal MAC_RD_BK (R) with a logic “high(H)” level. The first memory bank (BK0)511may be selected by the MAC read signal MAC_RD_BK (R) with a logic “high(H)” level so that the first data DA1are read out of the first memory bank (BK0)511. In addition, the second data DA2may be read out of the global buffer595. If a certain time elapses from a point in time when the first and second data DA1and DA2are read out of the first memory bank (BK0)511and the global buffer595, the first MAC operator (MAC0)520may perform the MAC arithmetic operation of the first and second data DA1and DA2to generate the MAC result data DA_MAC. 
At a second point in time “T2”, the MAC command generator570may be synchronized with a falling edge of the clock signal CLK to generate and output the MAC result latch signal MAC_L_RST (RST). The MAC result data DA_MAC may be transmitted to an external device through the GIO line590or to the first memory bank (BK0)511through the BIO line591, by the MAC result latch signal MAC_L_RST (RST). FIG.31Ais a diagram illustrating a configuration and an operation method of a PIM device600A in accordance with an embodiment of the present disclosure. Referring toFIG.31A, the PIM device600A may perform an arithmetic operation. In particular, the PIM device600A may perform an element-wise arithmetic operation. The element-wise arithmetic operation may mean an operation of calculating respective elements of two matrices with the same size. For example, an element-wise multiplication operation may be performed as follows. The PIM device600A may multiply an element ‘1’ of the first row of a first matrix A[0:7] and an element ‘2’ of the first row of a second matrix B[0:7] to output a multiplication result of an element ‘2’ that is seen in the first row of a third matrix Y[0:7]. The PIM device600A may multiply an element ‘2’ of the second row of the first matrix A[0:7] and an element ‘3’ of the second row of the second matrix B[0:7] to output a multiplication result of an element ‘6’ that is seen in the second row of the third matrix Y[0:7]. The PIM device600A may multiply an element ‘3’ of the third row of the first matrix A[0:7] and an element ‘4’ of the third row of the second matrix B[0:7] to output a multiplication result of an element ‘12’ that is seen in the third row of the third matrix Y[0:7]. The PIM device600A may multiply an element ‘4’ of the fourth row of the first matrix A[0:7] and an element ‘5’ of the fourth row of the second matrix B[0:7] to output a multiplication result of an element ‘20’ that is seen in the fourth row of the third matrix Y[0:7]. In the same manner, the PIM device600A may multiply elements ‘5,’ ‘6,’ ‘7,’ and ‘8’ of fifth to eighth rows of the first matrix A[0:7] and elements ‘6,’ ‘7,’ ‘8,’ and ‘9’ of fifth to eighth rows of the second matrix B[0:7], respectively, to output multiplication results of elements ‘30,’ ‘42,’ ‘56,’ and ‘72,’ respectively, seen in the fifth to eighth rows of the third matrix Y[0:7]. For the sake of clarity in explanation, it is illustrated that each of the first to third matrices A[0:7], B[0:7] and Y[0:7] includes only elements of a plurality of rows. However, the spirit of the present disclosure may be applied to cases in which each of the first to third matrices A[0:7], B[0:7], and Y[0:7] include elements of a plurality of columns or a plurality of rows and columns. Hereinafter, the elements of the first to eighth rows may be described as first to eighth elements, respectively. The PIM device600A may include a plurality of MAC units. One MAC unit may include a plurality of storage regions and an MAC operator MAC_A. The storage region may be a memory bank that stores data. The plurality of storage regions may include a plurality of memory banks. The MAC operator MAC_A may be coupled to the plurality of memory banks and may perform an arithmetic operation on data that is output from the plurality of memory banks. The MAC operator MAC_A may store the result data of the arithmetic operation in the plurality of memory banks. 
For example, in order to perform the element-wise multiplication operation, one MAC operator MAC_A may be coupled to at least three memory banks. The at least three memory banks and the MAC operator MAC_A may configure one MAC unit. In FIG. 31A, first to fourth memory banks BK0, BK1, BK2, and BK3 are illustrated. The first to third memory banks BK0, BK1, and BK2 and the MAC operator MAC_A may configure one MAC unit, or the first to fourth memory banks BK0, BK1, BK2, and BK3 and the MAC operator MAC_A may configure one MAC unit. Each of the first to fourth memory banks BK0, BK1, BK2, and BK3 may include a plurality of rows and a plurality of columns, and a plurality of memory cells may be coupled to points at which the plurality of rows and the plurality of columns intersect with each other. In order to perform the element-wise multiplication operation, the PIM device 600A may store data, corresponding to the first to eighth elements ‘1,’ ‘2,’ ‘3,’ ‘4,’ ‘5,’ ‘6,’ ‘7,’ and ‘8’ of the first matrix A[0:7], in the first memory bank BK0. The PIM device 600A may store data, corresponding to the first to eighth elements ‘2,’ ‘3,’ ‘4,’ ‘5,’ ‘6,’ ‘7,’ ‘8,’ and ‘9’ of the second matrix B[0:7], in the second memory bank BK1. The PIM device 600A may simultaneously read data that is stored in the first and second memory banks BK0 and BK1, and may provide the read data to the MAC operator MAC_A. The PIM device 600A may control the data that corresponds to the pluralities of elements of the first and second matrices A[0:7] and B[0:7] to be sequentially output from the first and second memory banks BK0 and BK1, and may control elements with the same order to be simultaneously output. During an operation that is performed based on a single command signal, the PIM device 600A may simultaneously read elements with the same order (that is, a pair of elements with the same order) among the elements of the first and second matrices A[0:7] and B[0:7]. For example, during a first operation that is performed based on the single command signal, the PIM device 600A may simultaneously output data that corresponds to the first elements ‘1’ and ‘2’ of the first and second matrices A[0:7] and B[0:7] among the data that is stored in the first and second memory banks BK0 and BK1. Thereafter, during a second operation that is performed based on the single command signal, the PIM device 600A may simultaneously output data that corresponds to the second elements ‘2’ and ‘3’ of the first and second matrices A[0:7] and B[0:7] among the data that is stored in the first and second memory banks BK0 and BK1. The PIM device 600A may control data that corresponds to the respective third to eighth elements ‘3’ and ‘4,’ ‘4’ and ‘5,’ ‘5’ and ‘6,’ ‘6’ and ‘7,’ ‘7’ and ‘8,’ and ‘8’ and ‘9’ of the first and second matrices A[0:7] and B[0:7] to be sequentially output from the first and second memory banks BK0 and BK1. The MAC operator MAC_A may perform an arithmetic operation on data that is output from the first and second memory banks BK0 and BK1. The MAC operator MAC_A may multiply data that is output from the first and second memory banks BK0 and BK1. The MAC operator MAC_A may sequentially multiply data that corresponds to elements with the same order and is output from the first and second memory banks BK0 and BK1.
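The pairwise read-and-multiply sequence described above amounts to an element-wise product of the two stored matrices. The short sketch below reproduces the numerical example of FIG. 31A in software; it is only an illustration of the data flow, with list variables standing in for the memory banks.

# Illustrative element-wise multiplication matching the example of FIG. 31A.
a = [1, 2, 3, 4, 5, 6, 7, 8]                  # first matrix A[0:7], stored in BK0
b = [2, 3, 4, 5, 6, 7, 8, 9]                  # second matrix B[0:7], stored in BK1
y = [da1 * da2 for da1, da2 in zip(a, b)]     # MAC operator MAC_A, one same-order pair at a time
print(y)                                      # [2, 6, 12, 20, 30, 42, 56, 72], written to BK2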
The MAC operator MAC_A may receive data, corresponding to the first elements ‘1’ and ‘2’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0 and BK1, and may generate arithmetic data by multiplying the data that corresponds to the first elements ‘1’ and ‘2.’ The arithmetic data may be data that corresponds to the first element ‘2’ of the third matrix Y[0:7]. The MAC operator MAC_A may receive data, corresponding to the second elements ‘2’ and ‘3’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0 and BK1, and may generate arithmetic data by multiplying the data that corresponds to the second elements ‘2’ and ‘3.’ The arithmetic data may be data that corresponds to the second element ‘6’ of the third matrix Y[0:7]. The MAC operator MAC_A may receive data, corresponding to the third elements ‘3’ and ‘4’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0 and BK1, and may generate arithmetic data by multiplying the data that corresponds to the third elements ‘3’ and ‘4.’ The arithmetic data may be data that corresponds to the third element ‘12’ of the third matrix Y[0:7]. The MAC operator MAC_A may receive data, corresponding to the fourth elements ‘4’ and ‘5’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0 and BK1, and may generate arithmetic data by multiplying the data that corresponds to the fourth elements ‘4’ and ‘5.’ The arithmetic data may be data that corresponds to the fourth element ‘20’ of the third matrix Y[0:7]. In the same manner, the MAC operator MAC_A may sequentially receive data that corresponds to the fifth to eighth elements ‘5’ and ‘6,’ ‘6’ and ‘7,’ ‘7’ and ‘8,’ and ‘8’ and ‘9’ of the first and second matrices A[0:7] and B[0:7], and may generate respective arithmetic data by multiplying the data that corresponds to the fifth to eighth elements ‘5’ and ‘6,’ ‘6’ and ‘7,’ ‘7’ and ‘8,’ and ‘8’ and ‘9.’ The arithmetic data may be data that corresponds to the fifth to eighth elements ‘30,’ ‘42,’ ‘56,’ and ‘72,’ respectively, of the third matrix Y[0:7]. The MAC operator MAC_A may provide the arithmetic data to the third memory bank BK2, and the arithmetic data may be written into the third memory bank BK2. The third memory bank BK2 may sequentially receive the arithmetic data, corresponding to the first to eighth elements ‘2,’ ‘6,’ ‘12,’ ‘20,’ ‘30,’ ‘42,’ ‘56,’ and ‘72’ of the third matrix Y[0:7], from the MAC operator MAC_A, and the arithmetic data may be sequentially stored in the third memory bank BK2. The PIM device 600A may complete the element-wise arithmetic operation by writing the arithmetic data to the third memory bank BK2. The PIM device 600A may store elements of the first to third matrices A[0:7], B[0:7], and Y[0:7] in rows, respectively, with the same order of the first to third memory banks BK0, BK1, and BK2. The PIM device 600A may store elements with the same order among the elements of the first to third matrices A[0:7], B[0:7], and Y[0:7] in columns, respectively, with the same order of the first to third memory banks BK0, BK1, and BK2. For example, when the elements of the first matrix A[0:7] are stored in a first row of the first memory bank BK0, the elements of the second matrix B[0:7] may be stored in a first row of the second memory bank BK1, and the elements of the third matrix Y[0:7] may be stored in a first row of the third memory bank BK2.
When the first element ‘1’ of the first matrix A[0:7] is stored in a first column that is coupled to the first row of the first memory bank BK0, the first element ‘2’ of the second matrix B[0:7] may be stored in a first column that is coupled to the first row of the second memory bank BK1, and the first element ‘2’ of the third matrix Y[0:7] may be stored in a first column that is coupled to the first row of the third memory bank BK2. When the second element ‘2’ of the first matrix A[0:7] is stored in a second column that is coupled to the first row of the first memory bank BK0, the second element ‘3’ of the second matrix B[0:7] may be stored in a second column that is coupled to the first row of the second memory bank BK1, and the second element ‘6’ of the third matrix Y[0:7] may be stored in a second column that is coupled to the first row of the third memory bank BK2. In the same manner, the third to eighth elements of the first to third matrices A[0:7], B[0:7] and Y[0:7] may be stored in third to eighth columns, respectively, coupled to the first rows of the first to third memory banks BK0, BK1and BK2. Each of the first to eighth columns may include a plurality of columns. FIG.31Bis a diagram illustrating a configuration and an operation method of a PIM device600B in accordance with an embodiment of the present disclosure. Referring toFIG.31B, the PIM device600B may perform an arithmetic operation. In particular, the PIM device600B may perform an element-wise arithmetic operation. The element-wise arithmetic operation may mean an operation of calculating respective elements of two matrices with the same size. For example, an element-wise addition operation may be performed as follows. The PIM device600B may add an element ‘1’ of the first row of a first matrix A[0:7] and an element ‘2’ of the first row of a second matrix B[0:7] to output an addition result of an element ‘3’ that is seen in the first row of a third matrix Y[0:7]. The PIM device600B may add an element ‘2’ of the second row of the first matrix A[0:7] and an element ‘3’ of the second row of the second matrix B[0:7] to output an addition result of an element ‘5’ that is seen in the second row of the third matrix Y[0:7]. The PIM device600B may add an element ‘3’ of the third row of the first matrix A[0:7] and an element ‘4’ of the third row of the second matrix B[0:7] to output an addition result of an element ‘7’ that is seen in the third row of the third matrix Y[0:7]. The PIM device600B may add an element ‘4’ of the fourth row of the first matrix A[0:7] and an element ‘5’ of the fourth row of the second matrix B[0:7] to output an addition result of an element ‘9’ that is seen in the fourth row of the third matrix Y[0:7]. In the same manner, the PIM device600B may add elements ‘5,’ ‘6,’ ‘7’ and ‘8’ of fifth to eighth rows of the first matrix A[0:7] and elements ‘6,’ ‘7,’ ‘8’ and ‘9’ of fifth to eighth rows of the second matrix B[0:7], respectively, to output addition results of elements ‘11,’ ‘13,’ ‘15,’ and ‘17,’ respectively, seen in the fifth to eighth rows of the third matrix Y[0:7]. For the sake of clarity in explanation, it is illustrated that each of the first to third matrices A[0:7], B[0:7] and Y[0:7] includes only elements of a plurality of rows. However, the spirit of the present disclosure may be applied to cases in which each of the first to third matrices A[0:7], B[0:7] and Y[0:7] includes elements of a plurality of columns or a plurality of rows and columns. 
Hereinafter, the elements of the first to eighth rows may be described as first to eighth elements, respectively. The PIM device 600B may include a plurality of MAC units. One MAC unit may include a plurality of storage regions and an MAC operator MAC_B. The storage region may be a memory bank for storing data. The plurality of storage regions may include a plurality of memory banks. The MAC operator MAC_B may be coupled to the plurality of memory banks and may perform an arithmetic operation on data that is output from the plurality of memory banks. The MAC operator MAC_B may store result data of the arithmetic operation in the plurality of memory banks. For example, in order to perform the element-wise addition operation, one MAC operator MAC_B may be coupled to at least three memory banks. The at least three memory banks and the MAC operator MAC_B may configure one MAC unit. In FIG. 31B, first to fourth memory banks BK0, BK1, BK2, and BK3 are illustrated. The first to third memory banks BK0, BK1, and BK2 and the MAC operator MAC_B may configure one MAC unit, or the first to fourth memory banks BK0, BK1, BK2, and BK3 and the MAC operator MAC_B may configure one MAC unit. Each of the first to fourth memory banks BK0, BK1, BK2, and BK3 may include a plurality of rows and a plurality of columns, and a plurality of memory cells may be coupled to points at which the plurality of rows and the plurality of columns intersect with each other. In order to perform the element-wise addition operation, the PIM device 600B may store data, corresponding to the first to eighth elements ‘1,’ ‘2,’ ‘3,’ ‘4,’ ‘5,’ ‘6,’ ‘7,’ and ‘8’ of the first matrix A[0:7], in the first memory bank BK0. The PIM device 600B may store data, corresponding to the first to eighth elements ‘2,’ ‘3,’ ‘4,’ ‘5,’ ‘6,’ ‘7,’ ‘8,’ and ‘9’ of the second matrix B[0:7], in the second memory bank BK1. The PIM device 600B may simultaneously read data that is stored in the first and second memory banks BK0 and BK1, and may provide the read data to the MAC operator MAC_B. The PIM device 600B may control the data that corresponds to the pluralities of elements of the first and second matrices A[0:7] and B[0:7] to be sequentially output from the first and second memory banks BK0 and BK1, and may control elements with the same order to be simultaneously output. During an operation that is performed based on a single command signal, the PIM device 600B may simultaneously read elements with the same order (that is, a pair of elements with the same order) among the elements of the first and second matrices A[0:7] and B[0:7]. For example, during a first operation that is performed based on the single command signal, the PIM device 600B may simultaneously output data that corresponds to the first elements ‘1’ and ‘2’ of the first and second matrices A[0:7] and B[0:7] among the data that is stored in the first and second memory banks BK0 and BK1. Thereafter, during a second operation that is performed based on the single command signal, the PIM device 600B may simultaneously output data that corresponds to the second elements ‘2’ and ‘3’ of the first and second matrices A[0:7] and B[0:7] among the data that is stored in the first and second memory banks BK0 and BK1. The PIM device 600B may control data that corresponds to the respective third to eighth elements ‘3’ and ‘4,’ ‘4’ and ‘5,’ ‘5’ and ‘6,’ ‘6’ and ‘7,’ ‘7’ and ‘8,’ and ‘8’ and ‘9’ of the first and second matrices A[0:7] and B[0:7] to be sequentially output from the first and second memory banks BK0 and BK1.
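The element-wise addition of FIG. 31B follows the same data flow, with the MAC operator MAC_B adding rather than multiplying each same-order pair. The following lines are again only an illustration, with list variables standing in for the memory banks.

# Illustrative element-wise addition matching the example of FIG. 31B.
a = [1, 2, 3, 4, 5, 6, 7, 8]                  # first matrix A[0:7], stored in BK0
b = [2, 3, 4, 5, 6, 7, 8, 9]                  # second matrix B[0:7], stored in BK1
y = [da1 + da2 for da1, da2 in zip(a, b)]     # MAC operator MAC_B, one same-order pair at a time
print(y)                                      # [3, 5, 7, 9, 11, 13, 15, 17], written to BK2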
The MAC operator MAC_B may perform an arithmetic operation on data that is output from the first and second memory banks BK0and BK1. The MAC operator MAC_B may add data that is output from the first and second memory banks BK0and BK1. The MAC operator MAC_B may sequentially add data which correspond to elements with the same order and are output from the first and second memory banks BK0and BK1. The MAC operator MAC_B may receive data, corresponding to the first elements ‘1’ and ‘2’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0and BK1, and may generate arithmetic data by adding the data that corresponds to the first elements ‘1’ and ‘2.’ The arithmetic data may be data that corresponds to the first element ‘3’ of the third matrix Y[0:7]. The MAC operator MAC_B may receive data, corresponding to the second elements ‘2’ and ‘3’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0and BK1, and may generate arithmetic data by adding the data that corresponds to the second elements ‘2’ and ‘3.’ The arithmetic data may be data that corresponds to the second element ‘5’ of the third matrix Y[0:7]. The MAC operator MAC_B may receive data, corresponding to the third elements ‘3’ and ‘4’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0and BK1, and may generate arithmetic data by adding the data that corresponds to the third elements ‘3’ and ‘4.’ The arithmetic data may be data that corresponds to the third element ‘7’ of the third matrix Y[0:7]. The MAC operator MAC_B may receive data, corresponding to the fourth elements ‘4’ and ‘5’ of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0and BK1, and may generate arithmetic data by adding the data that corresponds to the fourth elements ‘4’ and ‘5.’ The arithmetic data may be data that corresponds to the fourth element ‘9’ of the third matrix Y[0:7]. In the same manner, the MAC operator MAC_B may sequentially receive data that corresponds to the fifth to eighth elements ‘5’ and ‘6,’ ‘6’ and ‘7,’ ‘7’ and ‘8’ and ‘8’ and ‘9’ of the first and second matrices A[0:7] and B[0:7], and may generate respective arithmetic data by adding the data that corresponds to the fifth to eighth elements ‘5’ and ‘6,’ ‘6’ and ‘7,’ ‘7’ and ‘8’ and ‘8’ and ‘9.’ The arithmetic data may be data that corresponds to the fifth to eighth elements ‘11,’ ‘13,’ ‘15’ and ‘17,’ respectively, of the third matrix Y[0:7]. The MAC operator MAC_B may provide the arithmetic data to the third memory bank BK2, and the arithmetic data may be written into the third memory bank BK2. The third memory bank BK2may sequentially receive the arithmetic data, corresponding to the first to eighth elements ‘3,’ ‘5,’ ‘7,’ ‘9,’ ‘11,’ ‘13,’ ‘15’ and ‘17’ of the third matrix Y[0:7], from the MAC operator MAC_B, and the arithmetic data may be sequentially stored in the third memory bank BK2. The PIM device600B may complete the element-wise arithmetic operation by writing the arithmetic data to the third memory bank BK2. The PIM device600B may store elements of the first to third matrices A[0:7], B[0:7] and Y[0:7] in rows, respectively, with the same order of the first to third memory banks BK0, BK1and BK2. The PIM device600B may store elements with the same order among the elements of the first to third matrices A[0:7], B[0:7] and Y[0:7] in columns, respectively, with the same order of the first to third memory banks BK0, BK1and BK2. 
For example, when the elements of the first matrix A[0:7] are stored in a first row of the first memory bank BK0, the elements of the second matrix B[0:7] may be stored in a first row of the second memory bank BK1, and the elements of the third matrix Y[0:7] may be stored in a first row of the third memory bank BK2. When the first element ‘1’ of the first matrix A[0:7] is stored in a first column that is coupled to the first row of the first memory bank BK0, the first element ‘2’ of the second matrix B[0:7] may be stored in a first column that is coupled to the first row of the second memory bank BK1, and the first element ‘3’ of the third matrix Y[0:7] may be stored in a first column that is coupled to the first row of the third memory bank BK2. When the second element ‘2’ of the first matrix A[0:7] is stored in a second column that is coupled to the first row of the first memory bank BK0, the second element ‘3’ of the second matrix B[0:7] may be stored in a second column that is coupled to the first row of the second memory bank BK1, and the second element ‘5’ of the third matrix Y[0:7] may be stored in a second column that is coupled to the first row of the third memory bank BK2. In the same manner, the third to eighth elements of the first to third matrices A[0:7], B[0:7], and Y[0:7] may be stored in third to eighth columns, respectively, coupled to the first rows of the first to third memory banks BK0, BK1, and BK2. Each of the first to eighth columns may include a plurality of columns. FIG. 32 is a flow chart illustrating an operation method of a PIM device in accordance with an embodiment of the present disclosure. The operation method of the PIM devices 600A and 600B will be described below with reference to FIG. 32 together with FIGS. 31A and 31B. When the PIM devices 600A and 600B perform an element-wise arithmetic operation, at step S321, the PIM devices 600A and 600B may receive data that corresponds to the elements of the first matrix A[0:7], and may write the data to a first target memory bank. The first target memory bank may be the first memory bank BK0. The PIM devices 600A and 600B may activate the first target memory bank and enable a specific row (e.g., a first row) of the first target memory bank. The PIM devices 600A and 600B may access a first column that is coupled to the first row, and may write the first element ‘1’ of the first matrix A[0:7] to the first column. At step S322, the PIM devices 600A and 600B may determine whether all elements of the first matrix A[0:7] have been written into the first target memory bank. If all the elements of the first matrix A[0:7] have not been written (No of the step S322), the steps S321 and S322 may be repeatedly performed, and the PIM devices 600A and 600B may sequentially write data, corresponding to elements of the first matrix A[0:7], to the first target memory bank. The PIM devices 600A and 600B may sequentially access second to eighth columns that are coupled to the first row of the first target memory bank, and may sequentially write data, corresponding to the second to eighth elements of the first matrix A[0:7], to the second to eighth columns, respectively. If all the elements of the first matrix A[0:7] have been written (Yes of the step S322), the process may proceed to step S323. At the step S323, the PIM devices 600A and 600B may receive data that corresponds to the elements of the second matrix B[0:7], and may write the data to a second target memory bank. The second target memory bank may be the second memory bank BK1.
The PIM devices600A and600B may activate the second target memory bank and enable a specific row (e.g., a first row) of the second target memory bank. The PIM devices600A and600B may access a first column that is coupled to the first row, and may write the first element ‘2’ of the second matrix B[0:7] to the first column. At step S324, the PIM devices600A and600B may determine whether all elements of the second matrix B[0:7] have been written into the second target memory bank. If all the elements of the second matrix B[0:7] have not been written (No of the step S324), the steps S323and S324may be repeatedly performed, and the PIM devices600A and600B may sequentially write data, corresponding to elements of the second matrix B[0:7], to the second target memory bank. The PIM devices600A and600B may sequentially access second to eighth columns that are coupled to the first row of the second target memory bank, and may sequentially write data, corresponding to the second to eighth elements of the second matrix B[0:7], to the second to eighth columns, respectively. If all the elements of the second matrix B[0:7] have been written (Yes of the step S324), the process may proceed to step S331. At the step S331, the PIM devices600A and600B may simultaneously read data, corresponding to elements with the same order among the elements of the first and second matrices A[0:7] and B[0:7], from the first and second target memory banks. The PIM devices600A and600B may activate the first and second target memory banks and enable specific rows of the first and second target memory banks. The first and second target memory banks may be simultaneously activated or sequentially activated. The PIM devices600A and600B may activate a third target memory bank and enable a specific row of the third target memory bank. The third target memory bank may be the third memory bank BK2. The third target memory bank may be activated simultaneously with the first and second target memory banks, or may be sequentially activated after the first and second target memory banks are activated. The PIM devices600A and600B may simultaneously access columns with the same order of the first and second target memory banks, and may simultaneously read data, corresponding to elements with the same order among the elements of the first and second matrices A[0:7] and B[0:7], from the columns with the same order. For example, the PIM devices600A and600B may simultaneously access first columns that are coupled to first rows of the first and second memory banks BK0and BK1, and may simultaneously read data, corresponding to the first elements ‘1’ and ‘2’ of the first and second matrices A[0:7] and B[0:7], stored in the first columns. At step S332, the PIM devices600A and600B may generate arithmetic data by performing an arithmetic operation on the data that is read from the first and second target memory banks. The PIM device600A may generate the arithmetic data by multiplying data that is read from the first and second memory banks BK0and BK1. The PIM device600B may generate the arithmetic data by adding data that is read from the first and second memory banks BK0and BK1. The arithmetic data, as a result of calculating the data, corresponding to the first elements of the first and second matrices A[0:7] and B[0:7], by the PIM devices600A and600B, may be the first element of the third matrix Y[0:7]. 
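As a rough illustration of the flow described in steps S321 to S332 (the remaining steps S333 to S335, described next, repeat the read/compute/write sequence for each remaining element pair), the following Python sketch models the behavior; the function name and the op parameter are assumptions made for readability.

# Minimal behavioral sketch of the flow of FIG. 32: write the first matrix
# to the first bank, write the second matrix to the second bank, then read
# element pairs with the same order, compute, and write to the third bank.
def element_wise(op, a_elements, b_elements):
    bank0, bank1, bank2 = [], [], []

    # S321/S322: write every element of the first matrix to the first bank.
    for a in a_elements:
        bank0.append(a)

    # S323/S324: write every element of the second matrix to the second bank.
    for b in b_elements:
        bank1.append(b)

    # S331/S332 (repeated per S333-S335): read pairs, compute, write result.
    for a, b in zip(bank0, bank1):
        bank2.append(op(a, b))
    return bank2

# PIM device 600A multiplies, PIM device 600B adds:
Y_mul = element_wise(lambda a, b: a * b, range(1, 9), range(2, 10))
Y_add = element_wise(lambda a, b: a + b, range(1, 9), range(2, 10))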
At step S333, the PIM devices600A and600B may determine whether data that corresponds to all the elements of the first and second matrices A[0:7] and B[0:7] have been read. If data that corresponds to all the elements have not been read (No of the step S333), the steps S331to S333may be repeatedly performed. The PIM devices600A and600B may sequentially read data, corresponding to the second to eighth elements of the first and second matrices A[0:7] and B[0:7], from the first and second memory banks BK0and BK1, and may generate respective arithmetic data by performing arithmetic operations on the read data. The arithmetic data may be the second to eighth elements, respectively, of the third matrix Y[0:7]. If data that corresponds to all the elements have been read (Yes of the step S333), the process may proceed to step S335to be described later. Step S334may be performed in parallel with the step S333. At the step S334, the PIM devices600A and600B may provide the arithmetic data generated at the step S332to the third target memory bank, and may write the arithmetic data to the third target memory bank. At step S335, the PIM devices600A and600B may determine whether arithmetic data for all the elements of the first and second matrices A[0:7] and B[0:7] (that is, all the elements of the third matrix Y[0:7]) have been written into the third target memory bank. If arithmetic data that corresponds to all the elements of the third matrix Y[0:7] have not been written into the third target memory bank (No of the step S335), the steps S334and S335may be repeatedly performed. Each time arithmetic data are sequentially generated at the step S332, the PIM devices600A and600B may sequentially write the arithmetic data to the third target memory bank. In the third memory bank BK2, the arithmetic data may be stored in a row and columns with the same orders as rows and columns in which the elements of the first and second matrices A[0:7] and B[0:7] are stored in the first and second memory banks BK0and BK1. Arithmetic data generated by calculating the first elements of the first and second matrices A[0:7] and B[0:7] (that is, the first element of the third matrix Y[0:7]) may be stored in a first column that is coupled to a first row of the third target memory bank. Arithmetic data generated by calculating the second to eighth elements of the first and second matrices A[0:7] and B[0:7] (that is, the second to eighth elements of the third matrix Y[0:7]) may be stored in second to eighth columns, respectively, coupled to the first row of the third target memory bank. If arithmetic data for all the elements have been written into the third target memory bank (Yes of the step S335), the element-wise arithmetic operation of the PIM devices600A and600B may be ended. FIG.33is a diagram illustrating a configuration of a PIM device700A in accordance with an embodiment of the present disclosure. Referring toFIG.33, the PIM device700A may include components for performing an element-wise multiplication operation among element-wise arithmetic operations. The PIM device700A may include an MAC unit. The MAC unit may include a plurality of memory banks and an MAC operator MAC_A. The MAC unit may include a first memory bank BK0, a second memory bank BK1, a third memory bank BK2and a fourth memory bank BK3. However, the number of memory banks included in the MAC unit is not limited thereto, and the number of memory banks included in the MAC unit may be three or more. 
Each of the first to fourth memory banks BK0, BK1, BK2and BK3may include a Y-decoder/I/O circuit YDEC/IO. The first and third memory banks BK0and BK2may share one X-decoder XDEC, and the second and fourth memory banks BK1and BK3may share one X-decoder XDEC. Each of the first to fourth memory banks BK0, BK1, BK2and BK3may be accessed through the X-decoder XDEC and the Y-decoder/I/O circuit YDEC/IO. The first memory bank BK0may be accessed based on a first bank access control signal CASP<0> and a bank column address signal CA<0:4>. The first bank access control signal CASP<0> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0. The second memory bank BK1may be accessed based on a second bank access control signal CASP<1> and the bank column address signal CA<0:4>. The second bank access control signal CASP<1> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1. The third memory bank BK2may be accessed based on a third bank access control signal CASP<2> and the bank column address signal CA<0:4>. The third bank access control signal CASP<2> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the third memory bank BK2. The fourth memory bank BK3may be accessed based on a fourth bank access control signal CASP<3> and the bank column address signal CA<0:4>. The fourth bank access control signal CASP<3> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the fourth memory bank BK3. In the MAC unit, it may be prescribed that data that corresponds to elements of first and second matrices are stored in the first and second memory banks BK0and BK1, respectively. In the MAC unit, it may be prescribed that arithmetic data generated through an element-wise arithmetic operation on the first and second matrices (i.e., data that corresponds to elements of a third matrix) are stored in the third memory bank BK2. The MAC operator MAC_A may be coupled to at least the first to third memory banks BK0, BK1and BK2. The MAC operator MAC_A may be coupled even to the fourth memory bank BK3. The MAC operator MAC_A may be coupled to the first to third memory banks BK0, BK1and BK2through bank I/O lines791,792and793. The MAC operator MAC_A may be coupled to the first memory bank BK0through a first bank I/O line791. The MAC operator MAC_A may be coupled to the second memory bank BK1through a second bank I/O line792. The MAC operator MAC_A may be coupled to the third memory bank BK2through a third bank I/O line793. The MAC operator MAC_A may receive data, output from the first and second memory banks BK0and BK1, through the first and second bank I/O lines791and792, and may output arithmetic data, generated by an arithmetic operation, to the third memory bank BK2through the third bank I/O line793. The MAC operator MAC_A may perform an arithmetic operation on data that is output from the first and second memory banks BK0and BK1. In general, the MAC operator MAC_A may perform both multiplication and addition calculations. In order to allow the PIM device700A to perform an element-wise multiplication operation, the MAC operator MAC_A may perform only a multiplication calculation on data that is output from the first and second memory banks BK0and BK1. For example, the bank column address signal CA<0:4> may be a 5-bit signal, and one element may be mapped as 16-bit data. 
During a single write operation or a single read operation of the PIM device 700A, the PIM device 700A may write 256-bit data to the first and second memory banks BK0 and BK1 or read 256-bit data from the first and second memory banks BK0 and BK1, based on the bank column address signal CA<0:4>. Accordingly, the PIM device 700A may perform an element-wise arithmetic operation on a total of 16 pairs of matrices. When the PIM device 700A performs an element-wise arithmetic operation on two matrices, 16-bit data that corresponds to one element of each of the first and second matrices may be written into the first and second memory banks BK0 and BK1 through a single write operation, and the remaining 240-bit data may be written as 0. Among the 256 bits that are output from the first and second memory banks BK0 and BK1 during a single read operation, 16-bit data may be data to which one element of each of the two matrices is respectively mapped, and the remaining 240-bit data may be 0. However, the number of bits of data for mapping one element and the total number of bits of data to be stored in and output from the first and second memory banks BK0 and BK1 or to be stored in and output from the third memory bank BK2 may be variously changed. The PIM device 700A may include a column control circuit 770A which controls the MAC unit to perform an element-wise arithmetic operation. The column control circuit 770A may generate various control signals so that the MAC unit of the PIM device 700A may perform an element-wise arithmetic operation. The column control circuit 770A may receive a calculation signal EWMUL and a column address signal ADDR_C<0:n> (n is an arbitrary integer), and may generate an arithmetic operation signal MUL_OP, the bank access control signals CASP<0:3> and the bank column address signal CA<0:4> based on the calculation signal EWMUL and the column address signal ADDR_C<0:n>. The column control circuit 770A may enable the first bank access control signal CASP<0> and the second bank access control signal CASP<1> among the bank access control signals CASP<0:3> based on the calculation signal EWMUL. When the calculation signal EWMUL is enabled, the column control circuit 770A may enable the arithmetic operation signal MUL_OP, and may enable the first and second bank access control signals CASP<0> and CASP<1> together. The column control circuit 770A may output at least a part of the column address signal ADDR_C<0:n> as the bank column address signal CA<0:4>. For example, the bank column address signal CA<0:4> may be a 5-bit signal. The MAC operator MAC_A may receive the arithmetic operation signal MUL_OP from the column control circuit 770A. The MAC operator MAC_A may generate a delayed bank access control signal CASP_M<2> based on the arithmetic operation signal MUL_OP and at least one of the first and second bank access control signals CASP<0> and CASP<1>. The MAC operator MAC_A may generate a delayed column address signal CA_M<0:4> based on the bank column address signal CA<0:4>. The MAC operator MAC_A may provide the delayed bank access control signal CASP_M<2> and the delayed column address signal CA_M<0:4> to the third memory bank BK2. The third memory bank BK2 may be accessed based on the delayed bank access control signal CASP_M<2> and the delayed column address signal CA_M<0:4>.
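A minimal Python sketch of the column control behavior described above follows; the dictionary-based signal representation and the function name are modeling assumptions, not the device interface.

# Illustrative sketch of the column control circuit 770A: when the
# calculation signal EWMUL is enabled, MUL_OP and the first and second bank
# access control signals CASP<0> and CASP<1> are enabled together, and part
# of the column address signal ADDR_C is output as the 5-bit CA<0:4>.
def column_control_770a(ewmul: bool, addr_c: int) -> dict:
    casp = [False] * 4                    # CASP<0:3>
    mul_op = False
    if ewmul:
        mul_op = True                     # enable the arithmetic operation
        casp[0] = True                    # access the first memory bank BK0
        casp[1] = True                    # access the second memory bank BK1
    ca = addr_c & 0x1F                    # lower 5 bits used as CA<0:4>
    return {"MUL_OP": mul_op, "CASP": casp, "CA": ca}

# One column access moves 256 bits; with 16-bit elements that is 16 element
# slots per access, of which a single element pair may occupy 16 bits.
signals = column_control_770a(ewmul=True, addr_c=0b00011)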
When the PIM device700A performs an element-wise multiplication operation, the third memory bank BK2may be accessed based on the delayed bank access control signal CASP_M<2> and the delayed column address signal CA_M<0:4> instead of the third bank access control signal CASP<2> and the bank column address signal CA<0:4>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may generate a first data enable signal DEN<0> based on the first bank access control signal CASP<0>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may generate the first data enable signal DEN<0> by delaying the first bank access control signal CASP<0>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may provide the first data enable signal DEN<0> to the MAC operator MAC_A. The Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may generate a second data enable signal DEN<1> based on the second bank access control signal CASP<1>. The Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may generate the second data enable signal DEN<1> by delaying the second bank access control signal CASP<1>. The Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may provide the second data enable signal DEN<1> to the MAC operator MAC_A. The MAC operator MAC_A may further receive the first and second data enable signals DEN<0> and DEN<1>. The MAC operator MAC_A may generate the delayed bank access control signal CASP_M<2> based on the arithmetic operation signal MUL_OP and at least one of the first and second data enable signals DEN<0> and DEN<1>. The MAC operator MAC_A may generate the delayed column address signal CA_M<0:4> based on the arithmetic operation signal MUL_OP, at least one of the first and second data enable signals DEN<0> and DEN<1> and the bank column address signal CA<0:4>. The PIM device700A may further include a receiving driver (RX)730, a data I/O circuit (DQ)740, a command decoder (CMD DECODER)750, an address latch760, and a serializer/deserializer (SER/DES)780. The PIM device700A may include the same or similar components as or to those of the PIM device200illustrated inFIG.2, and repeated descriptions for the same or similar components will be omitted herein. The receiving driver730may receive an external command signal E_CMD and an input address signal I_ADDR from an external device. The receiving driver730may provide the external command signal E_CMD to the command decoder750, and may provide the input address signal I_ADDR to the address latch760. The data I/O circuit740may be coupled to a data I/O line. The PIM device700A may communicate with the external device through the data I/O circuit740. When the external command signal E_CMD has information for performing an element-wise arithmetic operation, the command decoder750may generate the calculation signal EWMUL by decoding the external command signal E_CMD. For example, when the external command signal E_CMD has information for performing an element-wise multiplication operation, the command decoder750may generate the calculation signal EWMUL by decoding the external command signal E_CMD. When the external command signal E_CMD has information for performing an active operation, the command decoder750may generate an active signal ACT by decoding the external command signal E_CMD. When the external command signal E_CMD has information for performing a write operation, the command decoder750may generate a write signal WT by decoding the external command signal E_CMD. 
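The command decoding described above can be sketched as a simple mapping; the string command encoding below is an assumption made only for illustration.

# Illustrative sketch of the command decoder 750 mapping: depending on the
# information carried by the external command signal E_CMD, one of the
# calculation signal EWMUL, the active signal ACT or the write signal WT is
# generated.
def command_decoder_750(e_cmd: str) -> dict:
    return {
        "EWMUL": e_cmd == "ELEMENT_WISE_MUL",
        "ACT":   e_cmd == "ACTIVE",
        "WT":    e_cmd == "WRITE",
    }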
The active signal ACT may be a signal for enabling a specific row of a memory bank selected among the first to fourth memory banks BK0, BK1, BK2and BK3. The write signal WT may be a signal for writing data to a memory bank selected among the first to fourth memory banks BK0, BK1, BK2and BK3. The write signal WT may be provided to the column control circuit770. The column control circuit770may generate the bank access control signals CASP<0:3> and the bank column address signal CA<0:4> based on the write signal WT and the column address signal ADDR_C<0:n>. The address latch760may generate a row address signal ADDR_R and the column address signal ADDR_C<0:n> based on the input address signal I_ADDR. The row address signal ADDR_R may be an address signal for selecting a specific row of a selected memory bank during the active operation. The column address signal ADDR_C<0:n> may be an address signal for selecting a specific column that is coupled to an enabled row. The X-decoders XDEC may receive the active signal ACT and the row address signal ADDR_R, and may enable specific rows of the first to fourth memory banks BK0, BK1, BK2and BK3, based on the active signal ACT and the row address signal ADDR_R. The serializer/deserializer780may be coupled to a global I/O line790. The global I/O line790may be coupled to the first to fourth memory banks BK0, BK1, BK2, and BK3and the MAC operator MAC_A. The serializer/deserializer780may receive data that is output from at least one of the first to fourth memory banks BK0, BK1, BK2and BK3and the MAC operator MAC_A and transmitted through the global I/O line790, may generate data DA by serializing the received data, and may output the data DA to the external device through the data I/O circuit740. The serializer/deserializer780may deserialize data DA received from the external device through the data I/O circuit740, and may output the deserialized data through the global I/O line790. The deserialized data may be transmitted to at least one of the first to fourth memory banks BK0, BK1, BK2and BK3and the MAC operator MAC_A through the global input/output line790. FIG.34is a diagram illustrating at least a part of components of the column control circuit770A illustrated inFIG.33. Referring toFIG.34, the column control circuit770A may include an arithmetic operation signal generation circuit810A and an access signal generation circuit820A. The arithmetic operation signal generation circuit810A may receive the calculation signal EWMUL, and may generate the arithmetic operation signal MUL_OP based on the calculation signal EWMUL. The arithmetic operation signal generation circuit810A may further receive a reset signal RST and an idle signal IDLE. The reset signal RST may be a signal which is enabled to initialize internal circuits of the PIM device700A when the PIM device700A is powered up or booted up. The idle signal IDLE may be a signal which is enabled when the PIM device700A is in an idle state in which the PIM device700A does not perform any operation. The arithmetic operation signal generation circuit810A may generate the arithmetic operation signal MUL_OP based on the calculation signal EWMUL, the reset signal RST and the idle signal IDLE. The arithmetic operation signal generation circuit810A may enable the arithmetic operation signal MUL_OP when the calculation signal EWMUL is enabled in a state in which the reset signal RST and the idle signal IDLE are disabled. 
The arithmetic operation signal generation circuit810A may disable the arithmetic operation signal MUL_OP when one of the reset signal RST and the idle signal IDLE is enabled in a state in which the arithmetic operation signal MUL_OP is enabled. The arithmetic operation signal generation circuit810A may be configured by a NOR type RS latch. The arithmetic operation signal generation circuit810A may include a first NOR gate811A and a second NOR gate812A. A first input terminal of the first NOR gate811A may receive the reset signal RST, a second input terminal of the first NOR gate811A may receive the idle signal IDLE, and a third input terminal of the first NOR gate811A may receive a signal output from an output terminal of the second NOR gate812A. The arithmetic operation signal MUL_OP may be output through an output terminal of the first NOR gate811A. A first input terminal of the second NOR gate812A may receive the arithmetic operation signal MUL_OP, and a second input terminal of the second NOR gate812A may receive the calculation signal EWMUL. The output terminal of the second NOR gate812A may be coupled to the third input terminal of the first NOR gate811A. When the calculation signal EWMUL is enabled to a logic high level in a state in which the reset signal RST and the idle signal IDLE are disabled to logic low levels, a signal with a logic low level may be input to the third input terminal of the first NOR gate811A, and thus, the arithmetic operation signal MUL_OP may be enabled to a logic high level. In a state in which the arithmetic operation signal MUL_OP is enabled to a logic high level, when at least one of the reset signal RST and the idle signal IDLE is enabled to a logic high level, the arithmetic operation signal MUL_OP may be disabled to a logic low level. The access signal generation circuit820A may receive the calculation signal EWMUL, and may generate the first and second bank access control signals CASP<0> and CASP<1> based on the calculation signal EWMUL. When the calculation signal EWMUL is enabled, the access signal generation circuit820A may enable both the first and second bank access control signals CASP<0> and CASP<1>. By simultaneously enabling the first and second bank access control signals CASP<0> and CASP<1>, the access signal generation circuit820A may cause the first and second memory banks BK0and BK1to be simultaneously accessed. FIG.35is a diagram illustrating a configuration of an arithmetic circuit900among components of the MAC operator MAC_A illustrated inFIG.33. Referring toFIG.35, the arithmetic circuit900may perform a multiplication-accumulative addition calculation on inputted data, and may output a multiplication-accumulative addition calculation result. The arithmetic circuit900may include a plurality of multipliers, a plurality of adders and an accumulator. Each of the plurality of multipliers may receive allocated data, and the number of the plurality of multipliers may vary depending on the number of bits of the allocated data. For example, the MAC operator MAC_A may include 16 multipliers to each perform an arithmetic operation on 16 elements. 
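The NOR-type RS latch formed by the first and second NOR gates 811A and 812A can be modeled at the logic level as follows; a single evaluation pass per input change is sufficient for the sequences shown, and the class name is illustrative.

# Gate-level sketch of the latch: MUL_OP is set when EWMUL pulses while RST
# and IDLE are low, and is cleared when either RST or IDLE goes high.
def nor(*inputs):
    return not any(inputs)

class MulOpLatch:
    def __init__(self):
        self.mul_op = False               # output of the first NOR gate 811A

    def evaluate(self, ewmul, rst, idle):
        # Second NOR gate 812A: low when MUL_OP or EWMUL is high.
        node = nor(self.mul_op, ewmul)
        # First NOR gate 811A: MUL_OP is high only when RST, IDLE and the
        # feedback node are all low.
        self.mul_op = nor(rst, idle, node)
        return self.mul_op

latch = MulOpLatch()
latch.evaluate(ewmul=True, rst=False, idle=False)    # MUL_OP set
latch.evaluate(ewmul=False, rst=False, idle=False)   # MUL_OP held
latch.evaluate(ewmul=False, rst=False, idle=True)    # MUL_OP cleared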
A first multiplier 910-1 may receive first to 16th bit data A<0:15> that is output from the first memory bank BK0 and first to 16th bit data B<0:15> that is output from the second memory bank BK1, and may output 16-bit arithmetic data Y<0:15> by multiplying the first to 16th bit data A<0:15> that is output from the first memory bank BK0 and the first to 16th bit data B<0:15> that is output from the second memory bank BK1. A second multiplier 910-2 may receive 17th to 32nd bit data A<16:31> that is output from the first memory bank BK0 and 17th to 32nd bit data B<16:31> that is output from the second memory bank BK1, and may output arithmetic data Y<16:31> by multiplying the 17th to 32nd bit data A<16:31> that is output from the first memory bank BK0 and the 17th to 32nd bit data B<16:31> that is output from the second memory bank BK1. A third multiplier 910-3 may receive 33rd to 48th bit data A<32:47> that is output from the first memory bank BK0 and 33rd to 48th bit data B<32:47> that is output from the second memory bank BK1, and may output arithmetic data Y<32:47> by multiplying the 33rd to 48th bit data A<32:47> that is output from the first memory bank BK0 and the 33rd to 48th bit data B<32:47> that is output from the second memory bank BK1. A fourth multiplier 910-4 may receive 49th to 64th bit data A<48:63> that is output from the first memory bank BK0 and 49th to 64th bit data B<48:63> that is output from the second memory bank BK1, and may output arithmetic data Y<48:63> by multiplying the 49th to 64th bit data A<48:63> that is output from the first memory bank BK0 and the 49th to 64th bit data B<48:63> that is output from the second memory bank BK1. A thirteenth multiplier 910-13 may receive 193rd to 208th bit data A<192:207> that is output from the first memory bank BK0 and 193rd to 208th bit data B<192:207> that is output from the second memory bank BK1, and may output arithmetic data Y<192:207> by multiplying the 193rd to 208th bit data A<192:207> that is output from the first memory bank BK0 and the 193rd to 208th bit data B<192:207> that is output from the second memory bank BK1. A fourteenth multiplier 910-14 may receive 209th to 224th bit data A<208:223> that is output from the first memory bank BK0 and 209th to 224th bit data B<208:223> that is output from the second memory bank BK1, and may output arithmetic data Y<208:223> by multiplying the 209th to 224th bit data A<208:223> that is output from the first memory bank BK0 and the 209th to 224th bit data B<208:223> that is output from the second memory bank BK1.
A fifteenth multiplier 910-15 may receive 225th to 240th bit data A<224:239> that is output from the first memory bank BK0 and 225th to 240th bit data B<224:239> that is output from the second memory bank BK1, and may output arithmetic data Y<224:239> by multiplying the 225th to 240th bit data A<224:239> that is output from the first memory bank BK0 and the 225th to 240th bit data B<224:239> that is output from the second memory bank BK1. A sixteenth multiplier 910-16 may receive 241st to 256th bit data A<240:255> that is output from the first memory bank BK0 and 241st to 256th bit data B<240:255> that is output from the second memory bank BK1, and may output arithmetic data Y<240:255> by multiplying the 241st to 256th bit data A<240:255> that is output from the first memory bank BK0 and the 241st to 256th bit data B<240:255> that is output from the second memory bank BK1. The MAC operator MAC_A may include 15 adders. A first adder 930-1 may receive data that is output from the first and second multipliers 910-1 and 910-2, and may add the data that is output from the first and second multipliers 910-1 and 910-2. A second adder 930-2 may receive data that is output from the third and fourth multipliers 910-3 and 910-4, and may add the data that is output from the third and fourth multipliers 910-3 and 910-4. A seventh adder 930-7 may receive data that is output from the thirteenth and fourteenth multipliers 910-13 and 910-14, and may add the data that is output from the thirteenth and fourteenth multipliers 910-13 and 910-14. An eighth adder 930-8 may receive data that is output from the fifteenth and sixteenth multipliers 910-15 and 910-16, and may add the data that is output from the fifteenth and sixteenth multipliers 910-15 and 910-16. The first to eighth adders 930-1, 930-2, . . . , 930-7 and 930-8 may be floating point adders. A ninth adder 930-9 may receive data that is output from the first and second adders 930-1 and 930-2, and may add the data that is output from the first and second adders 930-1 and 930-2. A twelfth adder 930-12 may receive data that is output from the seventh and eighth adders 930-7 and 930-8, and may add the data that is output from the seventh and eighth adders 930-7 and 930-8. A fifteenth adder 930-15 may receive data that is output from thirteenth and fourteenth adders (not illustrated), and may add the data that is output from the thirteenth and fourteenth adders. An accumulator 940 may receive and store data that is output from the fifteenth adder 930-15. The accumulator 940 may add data, newly output from the fifteenth adder 930-15, to a stored data value each time an update signal UPDATE is enabled, and may store added data again. The accumulator 940 may include one adder 941 and an updater 942. The adder 941 may receive data that is output from the fifteenth adder 930-15, and may store the received data. The adder 941 may output stored data to the updater 942. The adder 941 may receive data that is output from the updater 942, and may add the data that is output from the updater 942 and the data that is output from the fifteenth adder 930-15. The updater 942 may be implemented by a flip-flop FF.
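The multiplier array, adder tree and accumulator described above may be summarized with the following behavioral sketch; slicing the 256-bit words into Python integers is a modeling assumption.

# Behavioral sketch of the arithmetic circuit 900: sixteen multipliers each
# handle one 16-bit slice of the 256-bit words read from BK0 and BK1, a tree
# of adders (930-1 to 930-15) sums the products, and an accumulator adds each
# new tree result to its stored value when an update pulse arrives.
def slice16(word: int, index: int) -> int:
    """Return 16-bit slice number `index` (0..15) of a 256-bit word."""
    return (word >> (16 * index)) & 0xFFFF

def arithmetic_circuit_900(word_a: int, word_b: int, acc: int = 0) -> int:
    # Multiplier stage: products Y<0:15>, Y<16:31>, ..., Y<240:255>.
    products = [slice16(word_a, i) * slice16(word_b, i) for i in range(16)]

    # Adder tree: pairs of products are added, then pairs of sums, and so on,
    # until a single value remains.
    level = products
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]

    # Accumulator 940: the tree output is added to the stored value on each
    # update; returning the new value stands in for the flip-flop updater.
    return acc + level[0]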
An input terminal of the flip-flop FF may receive an output of the adder941, and a clock terminal of the flip-flop FF may receive the update signal UPDATE. An output terminal of the flip-flop FF may be coupled to the adder941, and the adder941may receive data that is output through the output terminal of the flip-flop FF. The input terminal of the flip-flop FF may be coupled to an output terminal OUT of the arithmetic circuit900. When the PIM device700A performs the element-wise multiplication operation, the arithmetic circuit900may perform only a multiplication calculation, and may output only a multiplication calculation result. The arithmetic circuit900may further include 16 demultiplexers. A first demultiplexer950-1may be coupled between the first multiplier910-1and the first adder930-1. An input terminal of the first demultiplexer950-1may receive arithmetic data Y<0:15> that is output from the first multiplier910-1, a first output terminal of the first demultiplexer950-1may be coupled to the first adder930-1, and a second output terminal of the first demultiplexer950-1may be coupled to the output terminal OUT of the arithmetic circuit900. The first demultiplexer950-1may receive the arithmetic operation signal MUL_OP as a control signal. When the arithmetic operation signal MUL_OP is enabled, the first demultiplexer950-1may output the arithmetic data Y<0:15>, output from the first multiplier910-1, to the output terminal OUT of the arithmetic circuit900. When the arithmetic operation signal MUL_OP is disabled, the first demultiplexer950-1may output the arithmetic data Y<0:15>, output from the first multiplier910-1, to the first adder930-1. A second demultiplexer950-2may be coupled between the second multiplier910-2and the second adder930-2. An input terminal of the second demultiplexer950-2may receive arithmetic data Y<16:31> that is output from the second multiplier910-2, a first output terminal of the second demultiplexer950-2may be coupled to the first adder930-1, and a second output terminal of the second demultiplexer950-2may be coupled to the output terminal OUT of the arithmetic circuit900. The second demultiplexer950-2may receive the arithmetic operation signal MUL_OP as a control signal. When the arithmetic operation signal MUL_OP is enabled, the second demultiplexer950-2may output the arithmetic data Y<16:31>, output from the second multiplier910-2, to the output terminal OUT of the arithmetic circuit900. When the arithmetic operation signal MUL_OP is disabled, the second demultiplexer950-2may output the arithmetic data Y<16:31>, output from the second multiplier910-2, to the first adder930-1. The third to sixteenth demultiplexers950-3,950-4, . . . ,950-13,950-14,950-15and950-16may be coupled between the third to sixteenth multipliers910-3,910-4, . . . ,910-13,910-14,910-15and910-16and the third to sixteenth adders930-3,930-4, . . . ,930-13,930-14,930-15and930-16, respectively. When the arithmetic operation signal MUL_OP is enabled, the third to sixteenth demultiplexers950-3,950-4, . . . ,950-13,950-14,950-15and950-16may output arithmetic data, output from the third to sixteenth multipliers910-3,910-4, . . . ,910-13,910-14,910-15and910-16, respectively, to the output terminal OUT of the arithmetic circuit900. When the arithmetic operation signal MUL_OP is disabled, the third to sixteenth demultiplexers950-3,950-4, . . . ,950-13,950-14,950-15and950-16may output arithmetic data, output from the third to sixteenth multipliers910-3,910-4, . . . 
, 910-13, 910-14, 910-15 and 910-16, to the third to sixteenth adders 930-3, 930-4, . . . , 930-13, 930-14, 930-15 and 930-16, respectively. Therefore, when the arithmetic operation signal MUL_OP is enabled, the first to sixteenth demultiplexers 950-1, 950-2, 950-3, 950-4, . . . , 950-13, 950-14, 950-15 and 950-16 may directly output arithmetic data, output from the first to sixteenth multipliers 910-1, 910-2, 910-3, 910-4, . . . , 910-13, 910-14, 910-15 and 910-16, to the output terminal OUT of the arithmetic circuit 900, so that the arithmetic circuit 900 is able to perform only a multiplication calculation. In an embodiment, the arithmetic circuit 900 might not include the demultiplexers, and the plurality of adders may be modified to receive the arithmetic operation signal MUL_OP. The plurality of adders may be modified to, when the arithmetic operation signal MUL_OP is enabled, activate bypass paths and output arithmetic data, output from the plurality of multipliers, to the output terminal OUT of the arithmetic circuit 900. FIGS. 36A and 36B are diagrams illustrating other parts among the components of the MAC operator MAC_A illustrated in FIG. 33. Referring to FIG. 36A, the MAC operator MAC_A may include a write control circuit 1000A. The write control circuit 1000A may generate control signals for writing arithmetic data, generated through an arithmetic operation of the MAC operator MAC_A, to the third memory bank BK2. The write control circuit 1000A may generate the delayed bank access control signal CASP_M<2> and the delayed column address signal CA_M<0:4> based on the arithmetic operation signal MUL_OP, the first data enable signal DEN<0>, and the bank column address signal CA<0:4>. The write control circuit 1000A may include an access control circuit 1010A and an address control circuit 1020A. The access control circuit 1010A may generate a plurality of delay signals DLs and the delayed bank access control signal CASP_M<2> based on the arithmetic operation signal MUL_OP and the first data enable signal DEN<0>. The access control circuit 1010A may generate a write start signal WTS based on the arithmetic operation signal MUL_OP and the first data enable signal DEN<0>, and may generate a delayed write start signal WTSD and the plurality of delay signals DLs by delaying the write start signal WTS. The access control circuit 1010A may generate the plurality of delay signals DLs by sequentially delaying the write start signal WTS by a predetermined time when the write start signal WTS is generated. The predetermined time may be a time during which the MAC operator MAC_A performs an arithmetic operation, and may correspond to a time during which the MAC operator MAC_A performs a multiplication calculation. Also, the predetermined time may correspond to a time from when the arithmetic circuit 900 of the MAC operator MAC_A receives data that is output from the first and second memory banks BK0 and BK1 until the arithmetic circuit 900 of the MAC operator MAC_A outputs arithmetic data to the third memory bank BK2. The access control circuit 1010A may generate the delayed bank access control signal CASP_M<2> each time the delayed write start signal WTSD is generated. The access control circuit 1010A may include a write start signal generation circuit 1011A, a first delay circuit (DELAY) 1012A and a delayed access signal generation circuit 1013A. The write start signal generation circuit 1011A may generate the write start signal WTS by receiving the first data enable signal DEN<0> and the arithmetic operation signal MUL_OP.
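Returning briefly to the demultiplexers 950-1 to 950-16 described above, their MUL_OP-controlled bypass can be sketched as follows; the function name is illustrative.

# Minimal sketch of the bypass: when MUL_OP is enabled the sixteen products
# are routed directly to the output terminal OUT, so the circuit performs
# only the multiplication; otherwise they feed the first adder stage.
def route_products(products, mul_op: bool):
    if mul_op:
        # Element-wise multiplication mode: products go straight to OUT.
        return products
    # Normal MAC mode: products are handed to the first adder stage.
    return [products[i] + products[i + 1] for i in range(0, len(products), 2)]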
The write start signal generation circuit1011A may enable the write start signal WTS each time the first data enable signal DEN<0> is enabled in a state in which the arithmetic operation signal MUL_OP is enabled. The write start signal generation circuit1011A may include an AND gate which outputs the write start signal WTS by AND-gating the first data enable signal DEN<0> and the arithmetic operation signal MUL_OP. In an embodiment, the write start signal generation circuit1011A may be modified to generate the write start signal WTS by receiving the second data enable signal DEN<1> instead of the first data enable signal DEN<0>. The first delay circuit1012A may generate the delayed write start signal WTSD by delaying the write start signal WTS by the predetermined time. The first delay circuit1012A may generate the plurality of delay signals DLs by delaying the write start signal WTS by the predetermined time. For example, the first delay circuit1012A may generate a first delay signal DL by delaying the write start signal WTS, input first, by the predetermined time, and may generate a second delay signal DL by delaying the first delay signal DL by the predetermined time. The delayed access signal generation circuit1013A may receive the delayed write start signal WTSD, and may generate the delayed bank access control signal CASP_M<2> based on the delayed write start signal WTSD. The delayed access signal generation circuit1013A may be implemented by a pulse generator. The address control circuit1020A may generate the delayed column address signal CA_M<0:4> by delaying the bank column address signal CA<0:4>. The address control circuit1020A may receive the bank column address signal CA<0:4> and the plurality of delay signals DLs, and may generate the delayed column address signal CA_M<0:4> based on the bank column address signal CA<0:4> and the plurality of delay signals DLs. The address control circuit1020A may sequentially store the bank column address signal CA<0:4> each time the bank column address signal CA<0:4> is input. The address control circuit1020A may sequentially output the stored bank column address signal CA<0:4> based on the plurality of delay signals DLs. The address control circuit1020A may generate the delayed column address signal CA_M<0:4> by delaying the bank column address signal CA<0:4> sequentially output. The address control circuit1020A may include a pipe circuit1021A and a second delay circuit (DELAY)1022A. The pipe circuit1021A may be a FIFO (first-in first-out) circuit, may receive the bank column address signal CA<0:4>, and may store the bank column address signal CA<0:4>. The pipe circuit1021A may sequentially store the bank column address signal CA<0:4> each time the bank column address signal CA<0:4> is input. The pipe circuit1021A may receive the plurality of delay signals DLs. The pipe circuit1021A may sequentially output the bank column address signal CA<0:4> sequentially stored, based on the plurality of delay signals DLs. For example, the pipe circuit1021A may output the bank column address signal CA<0:4> stored first, when the first delay signal DL is enabled, and may output the bank column address signal CA<0:4> stored second, when the second delay signal DL is enabled. The second delay circuit1022A may receive the output of the pipe circuit1021A, and may generate the delayed column address signal CA_M<0:4> by delaying the output of the pipe circuit1021A. 
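A behavioral sketch of the write control circuit 1000A described above follows; the deque-based FIFO standing in for the pipe circuit 1021A and the integer cycle delay are modeling assumptions.

# Sketch of the write control path: a write start pulse is produced whenever
# DEN<0> arrives while MUL_OP is enabled, the bank column address is pushed
# into a FIFO, and after the fixed calculation delay a delayed access pulse
# CASP_M<2> is issued together with the oldest stored address as CA_M<0:4>.
from collections import deque

class WriteControl1000A:
    def __init__(self, delay_cycles: int):
        self.delay = delay_cycles          # multiplication-calculation time
        self.addr_pipe = deque()           # pipe circuit 1021A (FIFO)
        self.pending = deque()             # write start pulses in flight

    def on_read(self, mul_op: bool, den0: bool, ca: int, now: int):
        if mul_op and den0:                # write start signal WTS
            self.addr_pipe.append(ca)      # store CA<0:4> in arrival order
            self.pending.append(now + self.delay)

    def tick(self, now: int):
        # When the delayed write start time is reached, emit CASP_M<2> and
        # the matching delayed column address CA_M<0:4>.
        if self.pending and self.pending[0] == now:
            self.pending.popleft()
            return True, self.addr_pipe.popleft()
        return False, None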
A delay time of the second delay circuit1022A may correspond to a time during which the delayed bank access control signal CASP_M<2> is generated from the delayed write start signal WTSD by the delayed access signal generation circuit1013A. The second delay circuit1022A may synchronize a point of time at which the delayed column address signal CA_M<0:4> is output and a point of time at which the delayed bank access control signal CASP_M<2> is output. Referring toFIG.36B, the MAC operator MAC_A may include a write control circuit1000B. The write control circuit1000B may include a write start signal generation circuit1011B, and may have the same configuration as the write control circuit1000A illustrated inFIG.36Aexcept the write start signal generation circuit1011B. Repeated descriptions for the same components will be omitted herein. The write start signal generation circuit1011B may generate a write start signal WTS based on the first data enable signal DEN<0>, the second data enable signal DEN<1> and the arithmetic operation signal MUL_OP. The write start signal generation circuit1011B may enable the write start signal WTS when the first and second data enable signals DEN<0> and DEN<1> are enabled in a state in which the arithmetic operation signal MUL_OP is enabled. Since the first and second memory banks BK0and BK1are simultaneously accessed when the PIM device700A performs an element-wise arithmetic operation, the first and second data enable signals DEN<0> and DEN <1> may be simultaneously enabled. FIG.37Ais a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0ofFIG.33. Referring toFIG.37A, the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may include a delay circuit1110A. The delay circuit1110A may receive the first bank access control signal CASP<0>, and may generate the first data enable signal DEN<0> by delaying the first bank access control signal CASP<0>. A delay time of the delay circuit1110A may correspond to an amount of time between the first bank access control signal CASP<0> being generated and data being output from the first memory bank BK0. FIG.37Bis a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1ofFIG.33. Referring toFIG.37B, the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may include a delay circuit1110B. The delay circuit1110B may receive the second bank access control signal CASP<1>, and may generate the second data enable signal DEN<1> by delaying the second bank access control signal CASP<1>. A delay time of the delay circuit1110B may correspond to an amount of time between the second bank access control signal CASP<1> being generated and data being output from the second memory bank BK1. FIG.38is a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the third memory bank BK2ofFIG.33. Referring toFIG.38, the Y-decoder/I/O circuit YDEC/IO of the third memory bank BK2may include a first selection circuit1210A and a second selection circuit1220A. The first selection circuit1210A may receive the arithmetic operation signal MUL_OP, the bank column address signal CA<0:4> and the delayed column address signal CA_M<0:4>, and may output an internal column address signal ICA<0:4>. 
The first selection circuit1210A may output one of the bank column address signal CA<0:4> and the delayed column address signal CA_M<0:4> as the internal column address signal ICA<0:4> based on the arithmetic operation signal MUL_OP. When the arithmetic operation signal MUL_OP is disabled to a logic low level, the first selection circuit1210A may output the bank column address signal CA<0:4> as the internal column address signal ICA<0:4>. When the arithmetic operation signal MUL_OP is enabled to a logic high level, the first selection circuit1210A may output the delayed column address signal CA_M<0:4> as the internal column address signal ICA<0:4>. The third memory bank BK2may be accessed based on the internal column address signal ICA<0:4>. The second selection circuit1220A may receive the arithmetic operation signal MUL_OP, the third bank access control signal CASP<2> and the delayed bank access control signal CASP_M<2>, and may output an internal bank access control signal ICASP<2>. The second selection circuit1220A may output one of the third bank access control signal CASP<2> and the delayed bank access control signal CASP_M<2> as the internal bank access control signal ICASP<2> based on the arithmetic operation signal MUL_OP. When the arithmetic operation signal MUL_OP is disabled to a logic low level, the second selection circuit1220A may output the third bank access control signal CASP<2> as the internal bank access control signal ICASP<2>. When the arithmetic operation signal MUL_OP is enabled to a logic high level, the second selection circuit1220A may output the delayed bank access control signal CASP_M<2> as the internal bank access control signal ICASP<2>. The third memory bank BK2may be accessed based on the internal bank access control signal ICASP<2>. FIG.39is a timing diagram illustrating the operation method of the PIM device700A in accordance with the embodiment of the present disclosure. The operation method of the PIM device700A in accordance with the embodiment of the present disclosure will be described below with reference toFIGS.33to39. The PIM device700A may store elements of first and second matrices in the first and second memory banks BK0and BK1, respectively, to perform an element-wise arithmetic operation. When all the elements of the first and second matrices are stored in the first and second memory banks BK0and BK1, the PIM device700A may generate the active signal ACT and the row address signal ADDR_R based on the external command signal E_CMD and the input address signal I_ADDR for performing an active operation. The external command signal E_CMD and the input address signal I_ADDR may be input to the PIM device700A in synchronization with a clock signal CLK. Rows with the same order among the plurality of rows of the first to third memory banks BK0, BK1and BK2may be enabled based on the active signal ACT and the row address signal ADDR_R. When a time corresponding to tRCD elapses after the first to third memory banks BK0, BK1and BK2are activated and the external command signal E_CMD instructing the active operation is received, a first external command signal E_CMD and a first input address signal I_ADDR for performing the element-wise arithmetic operation may be input to the PIM device700A. The tRCD may be defined by a time interval during which a column command signal is input after a row command signal is input. 
The external command signal E_CMD for performing the active operation may be included in the row command signal, and the external command signal E_CMD for performing the element-wise arithmetic operation may be included in the column command signal. The command decoder 750 may generate a first calculation signal EWMUL based on the first external command signal E_CMD, and the address latch 760 may output the first input address signal I_ADDR as a first column address signal ADDR_C<0:n>. The column control circuit 770A may enable the arithmetic operation signal MUL_OP based on the calculation signal EWMUL, may enable the first and second bank access control signals CASP<0:1>, and may provide at least a part of the first column address signal ADDR_C<0:n> as a first bank column address signal CA<0:4> (CA0). A column that is coupled to an enabled row of the first memory bank BK0 may be accessed based on the first bank access control signal CASP<0>, enabled for the first time, and the first bank column address signal CA0. For example, the bank column address signal CA<0:4> may include 5 bits, and 16 columns may be accessed based on the bank column address signal CA<0:4>. First to sixteenth columns may be accessed based on the first bank column address signal CA0. At the same time, a column that is coupled to an enabled row of the second memory bank BK1 may be accessed based on the second bank access control signal CASP<1>, enabled for the first time, and the first bank column address signal CA0. Accordingly, 16-bit data A0 corresponding to a first element of the first matrix may be read from the first memory bank BK0, and 16-bit data B0 corresponding to a first element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0 and BK1 may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A0 and B0 corresponding to the first elements, respectively, of the first and second matrices. The data A0 and B0 corresponding to the first elements of the first and second matrices may be provided to the MAC operator MAC_A through the first and second bank I/O lines 791 and 792. When a time corresponding to tCCD elapses, a second external command signal E_CMD and a second input address signal I_ADDR for performing the element-wise arithmetic operation may be received by the PIM device 700A. The tCCD may be defined by a time interval during which another column command signal is input after one column command signal is input. The command decoder 750 may generate a second calculation signal EWMUL based on the second external command signal E_CMD, and the address latch 760 may output the second input address signal I_ADDR as a second column address signal ADDR_C<0:n>. The column control circuit 770A may enable the first and second bank access control signals CASP<0:1> for a second time based on the second calculation signal EWMUL, and may provide at least a part of the second column address signal ADDR_C<0:n> as a second bank column address signal CA<0:4> (CA1). Columns that are coupled to the enabled rows of the first and second memory banks BK0 and BK1 may be accessed based on the first and second bank access control signals CASP<0:1> and the second bank column address signal CA1. For example, seventeenth to 32nd columns may be accessed based on the second bank column address signal CA1.
Accordingly, 16-bit data A1 corresponding to a second element of the first matrix may be read from the first memory bank BK0, and 16-bit data B1 corresponding to a second element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0 and BK1 may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A1 and B1 corresponding to the second elements of the first and second matrices. The data A1 and B1 corresponding to the second elements of the first and second matrices may be provided to the MAC operator MAC_A through the first and second bank I/O lines 791 and 792. When a time corresponding to tCCD elapses, a third external command signal E_CMD and a third input address signal I_ADDR for performing the element-wise arithmetic operation may be received by the PIM device 700A. The command decoder 750 may generate a third calculation signal EWMUL based on the third external command signal E_CMD, and the address latch 760 may output the third input address signal I_ADDR as a third column address signal ADDR_C<0:n>. The column control circuit 770A may enable the first and second bank access control signals CASP<0:1> for a third time based on the third calculation signal EWMUL, and may provide at least a part of the third column address signal ADDR_C<0:n> as a third bank column address signal CA<0:4> (CA2). Columns that are coupled to the enabled rows of the first and second memory banks BK0 and BK1 may be accessed based on the first and second bank access control signals CASP<0:1> and the third bank column address signal CA2. For example, 33rd to 48th columns may be accessed based on the third bank column address signal CA2. Accordingly, 16-bit data A2 corresponding to a third element of the first matrix may be read from the first memory bank BK0, and 16-bit data B2 corresponding to a third element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0 and BK1 may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A2 and B2 corresponding to the third elements of the first and second matrices. The data A2 and B2 corresponding to the third elements of the first and second matrices may be provided to the MAC operator MAC_A through the first and second bank I/O lines 791 and 792. When a time corresponding to tCCD elapses, a fourth external command signal E_CMD and a fourth input address signal I_ADDR for performing the element-wise arithmetic operation may be received by the PIM device 700A. The command decoder 750 may generate a fourth calculation signal EWMUL based on the fourth external command signal E_CMD, and the address latch 760 may output the fourth input address signal I_ADDR as a fourth column address signal ADDR_C<0:n>. The column control circuit 770A may enable the first and second bank access control signals CASP<0:1> for a fourth time based on the fourth calculation signal EWMUL, and may provide at least a part of the fourth column address signal ADDR_C<0:n> as a fourth bank column address signal CA<0:4> (CA3). Columns that are coupled to the enabled rows of the first and second memory banks BK0 and BK1 may be accessed based on the first and second bank access control signals CASP<0:1> and the fourth bank column address signal CA3. For example, 49th to 64th columns may be accessed based on the fourth bank column address signal CA3.
Accordingly, 16-bit data A3corresponding to a fourth element of the first matrix may be read from the first memory bank BK0, and 16-bit data B3corresponding to a fourth element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0and BK1may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A3and B3corresponding to the fourth elements of the first and second matrices. The data A3and B3corresponding to the fourth elements of the first and second matrices may be provided to the MAC operator MAC_A through the first and second bank I/O lines791and792. The MAC operator MAC_A may receive data, read from the first and second memory banks BK0and BK1, through the first and second bank I/O lines791and792, and may perform a calculation on the received data. The MAC operator MAC_A may receive the 16-bit data A0and B0, corresponding to the first elements of the first and second matrices, from the first and second memory banks BK0and BK1, respectively. The arithmetic circuit900of the MAC operator MAC_A may generate a first arithmetic data Y0by performing only a multiplication calculation on the 16-bit data A0and B0, corresponding to the first elements of the first and second matrices, based on the arithmetic operation signal MUL_OP, and may output the first arithmetic data Y0to the third memory bank BK2through the third bank I/O line793. When the predetermined time elapses after the first and second data enable signals DEN<0:1> are first received, the MAC operator MAC_A may enable the delayed bank access control signal CASP_M<2>. The MAC operator MAC_A may sequentially store the first to fourth bank column address signals CA0, CA1, CA2and CA3, and may output the first bank column address signal CA0as a first delayed column address signal CA_M<0:4> (CA_M0) when a first delayed bank access control signal CASP_M<2> is enabled. The third memory bank BK2may receive the first delayed bank access control signal CASP_M<2> and the first delayed column address signal CA_M0. A column that is coupled to an enabled row of the third memory bank BK2may be accessed based on the first delayed bank access control signal CASP_M<2> and the first delayed column address signal CA_M0. First to sixteenth columns may be accessed based on the first delayed column address signal CA_M0, and the first arithmetic data Y0as a first element of the third matrix may be written into the third memory bank BK2. The MAC operator MAC_A may receive the 16-bit data A1and B1, corresponding to the second elements of the first and second matrices, from the first and second memory banks BK0and BK1, respectively. The arithmetic circuit900of the MAC operator MAC_A may generate second arithmetic data Y1by performing only a multiplication calculation on the 16-bit data A1and B1, corresponding to the second elements of the first and second matrices, based on the arithmetic operation signal MUL_OP, and may output the second arithmetic data Y1to the third memory bank BK2through the third bank I/O line793. When the predetermined time elapses after the first and second data enable signals DEN<0:1> are second received, the MAC operator MAC_A may second enable the delayed bank access control signal CASP_M<2>. The MAC operator MAC_A may output the second bank column address signal CA1as a second delayed column address signal CA_M<0:4> (CA_M1) when the second delayed bank access control signal CASP_M<2> is enabled. 
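The command-by-command sequence of FIG. 39 described above can be illustrated with a small timing-style sketch; the tCCD interval and the calculation latency values below are placeholders chosen only for illustration.

# Sketch of the sequence: successive element-wise commands arrive one tCCD
# apart, each reads one element pair (A_i, B_i) at bank column address CA_i,
# and the product Y_i is written to the third memory bank after the fixed
# calculation delay using the delayed address CA_M_i.
T_CCD = 4            # assumed interval between column commands
CALC_DELAY = 6       # assumed read-to-write latency through MAC_A

events = []
for i, (a, b) in enumerate(zip([1, 2, 3, 4], [2, 3, 4, 5])):
    read_time = i * T_CCD
    events.append((read_time, f"read A{i}={a}, B{i}={b} at CA{i}"))
    events.append((read_time + CALC_DELAY, f"write Y{i}={a * b} at CA_M{i}"))

for time, what in sorted(events):
    print(time, what)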
The third memory bank BK2may receive the second delayed bank access control signal CASP_M<2> and the second delayed column address signal CA_M1. A column that is coupled to the enabled row of the third memory bank BK2may be accessed based on the second delayed bank access control signal CASP_M<2> and the second delayed column address signal CA_M1. Seventeenth to 32nd columns may be accessed based on the second delayed column address signal CA_M1, and the second arithmetic data Y1as a second element of the third matrix may be written into the third memory bank BK2. When data that corresponds to all elements of the first and second matrices are read from the first and second memory banks BK0and BK1and all arithmetic data generated by the MAC operator MAC_A are written into the third memory bank BK2, the element-wise arithmetic operation of the PIM device700A may be ended. FIG.40is a diagram illustrating a configuration of a PIM device700B in accordance with an embodiment of the present disclosure. Referring toFIG.40, the PIM device700B may include components for performing an element-wise addition operation among element-wise arithmetic operations. The PIM device700B may include the same or similar components as or to those of the PIM device700A illustrated inFIG.33, and repeated descriptions for the same components will be omitted herein. The PIM device700B may include an MAC unit. The MAC unit may include a plurality of memory banks and an MAC operator MAC_B. The MAC unit may include a first memory bank BK0, a second memory bank BK1, a third memory bank BK2and a fourth memory bank BK3. Each of the first to fourth memory banks BK0, BK1, BK2and BK3may include a Y-decoder/I/O circuit YDEC/IO. The first and third memory banks BK0and BK2may share one X-decoder XDEC, and the second and fourth memory banks BK1and BK3may share one X-decoder XDEC. Each of the first to fourth memory banks BK0, BK1, BK2and BK3may be accessed through the X-decoder XDEC and the Y-decoder/I/O circuit YDEC/IO. The first memory bank BK0may be accessed based on a first bank access control signal CASP<0> and a bank column address signal CA<0:4>. The first bank access control signal CASP<0> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0. The second memory bank BK1may be accessed based on a second bank access control signal CASP<1> and the bank column address signal CA<0:4>. The second bank access control signal CASP<1> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1. The third memory bank BK2may be accessed based on a third bank access control signal CASP<2> and the bank column address signal CA<0:4>. The third bank access control signal CASP<2> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the third memory bank BK2. The fourth memory bank BK3may be accessed based on a fourth bank access control signal CASP<3> and the bank column address signal CA<0:4>. The fourth bank access control signal CASP<3> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the fourth memory bank BK3. In the MAC unit, it may be prescribed that data that corresponds to elements of first and second matrices are stored in the first and second memory banks BK0and BK1, respectively.
In the MAC unit, it may be prescribed that arithmetic data generated through an element-wise arithmetic operation on the first and second matrices (i.e., data that corresponds to elements of a third matrix) are stored in the third memory bank BK2. The MAC operator MAC_B may be coupled to at least the first to third memory banks BK0, BK1and BK2. The MAC operator MAC_B may be coupled even to the fourth memory bank BK3. The MAC operator MAC_B may be coupled to the first to third memory banks BK0, BK1and BK2through bank I/O lines791,792and793. The MAC operator MAC_B may be coupled to the first memory bank BK0through a first bank I/O line791. The MAC operator MAC_B may be coupled to the second memory bank BK1through a second bank I/O line792. The MAC operator MAC_B may be coupled to the third memory bank BK2through a third bank I/O line793. The MAC operator MAC_B may receive data, output from the first and second memory banks BK0and BK1, through the first and second bank I/O lines791and792, and may output arithmetic data, generated by an arithmetic operation, to the third memory bank BK2through the third bank I/O line793. The MAC operator MAC_B may perform an arithmetic operation on data that is output from the first and second memory banks BK0and BK1. In general, the MAC operator MAC_B may perform both multiplication and addition calculations. In order to allow the PIM device700B to perform an element-wise addition operation, the MAC operator MAC_B may perform only an addition calculation on data that is output from the first and second memory banks BK0and BK1. The PIM device700B may include a column control circuit770B which controls the MAC unit to perform an element-wise arithmetic operation. The column control circuit770B may generate various control signals so that the MAC unit of the PIM device700B may perform an element-wise arithmetic operation. The column control circuit770B may receive a calculation signal EWADD and a column address signal ADDR_C<0:n> (n is an arbitrary integer), and may generate an arithmetic operation signal ADD_OP, the bank access control signals CASP<0:3> and the bank column address signal CA<0:4> based on the calculation signal EWADD and the column address signal ADDR_C<0:n>. The column control circuit770B may enable the first bank access control signal CASP<0> and the second bank access control signal CASP<1> among the bank access control signals CASP<0:3> based on the calculation signal EWADD. When the calculation signal EWADD is enabled, the column control circuit770B may enable the arithmetic operation signal ADD_OP, and may enable the first and second bank access control signals CASP<0> and CASP<1> together. The column control circuit770B may output at least a part of the column address signal ADDR_C<0:n> as the bank column address signal CA<0:4>. For example, the bank column address signal CA<0:4> may be a 5-bit signal. The MAC operator MAC_B may receive the arithmetic operation signal ADD_OP from the column control circuit770B. The MAC operator MAC_B may generate a delayed bank access control signal CASP_A<2> based on the arithmetic operation signal ADD_OP and at least one of the first and second bank access control signals CASP<0> and CASP<1>. The MAC operator MAC_B may generate a delayed column address signal CA_A<0:4> based on the bank column address signal CA<0:4>. The MAC operator MAC_B may provide the delayed bank access control signal CASP_A<2> and the delayed column address signal CA_A<0:4> to the third memory bank BK2. 
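For purposes of illustration only, the signal generation of the column control circuit 770B for one element-wise addition command may be sketched in Python as follows. The choice of the low five bits of the column address signal as the bank column address is an assumption made solely for this sketch; the description above states only that at least a part of ADDR_C<0:n> is provided as CA<0:4>.

# Hypothetical sketch of the column control circuit behavior for an element-wise
# addition command: when EWADD is enabled, ADD_OP is enabled, CASP<0> and CASP<1>
# are enabled together, and part of ADDR_C<0:n> is passed on as CA<0:4>.
def column_control(ewadd: bool, addr_c: int) -> dict:
    ca = addr_c & 0b11111 if ewadd else None   # assumed: low 5 bits as CA<0:4>
    return {
        "ADD_OP": ewadd,
        "CASP<0>": ewadd,
        "CASP<1>": ewadd,
        "CA<0:4>": ca,
    }

print(column_control(ewadd=True, addr_c=0b0000010))   # ADD_OP set, CA<0:4> = 2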
The third memory bank BK2may be accessed based on the delayed bank access control signal CASP_A<2> and the delayed column address signal CA_A<0:4>. When the PIM device700B performs an element-wise addition operation, the third memory bank BK2may be accessed based on the delayed bank access control signal CASP_A<2> and the delayed column address signal CA_A<0:4> instead of the third bank access control signal CASP<2> and the bank column address signal CA<0:4>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may generate a first data enable signal DEN<0> based on the first bank access control signal CASP<0>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may generate the first data enable signal DEN<0> by delaying the first bank access control signal CASP<0>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may provide the first data enable signal DEN<0> to the MAC operator MAC_B. The Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may generate a second data enable signal DEN<1> based on the second bank access control signal CASP<1>. The Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may generate the second data enable signal DEN<1> by delaying the second bank access control signal CASP<1>. The Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may provide the second data enable signal DEN<1> to the MAC operator MAC_B. The MAC operator MAC_B may further receive the first and second data enable signals DEN<0> and DEN<1>. The MAC operator MAC_B may generate the delayed bank access control signal CASP_A<2> based on the arithmetic operation signal ADD_OP and at least one of the first and second data enable signals DEN<0> and DEN<1>. The MAC operator MAC_B may generate the delayed column address signal CA_A<0:4> based on the arithmetic operation signal ADD_OP, at least one of the first and second data enable signals DEN<0> and DEN<1> and the bank column address signal CA<0:4>. The PIM device700B may further include a receiving driver (RX)730, a data I/O circuit (DQ)740, a command decoder (CMD DECODER)750, an address latch760, and a serializer/deserializer (SER/DES)780. When the external command signal E_CMD has information for performing an element-wise arithmetic operation, the command decoder750may generate the calculation signal EWADD by decoding the external command signal E_CMD. For example, when the external command signal E_CMD has information for performing an element-wise addition operation, the command decoder750may generate the calculation signal EWADD by decoding the external command signal E_CMD. FIG.41is a diagram illustrating at least a part of components of the column control circuit770B illustrated inFIG.40. Referring toFIG.41, the column control circuit770B may include an arithmetic operation signal generation circuit810B and an access signal generation circuit820B. The arithmetic operation signal generation circuit810B may receive the calculation signal EWADD, and may generate the arithmetic operation signal ADD_OP based on the calculation signal EWADD. The arithmetic operation signal generation circuit810B may further receive a reset signal RST and an idle signal IDLE. The arithmetic operation signal generation circuit810B may generate the arithmetic operation signal ADD_OP based on the calculation signal EWADD, the reset signal RST and the idle signal IDLE. 
The arithmetic operation signal generation circuit810B may enable the arithmetic operation signal ADD_OP when the calculation signal EWADD is enabled in a state in which the reset signal RST and the idle signal IDLE are disabled. The arithmetic operation signal generation circuit810B may disable the arithmetic operation signal ADD_OP when one of the reset signal RST and the idle signal IDLE is enabled in a state in which the arithmetic operation signal ADD_OP is enabled. The arithmetic operation signal generation circuit810B may be configured by a NOR type RS latch. The arithmetic operation signal generation circuit810B may include a first NOR gate811B and a second NOR gate812B. A first input terminal of the first NOR gate811B may receive the reset signal RST, a second input terminal of the first NOR gate811B may receive the idle signal IDLE, and a third input terminal of the first NOR gate811B may receive a signal output from an output terminal of the second NOR gate812B. The arithmetic operation signal ADD_OP may be output through an output terminal of the first NOR gate811B. A first input terminal of the second NOR gate812B may receive the arithmetic operation signal ADD_OP, and a second input terminal of the second NOR gate812B may receive the calculation signal EWADD. The output terminal of the second NOR gate812B may be coupled to the third input terminal of the first NOR gate811B. When the calculation signal EWADD is enabled to a logic high level in a state in which the reset signal RST and the idle signal IDLE are disabled to logic low levels, a signal with a logic low level may be input to the third input terminal of the first NOR gate811B, and thus, the arithmetic operation signal ADD_OP may be enabled to a logic high level. In a state in which the arithmetic operation signal ADD_OP is enabled to a logic high level, when at least one of the reset signal RST and the idle signal IDLE is enabled to a logic high level, the arithmetic operation signal ADD_OP may be disabled to a logic low level. The access signal generation circuit820B may receive the calculation signal EWADD, and may generate the first and second bank access control signals CASP<0> and CASP<1> based on the calculation signal EWADD. When the calculation signal EWADD is enabled, the access signal generation circuit820B may enable both the first and second bank access control signals CASP<0> and CASP<1>. By simultaneously enabling the first and second bank access control signals CASP<0> and CASP<1>, the access signal generation circuit820B may cause the first and second memory banks BK0and BK1to be simultaneously accessed. FIGS.42A and42Bare diagrams illustrating parts among components of the MAC operator MAC_B configured inFIG.40. Referring toFIG.42A, the MAC operator MAC_B may include a write control circuit1000C. The write control circuit1000C may generate control signals for writing arithmetic data, generated through an arithmetic operation of the MAC operator MAC_B, to the third memory bank BK2. The write control circuit1000C may generate the delayed bank access control signal CASP_A<2> and the delayed column address signal CA_A<0:4> based on the arithmetic operation signal ADD_OP, the first data enable signal DEN<0> and the bank column address signal CA<0:4>. The write control circuit1000C may include an access control circuit1010C and an address control circuit1020C. 
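For purposes of illustration only, the latching behavior of the arithmetic operation signal generation circuit 810B and the simultaneous enabling performed by the access signal generation circuit 820B may be modeled by the following behavioral sketch in Python. The class and function names are illustrative, and the sketch abstracts away gate-level details such as the NOR gates themselves.

# Hypothetical behavioral model of the arithmetic operation signal generation
# circuit (a NOR-type RS latch): ADD_OP is set by EWADD and cleared by RST or
# IDLE. The access signal generation is modeled by enabling CASP<0> and
# CASP<1> together whenever EWADD is enabled.
class ArithmeticOperationSignal:
    def __init__(self) -> None:
        self.add_op = False                    # latched arithmetic operation signal

    def update(self, ewadd: bool, rst: bool, idle: bool) -> bool:
        if rst or idle:                        # reset path of the latch
            self.add_op = False
        elif ewadd:                            # set path of the latch
            self.add_op = True
        return self.add_op

def access_signals(ewadd: bool) -> dict:
    """CASP<0> and CASP<1> are enabled together when EWADD is enabled."""
    return {"CASP<0>": ewadd, "CASP<1>": ewadd}

latch = ArithmeticOperationSignal()
print(latch.update(ewadd=True, rst=False, idle=False))   # True: ADD_OP enabled
print(latch.update(ewadd=False, rst=False, idle=False))  # True: remains latched
print(latch.update(ewadd=False, rst=False, idle=True))   # False: cleared by IDLE
print(access_signals(True))                              # both CASP signals enabled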
The access control circuit1010C may generate the delayed bank access control signal CASP_A<2> based on the arithmetic operation signal ADD_OP and the first data enable signal DEN<0>. The access control circuit1010C may generate a write start signal WTS based on the arithmetic operation signal ADD_OP and the first data enable signal DEN<0>, and may generate a delayed write start signal WTSD by delaying the write start signal WTS by a predetermined time. The predetermined time may be a time during which the MAC operator MAC_B performs an arithmetic operation, and may correspond to a time from after the MAC operator MAC_B receives data that is output from the first and second memory banks BK0and BK1to till the MAC operator MAC_B outputs arithmetic data to the third memory bank BK2. The access control circuit1010C may generate the delayed bank access control signal CASP_A<2> each time the delayed write start signal WTSD is generated. The access control circuit1010C may include a write start signal generation circuit1011C, a first delay circuit (DELAY)1012C and a delayed access signal generation circuit1013C. The write start signal generation circuit1011C may generate the write start signal WTS by receiving the first data enable signal DEN<0> and the arithmetic operation signal ADD_OP. The write start signal generation circuit1011C may enable the write start signal WTS each time the first data enable signal DEN<0> is enabled in a state in which the arithmetic operation signal ADD_OP is enabled. The write start signal generation circuit1011C may include an AND gate which outputs the write start signal WTS by AND-gating the first data enable signal DEN<0> and the arithmetic operation signal ADD_OP. In an embodiment, the write start signal generation circuit1011C may be modified to generate the write start signal WTS by receiving the second data enable signal DEN<1> instead of the first data enable signal DEN<0>. The first delay circuit1012C may generate the delayed write start signal WTSD by delaying the write start signal WTS by the predetermined time. The delayed access signal generation circuit1013C may receive the delayed write start signal WTSD, and may generate the delayed bank access control signal CASP_A<2> based on the delayed write start signal WTSD. The delayed access signal generation circuit1013C may be implemented by a pulse generator. The address control circuit1020C may generate the delayed column address signal CA_A<0:4> by delaying the bank column address signal CA<0:4>. The address control circuit1020C may receive the arithmetic operation signal ADD_OP, the bank column address signal CA<0:4>, the first bank access control signal CASP<0> and the delayed bank access control signal CASP_A<2>. The address control circuit1020C may generate the delayed column address signal CA_A<0:4> based on the arithmetic operation signal ADD_OP, the bank column address signal CA<0:4>, the first bank access control signal CASP<0> and the delayed bank access control signal CASP_A<2>. The address control circuit1020C may sequentially store the bank column address signal CA<0:4> each time the first bank access control signal CASP<0> is enabled in a state in which the arithmetic operation signal ADD_OP is enabled. The address control circuit1020C may sequentially output the sequentially stored bank column address signal CA<0:4> as the delayed column address signal CA_A<0:4> each time the delayed bank access control signal CASP_A<2> is enabled. 
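For purposes of illustration only, the behavior of the access control circuit 1010C may be modeled by the following behavioral sketch in Python, in which the predetermined delay is expressed in abstract clock cycles. The class name and the cycle-based timing are illustrative assumptions, not part of the disclosed circuitry.

# Hypothetical sketch of the access control circuit: the write start signal WTS
# is the AND of DEN<0> and ADD_OP, the delayed write start signal WTSD is that
# signal shifted by a fixed compute delay, and CASP_A<2> pulses on each WTSD.
from collections import deque

class AccessControl:
    def __init__(self, compute_delay: int) -> None:
        self.compute_delay = compute_delay     # models the predetermined time
        self.delay_line = deque()              # pending delayed write starts

    def clock(self, time: int, add_op: bool, den0: bool) -> bool:
        wts = add_op and den0                  # write start signal WTS
        if wts:
            self.delay_line.append(time + self.compute_delay)
        # CASP_A<2> pulses when a delayed write start (WTSD) reaches this cycle.
        if self.delay_line and self.delay_line[0] == time:
            self.delay_line.popleft()
            return True
        return False

ac = AccessControl(compute_delay=4)
pulses = [ac.clock(t, add_op=True, den0=(t in (0, 2))) for t in range(8)]
print(pulses)   # CASP_A<2> pulses at t=4 and t=6, four cycles after each DEN<0>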
By sequentially outputting the stored bank column address signal CA<0:4> as the delayed column address signal CA_A<0:4> each time the delayed bank access control signal CASP_A<2> is enabled, the address control circuit1020C may synchronize a point of time at which the delayed bank access control signal CASP_A<2> is output and a point of time at which the delayed column address signal CA_A<0:4> is output. When the delayed bank access control signal CASP_A<2> is first enabled, the address control circuit1020C may provide the bank column address signal CA<0:4> received when the first bank access control signal CASP<0> is first enabled, as the delayed column address signal CA_A<0:4>. When the delayed bank access control signal CASP_A<2> is second enabled, the address control circuit1020C may provide the bank column address signal CA<0:4> received when the first bank access control signal CASP<0> is second enabled, as the delayed column address signal CA_A<0:4>. Accordingly, after the predetermined time elapses, a column of the third memory bank BK2with the same order as columns accessed in the first and second memory banks BK0and BK1may be accessed. In an embodiment, the address control circuit1020C may be modified to receive the second bank access control signal CASP<1> instead of the first bank access control signal CASP<0>. The address control circuit1020C may include a pipe circuit1021C. The pipe circuit1021C may generate the delayed column address signal CA_A<0:4> based on the arithmetic operation signal ADD_OP, the first bank access control signal CASP<0>, the delayed bank access control signal CASP_A<2> and the bank column address signal CA<0:4>. The pipe circuit1021C may generate a plurality of input strobe signals based on the arithmetic operation signal ADD_OP and the first bank access control signal CASP<0>. The pipe circuit1021C may generate a plurality of output strobe signals based on the delayed bank access control signal CASP_A<2>. The pipe circuit1021C may sequentially store the bank column address signal CA<0:4>, input to the pipe circuit1021C, based on the plurality of input strobe signals. The pipe circuit1021C may sequentially output the bank column address signal CA<0:4> sequentially stored in the pipe circuit1021C, as the delayed column address signal CA_A<0:4>, based on the plurality of output strobe signals. Referring toFIG.42B, the MAC operator MAC_B may include a write control circuit1000D. The write control circuit1000D may include a write start signal generation circuit1011D, and may have the same configuration as the write control circuit1000C illustrated inFIG.42Aexcept the write start signal generation circuit1011D. Repeated descriptions for the same components will be omitted herein. The write start signal generation circuit1011D may generate a write start signal WTS based on the first data enable signal DEN<0>, the second data enable signal DEN<1> and the arithmetic operation signal ADD_OP. The write start signal generation circuit1011D may enable the write start signal WTS when the first and second data enable signals DEN<0> and DEN<1> are enabled in a state in which the arithmetic operation signal ADD_OP is enabled. Since the first and second memory banks BK0and BK1are simultaneously accessed when the PIM device700B performs an element-wise arithmetic operation, the first and second data enable signals DEN<0> and DEN <1> may be simultaneously enabled. FIG.43is a diagram illustrating a configuration of the pipe circuit1021C illustrated inFIGS.42A and42B. 
Referring toFIG.43, the pipe circuit1021C may include an input strobe signal generation circuit1310, an output strobe signal generation circuit1320, and a plurality of pipes (PIPE)1331,1332,1333and1334. The input strobe signal generation circuit1310may generate a plurality of input strobe signals PIN<0:3> by receiving the first bank access control signal CASP<0> and the arithmetic operation signal ADD_OP. The number of the plurality of input strobe signals PIN<0:3> may be changed depending on a depth of the pipe circuit1021C. InFIG.43, the depth of the pipe circuit1021C is illustrated as 4, and each of the number of the plurality of input strobe signals PIN<0:3> and the number of a plurality of output strobe signals POUT<0:3> may be four. When the arithmetic operation signal ADD_OP is enabled, the input strobe signal generation circuit1310may generate first to fourth input strobe signals PIN<0:3> each time the first bank access control signal CASP<0> is enabled. For example, in a state in which the arithmetic operation signal ADD_OP is enabled, the input strobe signal generation circuit1310may generate the first input strobe signal PIN<0> when the first bank access control signal CASP<0> is first enabled, and may generate the second input strobe signal PIN<1> when the first bank access control signal CASP<0> is second enabled. In the same manner, the input strobe signal generation circuit1310may generate the third and fourth input strobe signals PIN<2> and PIN<3> when the first bank access control signal CASP<0> is third and fourth enabled. The input strobe signal generation circuit1310may generate the first input strobe signal PIN<0> again when the first bank access control signal CASP<0> is fifth enabled. When the first bank access control signal CASP<0> is enabled a predetermined number of times, the input strobe signal generation circuit1310might not generate the first to fourth input strobe signals PIN<0:3> any more. For example, when the first bank access control signal CASP<0> is counted a predetermined number of times, the input strobe signal generation circuit1310may block the first to fourth input strobe signals PIN<0:3> from being generated. The input strobe signal generation circuit1310may further receive the reset signal RST, and may be initialized based on the reset signal RST. The output strobe signal generation circuit1320may generate the plurality of output strobe signals POUT<0:3> based on the delayed bank access control signal CASP_A<2>. The output strobe signal generation circuit1320may generate the first to fourth output strobe signals POUT<0:3> each time the delayed bank access control signal CASP_A<2> is enabled. For example, the output strobe signal generation circuit1320may generate the first output strobe signal POUT<0> when the delayed bank access control signal CASP_A<2> is first enabled, and may generate the second output strobe signal POUT<1> when the delayed bank access control signal CASP_A<2> is second enabled. In the same manner, the output strobe signal generation circuit1320may generate the third and fourth output strobe signals POUT<2> and POUT<3> when the delayed bank access control signal CASP_A<2> is third and fourth enabled. The output strobe signal generation circuit1320may generate the first output strobe signal POUT<0> again when the delayed bank access control signal CASP_A<2> is fifth enabled.
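For purposes of illustration only, the round-robin strobe generation described above, combined with the store-and-replay behavior of the pipes of the address control circuit, may be modeled by the following behavioral sketch in Python. A depth of four is used, as illustrated in FIG.43, and the blocking of strobes after a predetermined count is omitted for brevity; the class and method names are illustrative.

# Hypothetical model of the pipe circuit behavior: input strobes PIN<0:3> are
# generated round-robin on CASP<0> (while ADD_OP is enabled) and output strobes
# POUT<0:3> are generated round-robin on CASP_A<2>; each of the four pipes
# stores the bank column address on its input strobe and drives it out on its
# matching output strobe.
class PipeCircuit:
    DEPTH = 4                                  # depth illustrated in FIG. 43

    def __init__(self) -> None:
        self.pipes = [None] * self.DEPTH       # four pipes holding CA<0:4>
        self.in_ptr = 0                        # next input strobe PIN<i>
        self.out_ptr = 0                       # next output strobe POUT<i>

    def input_strobe(self, add_op: bool, casp0: bool, ca: int) -> None:
        if add_op and casp0:
            self.pipes[self.in_ptr] = ca       # PIN<in_ptr> stores CA
            self.in_ptr = (self.in_ptr + 1) % self.DEPTH

    def output_strobe(self, casp_a2: bool):
        if not casp_a2:
            return None
        ca_a = self.pipes[self.out_ptr]        # POUT<out_ptr> drives CA_A
        self.out_ptr = (self.out_ptr + 1) % self.DEPTH
        return ca_a

pipe = PipeCircuit()
for ca in (0, 1, 2, 3):                        # CA0..CA3 captured on CASP<0>
    pipe.input_strobe(add_op=True, casp0=True, ca=ca)
print([pipe.output_strobe(casp_a2=True) for _ in range(4)])   # [0, 1, 2, 3]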
When the delayed bank access control signal CASP_A<2> is enabled a predetermined number of times, the output strobe signal generation circuit1320might not generate the first to fourth output strobe signals POUT<0:3> any more. For example, when the delayed bank access control signal CASP_A<2> is counted a predetermined number of times, the output strobe signal generation circuit1320may block the first to fourth output strobe signals POUT<0:3> from being generated. The output strobe signal generation circuit1320may further receive the reset signal RST, and may be initialized based on the reset signal RST. The plurality of pipes1331,1332,1333and1334may include a first pipe1331, a second pipe1332, a third pipe1333and a fourth pipe1334. The first pipe1331, the second pipe1332, the third pipe1333and the fourth pipe1334may receive in common the bank column address signal CA<0:4>, and may output in common the delayed column address signal CA_A<0:4>. The first pipe1331may receive the first input strobe signal PIN<0> and the first output strobe signal POUT<0>. The first pipe1331may store the bank column address signal CA<0:4> based on the first input strobe signal PIN<0>, and may output the bank column address signal CA<0:4>, stored therein, as the delayed column address signal CA_A<0:4> based on the first output strobe signal POUT<0>. The second pipe1332may receive the second input strobe signal PIN<1> and the second output strobe signal POUT<1>. The second pipe1332may store the bank column address signal CA<0:4> based on the second input strobe signal PIN<1>, and may output the bank column address signal CA<0:4>, stored therein, as the delayed column address signal CA_A<0:4> based on the second output strobe signal POUT<1>. The third pipe1333may receive the third input strobe signal PIN<2> and the third output strobe signal POUT<2>. The third pipe1333may store the bank column address signal CA<0:4> based on the third input strobe signal PIN<2>, and may output the bank column address signal CA<0:4>, stored therein, as the delayed column address signal CA_A<0:4> based on the third output strobe signal POUT<2>. The fourth pipe1334may receive the fourth input strobe signal PIN<3> and the fourth output strobe signal POUT<3>. The fourth pipe1334may store the bank column address signal CA<0:4> based on the fourth input strobe signal PIN<3>, and may output the bank column address signal CA<0:4>, stored therein, as the delayed column address signal CA_A<0:4> based on the fourth output strobe signal POUT<3>. FIG.44Ais a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0ofFIG.40. Referring toFIG.44A, the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may include a delay circuit1110C. The delay circuit1110C may receive the first bank access control signal CASP<0>, and may generate the first data enable signal DEN<0> by delaying the first bank access control signal CASP<0>. A delay time of the delay circuit1110C may correspond to an amount of time between the first bank access control signal CASP<0> being generated and data being output from the first memory bank BK0. FIG.44Bis a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1ofFIG.40. Referring toFIG.44B, the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may include a delay circuit1110D. 
The delay circuit1110D may receive the second bank access control signal CASP<1>, and may generate the second data enable signal DEN<1> by delaying the second bank access control signal CASP<1>. A delay time of the delay circuit1110D may correspond to an amount of time between the second bank access control signal CASP<1> being generated and data being output from the second memory bank BK1. FIG.45is a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the third memory bank BK2ofFIG.40. Referring toFIG.45, the Y-decoder/I/O circuit YDEC/IO of the third memory bank BK2may include a first selection circuit1210B and a second selection circuit1220B. The first selection circuit1210B may receive the arithmetic operation signal ADD_OP, the bank column address signal CA<0:4> and the delayed column address signal CA_A<0:4>, and may output an internal column address signal ICA<0:4>. The first selection circuit1210B may output one of the bank column address signal CA<0:4> and the delayed column address signal CA_A<0:4> as the internal column address signal ICA<0:4> based on the arithmetic operation signal ADD_OP. When the arithmetic operation signal ADD_OP is disabled to a logic low level, the first selection circuit1210B may output the bank column address signal CA<0:4> as the internal column address signal ICA<0:4>. When the arithmetic operation signal ADD_OP is enabled to a logic high level, the first selection circuit1210B may output the delayed column address signal CA_A<0:4> as the internal column address signal ICA<0:4>. The third memory bank BK2may be accessed based on the internal column address signal ICA<0:4>. The second selection circuit1220B may receive the arithmetic operation signal ADD_OP, the third bank access control signal CASP<2> and the delayed bank access control signal CASP_A<2>, and may output an internal bank access control signal ICASP<2>. The second selection circuit1220B may output one of the third bank access control signal CASP<2> and the delayed bank access control signal CASP_A<2> as the internal bank access control signal ICASP<2> based on the arithmetic operation signal ADD_OP. When the arithmetic operation signal ADD_OP is disabled to a logic low level, the second selection circuit1220B may output the third bank access control signal CASP<2> as the internal bank access control signal ICASP<2>. When the arithmetic operation signal ADD_OP is enabled to a logic high level, the second selection circuit1220B may output the delayed bank access control signal CASP_A<2> as the internal bank access control signal ICASP<2>. The third memory bank BK2may be accessed based on the internal bank access control signal ICASP<2>. FIG.46is a timing diagram illustrating the operation method of the PIM device700B in accordance with the embodiment of the present disclosure. The operation method of the PIM device700B in accordance with the embodiment of the present disclosure will be described below with reference toFIGS.40to46. The PIM device700B may store elements of first and second matrices in the first and second memory banks BK0and BK1, respectively, to perform an element-wise arithmetic operation. When all the elements of the first and second matrices are stored in the first and second memory banks BK0and BK1, the PIM device700B may generate an active signal ACT and a row address signal ADDR_R based on the external command signal E_CMD and the input address signal I_ADDR for performing an active operation. 
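For purposes of illustration only, the selection performed by the first and second selection circuits 1210B and 1220B described above with reference to FIG.45 may be modeled by the following behavioral sketch in Python. The function name and argument names are illustrative and are not part of the disclosed circuitry.

# Hypothetical model of the two selection circuits in the Y-decoder/I/O circuit
# of the third memory bank BK2: when ADD_OP is enabled, the delayed signals from
# the MAC operator are selected; otherwise the normal column-path signals are used.
def select_internal_signals(add_op: bool, ca: int, ca_a: int,
                            casp2: bool, casp_a2: bool) -> tuple:
    ica = ca_a if add_op else ca          # internal column address ICA<0:4>
    icasp2 = casp_a2 if add_op else casp2 # internal bank access control ICASP<2>
    return ica, icasp2

# During an element-wise addition (ADD_OP enabled), BK2 follows the delayed path.
print(select_internal_signals(add_op=True, ca=5, ca_a=0, casp2=False, casp_a2=True))
# During a normal access (ADD_OP disabled), BK2 follows CA<0:4> and CASP<2>.
print(select_internal_signals(add_op=False, ca=5, ca_a=0, casp2=True, casp_a2=False))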
The external command signal E_CMD and the input address signal I_ADDR may be input to the PIM device700B in synchronization with a clock signal CLK. Rows with the same order among the plurality of rows of the first to third memory banks BK0, BK1and BK2may be enabled based on the active signal ACT and the row address signal ADDR_R. When a time corresponding to tRCD elapses after the first to third memory banks BK0, BK1and BK2are activated and the external command signal E_CMD instructing the active operation is received, a first external command signal E_CMD and a first input address signal I_ADDR for performing the element-wise arithmetic operation may be input to the PIM device700B. The tRCD may be defined by a time interval during which a column command signal is input after a row command signal is input. The external command signal E_CMD for performing the active operation may be included in the row command signal, and the external command signal E_CMD for performing the element-wise arithmetic operation may be included in the column command signal. The command decoder750may generate a first calculation signal EWADD based on the first external command signal E_CMD, and the address latch760may output the first input address signal I_ADDR as a first column address signal ADDR_C<0:n>. The column control circuit770B may enable the arithmetic operation signal ADD_OP based on the calculation signal EWADD, may enable the first and second bank access signals CASP<0:1>, and may provide at least a part of the first column address signal ADDR_C<0:n> as a first bank column address signal CA<0:4> (CA0). A column that is coupled to an enabled row of the first memory bank BK0may be accessed based on the first first bank access control signal CASP<0> and the first bank column address signal CA0. For example, the bank column address signal CA<0:4> may include 5 bits, and 16 columns may be accessed based on the bank column address signal CA<0:4>. First to sixteenth columns may be accessed based on the first bank column address signal CA0. At the same time, a column that is coupled to an enabled row of the second memory bank BK1may be accessed based on the first second bank access control signal CASP<1> and the first bank column address signal CA0. Accordingly, 16-bit data A0corresponding to a first element of the first matrix may be read from the first memory bank BK0, and 16-bit data B0corresponding to a first element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0and BK1may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A0and B0corresponding to the first elements, respectively, of the first and second matrices. The data A0and B0corresponding to the first elements of the first and second matrices may be provided to the MAC operator MAC_B through the first and second bank I/O lines791and792. When a time corresponding to tCCD elapses, a second external command signal E_CMD and a second input address signal I_ADDR for performing the element-wise arithmetic operation may be received in the PIM device700B. The tCCD may be defined by a time interval during which another column command signal is input after one column command signal is input. The command decoder750may generate a second calculation signal EWADD based on the second external command signal E_CMD, and the address latch760may output the second input address signal I_ADDR as a second column address signal ADDR_C<0:n>. 
The column control circuit770B may second enable the first and second bank access control signals CASP<0:1> based on the second calculation signal EWADD, and may provide at least a part of the second column address signal ADDR_C<0:n> as a second bank column address signal CA<0:4> (CA1). Columns that are coupled to the enabled rows of the first and second memory banks BK0and BK1may be accessed based on the first and second bank access control signals CASP<0:1> and the second bank column address signal CA1. For example, seventeenth to 32nd columns may be accessed based on the second bank column address signal CA1. Accordingly, 16-bit data A1corresponding to a second element of the first matrix may be read from the first memory bank BK0, and 16-bit data B1corresponding to a second element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0and BK1may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A1and B1corresponding to the second elements of the first and second matrices. The data A1and B1corresponding to the second elements of the first and second matrices may be provided to the MAC operator MAC_B through the first and second bank I/O lines791and792. When a time corresponding to tCCD elapses, a third external command signal E_CMD and a third input address signal I_ADDR for performing the element-wise arithmetic operation may be received in the PIM device700B. The command decoder750may generate a third calculation signal EWADD based on the third external command signal E_CMD, and the address latch760may output the third input address signal I_ADDR as a third column address signal ADDR_C<0:n>. The column control circuit770B may third enable the first and second bank access control signals CASP<0:1> based on the third calculation signal EWADD, and may provide at least a part of the third column address signal ADDR_C<0:n> as a third bank column address signal CA<0:4> (CA2). Columns that are coupled to the enabled rows of the first and second memory banks BK0and BK1may be accessed based on the first and second bank access control signals CASP<0:1> and the third bank column address signal CA2. For example, 33rd to 48th columns may be accessed based on the third bank column address signal CA2. Accordingly, 16-bit data A2corresponding to a third element of the first matrix may be read from the first memory bank BK0, and 16-bit data B2corresponding to a third element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0and BK1may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A2and B2corresponding to the third elements of the first and second matrices. The data A2and B2corresponding to the third elements of the first and second matrices may be provided to the MAC operator MAC_B through the first and second bank I/O lines791and792. When a time corresponding to tCCD elapses, a fourth external command signal E_CMD and a fourth input address signal I_ADDR for performing the element-wise arithmetic operation may be received in the PIM device700B. The command decoder750may generate a fourth calculation signal EWADD based on the fourth external command signal E_CMD, and the address latch760may output the fourth input address signal I_ADDR as a fourth column address signal ADDR_C<0:n>.
The column control circuit770B may fourth enable the first and second bank access control signals CASP<0:1> based on the fourth calculation signal EWADD, and may provide at least a part of the fourth column address signal ADDR_C<0:n> as a fourth bank column address signal CA<0:4> (CA3). Columns that are coupled to the enabled rows of the first and second memory banks BK0and BK1may be accessed based on the first and second bank access control signals CASP<0:1> and the fourth bank column address signal CA3. For example, 49th to 64th columns may be accessed based on the fourth bank column address signal CA3. Accordingly, 16-bit data A3corresponding to a fourth element of the first matrix may be read from the first memory bank BK0, and 16-bit data B3corresponding to a fourth element of the second matrix may be read from the second memory bank BK1. The first and second memory banks BK0and BK1may enable the first and second data enable signals DEN<0:1>, respectively, while outputting the data A3and B3corresponding to the fourth elements of the first and second matrices. The data A3and B3corresponding to the fourth elements of the first and second matrices may be provided to the MAC operator MAC_B through the first and second bank I/O lines791and792. The MAC operator MAC_B may receive data, read from the first and second memory banks BK0and BK1, through the first and second bank I/O lines791and792, and may perform a calculation on the received data. The MAC operator MAC_B may receive the 16-bit data A0and B0, corresponding to the first elements of the first and second matrices, from the first and second memory banks BK0and BK1, respectively. The MAC operator MAC_B may generate a first arithmetic data Y0by performing only an addition on the 16-bit data A0and B0, corresponding to the first elements of the first and second matrices, based on the arithmetic operation signal ADD_OP, and may output the first arithmetic data Y0to the third memory bank BK2through the third bank I/O line793. When the predetermined time elapses after the first and second data enable signals DEN<0:1> are first received, the MAC operator MAC_B may enable the delayed bank access control signal CASP_A<2>. The MAC operator MAC_B may sequentially store the first to fourth bank column address signals CA0, CA1, CA2and CA3based on the first bank access control signal CASP<0>, and may output the first bank column address signal CA0as a first delayed column address signal CA_A<0:4> (CA_A0) when a first delayed bank access control signal CASP_A<2> is enabled. The third memory bank BK2may receive the first delayed bank access control signal CASP_A<2> and the first delayed column address signal CA_A0. A column that is coupled to an enabled row of the third memory bank BK2may be accessed based on the first delayed bank access control signal CASP_A<2> and the first delayed column address signal CA_A0. First to sixteenth columns may be accessed based on the first delayed column address signal CA_A0, and the first arithmetic data Y0as a first element of the third matrix may be written into the third memory bank BK2. The MAC operator MAC_B may receive the 16-bit data A1and B1, corresponding to the second elements of the first and second matrices, from the first and second memory banks BK0and BK1, respectively.
The MAC operator MAC_B may generate second arithmetic data Y1by performing only an addition on the 16-bit data A1and B1, corresponding to the second elements of the first and second matrices, based on the arithmetic operation signal ADD_OP, and may output the second arithmetic data Y1to the third memory bank BK2through the third bank I/O line793. When the predetermined time elapses after the first and second data enable signals DEN<0:1> are second received, the MAC operator MAC_B may second enable the delayed bank access control signal CASP_A<2>. The MAC operator MAC_B may output the second bank column address signal CA1as a second delayed column address signal CA_A<0:4> (CA_A1) when the second delayed bank access control signal CASP_A<2> is enabled. The third memory bank BK2may receive the second delayed bank access control signal CASP_A<2> and the second delayed column address signal CA_A1. A column that is coupled to the enabled row of the third memory bank BK2may be accessed based on the second delayed bank access control signal CASP_A<2> and the second delayed column address signal CA_A1. Seventeenth to 32nd columns may be accessed based on the second delayed column address signal CA_A1, and the second arithmetic data Y1as a second element of the third matrix may be written into the third memory bank BK2. When data that corresponds to all elements of the first and second matrices are read from the first and second memory banks BK0and BK1and all arithmetic data generated by the MAC operator MAC_B are written into the third memory bank BK2, the element-wise arithmetic operation of the PIM device700B may be ended. FIG.47is a diagram illustrating a configuration and an operation method of a PIM device1400in accordance with an embodiment of the present disclosure. Referring toFIG.47, the PIM device1400may perform an arithmetic operation. In particular, the PIM device1400may perform an element-wise arithmetic operation. The element-wise arithmetic operation may mean an operation of calculating respective elements of two matrices with the same size. For example, an element-wise addition operation may be performed as follows. The PIM device1400may add an element ‘1’ of a first row of a first matrix A[0:7] and an element ‘2’ of the first row of a second matrix B[0:7] to output an addition result of an element ‘3’ that is seen in the first row of a third matrix Y[0:7]. The PIM device1400may add an element ‘2’ of the second row of the first matrix A[0:7] and an element ‘3’ of the second row of the second matrix B[0:7] to output an addition result of an element ‘5’ that is seen in the second row of the third matrix Y[0:7]. The PIM device1400may add an element ‘3’ of the third row of the first matrix A[0:7] and an element ‘4’ of the third row of the second matrix B[0:7] to output an addition result of an element ‘7’ that is seen in the third row of the third matrix Y[0:7]. The PIM device1400may add an element ‘4’ of the fourth row of the first matrix A[0:7] and an element ‘5’ of the fourth row of the second matrix B[0:7] to output an addition result of an element ‘9’ that is seen in the fourth row of the third matrix Y[0:7].
In the same manner, the PIM device1400may add elements ‘5,’ ‘6,’ ‘7,’ and ‘8’ of fifth to eighth rows of the first matrix A[0:7] and elements ‘6,’ ‘7,’ ‘8’ and ‘9’ of fifth to eighth rows of the second matrix B[0:7], respectively, to output addition results of elements ‘11,’ ‘13,’ ‘15,’ and ‘17,’ respectively, seen in the fifth to eighth rows of the third matrix Y[0:7]. For the sake of clarity in explanation, it is illustrated that each of the first to third matrices A[0:7], B[0:7] and Y[0:7] includes only elements of a plurality of rows. However, the spirit of the present disclosure may be applied to cases in which each of the first to third matrices A[0:7], B[0:7] and Y[0:7] includes elements of a plurality of columns or a plurality of rows and columns. Hereinafter, the elements of the first to eighth rows may be described as first to eighth elements, respectively. The PIM device1400may include a plurality of MAC units. One MAC unit may include a plurality of first storage regions and an MAC operator MAC. The plurality of first storage regions may be memory banks for storing data. The plurality of first storage regions may include a plurality of memory banks. The MAC operator MAC may be coupled to the plurality of memory banks, and may perform an arithmetic operation on data that is output from the plurality of memory banks. The MAC operator MAC may store result data of the arithmetic operation in a memory bank. For example, in order to perform the element-wise addition operation, one MAC operator MAC may be coupled to at least two memory banks. The at least two memory banks and the MAC operator MAC may configure one MAC unit. InFIG.47, first and second memory banks BK0and BK1are illustrated, and the first and second memory banks BK0and BK1and the MAC operator MAC may configure one MAC unit. However, the present disclosure is not limited thereto, and the number of memory banks configuring one MAC unit may be variously changed. Each of the first and second memory banks BK0and BK1may include a plurality of rows and a plurality of columns, and a plurality of memory cells may be coupled to points at which the plurality of rows and the plurality of columns intersect with each other. In order to perform the element-wise addition operation, the first matrix A[0:7] and the second matrix B[0:7] may be merged, and a merge matrix AB[0:15] may be generated as the first and second matrices A[0:7] and B[0:7] are merged. The merge matrix AB[0:15] may include elements which are obtained as elements with the same orders among the elements of the first and second matrices A[0:7] and B[0:7] are merged. By the merging, the first element ‘1’ of the first matrix A[0:7] may become a first element of the merge matrix AB[0:15], and the first element ‘2’ of the second matrix B[0:7] may become a second element of the merge matrix AB[0:15]. The second element ‘2’ of the first matrix A[0:7] may become a third element of the merge matrix AB[0:15], and the second element ‘3’ of the second matrix B[0:7] may become a fourth element of the merge matrix AB[0:15]. The third element ‘3’ of the first matrix A[0:7] may become a fifth element of the merge matrix AB[0:15], and the third element ‘4’ of the second matrix B[0:7] may become a sixth element of the merge matrix AB[0:15]. The fourth element ‘4’ of the first matrix A[0:7] may become a seventh element of the merge matrix AB[0:15], and the fourth element ‘5’ of the second matrix B[0:7] may become an eighth element of the merge matrix AB[0:15]. 
The fifth element ‘5’ of the first matrix A[0:7] may become a ninth element of the merge matrix AB[0:15], and the fifth element ‘6’ of the second matrix B[0:7] may become a tenth element of the merge matrix AB[0:15]. The sixth element ‘6’ of the first matrix A[0:7] may become an eleventh element of the merge matrix AB[0:15], and the sixth element ‘7’ of the second matrix B[0:7] may become a twelfth element of the merge matrix AB[0:15]. The seventh element ‘7’ of the first matrix A[0:7] may become a thirteenth element of the merge matrix AB[0:15], and the seventh element ‘8’ of the second matrix B[0:7] may become a fourteenth element of the merge matrix AB[0:15]. The eighth element ‘8’ of the first matrix A[0:7] may become a fifteenth element of the merge matrix AB[0:15], and the eighth element ‘9’ of the second matrix B[0:7] may become a sixteenth element of the merge matrix AB[0:15]. The merge matrix AB[0:15] may be generated by an external device (not illustrated) which communicates with the PIM device1400. The external device may be controlled to generate the merge matrix AB[0:15] by merging the first and second matrices A[0:7] and B[0:7] and to transmit data that corresponds to the elements of the merge matrix AB[0:15] to the PIM device1400, so that the PIM device1400may store the data that corresponds to the elements of the merge matrix AB[0:15]. Alternatively, in an embodiment, the merge matrix AB[0:15] may be generated by a control circuit (not illustrated) included in the PIM device1400. The control circuit may be programmed with software for generating the merge matrix AB[0:15] by merging the first and second matrices A[0:7] and B[0:7]. The control circuit may receive data that corresponds to the elements of the first and second matrices A[0:7] and B[0:7] from the external device, and may generate a series of data that corresponds to the elements of the merge matrix AB[0:15] by merging the received data. The PIM device1400may store data, corresponding to the first to sixteenth elements ‘1,’ ‘2,’ ‘2,’ ‘3,’ ‘3,’ ‘4,’ ‘4,’ ‘5,’ ‘5,’ ‘6,’ ‘6,’ ‘7,’ ‘7,’ ‘8,’ ‘8’ and ‘9’ of the merge matrix AB[0:15], in the first memory bank BK0. The PIM device1400may independently store elements with the same order (that is, a pair of elements with the same order) of the first and second matrices A[0:7] and B[0:7] among the elements of the merge matrix AB[0:15], in a storage space which can be read based on a single command signal. For example, the PIM device1400may store the first and second elements ‘1’ and ‘2’ of the merge matrix AB[0:15], corresponding to the first elements of the first and second matrices A[0:7] and B[0:7], in a first storage space S11of the first memory bank BK0, may store the third and fourth elements ‘2’ and ‘3’ of the merge matrix AB[0:15], corresponding to the second elements of the first and second matrices A[0:7] and B[0:7], in a second storage space S12of the first memory bank BK0, and may store the fifth and sixth elements ‘3’ and ‘4’ of the merge matrix AB[0:15], corresponding to the third elements of the first and second matrices A[0:7] and B[0:7], in a third storage space S13of the first memory bank BK0. Although not illustrated, elements of the merge matrix AB[0:15] corresponding to elements with the same order of the first and second matrices A[0:7] and B[0:7] may be independently stored in an allocated storage space of the first memory bank BK0. 
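For purposes of illustration only, the merging of the first and second matrices A[0:7] and B[0:7] into the merge matrix AB[0:15] described above may be sketched in Python as follows. The function name and the list-based representation are illustrative and do not limit how the external device or the control circuit generates the merge matrix.

# Illustrative sketch (not the disclosed implementation) of forming the merge
# matrix AB[0:15] by interleaving same-order elements of A[0:7] and B[0:7], so
# that each pair lands in one storage space readable with a single command.
A = [1, 2, 3, 4, 5, 6, 7, 8]          # first matrix A[0:7], as in FIG. 47
B = [2, 3, 4, 5, 6, 7, 8, 9]          # second matrix B[0:7], as in FIG. 47

def merge_matrices(a: list, b: list) -> list:
    """Interleave same-order elements: [a0, b0, a1, b1, ...]."""
    merged = []
    for a_elem, b_elem in zip(a, b):
        merged.extend((a_elem, b_elem))
    return merged

AB = merge_matrices(A, B)
print(AB)        # [1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9]
# Consecutive pairs of AB correspond to the storage spaces S11, S12, S13, ...
storage_spaces = [AB[i:i + 2] for i in range(0, len(AB), 2)]
print(storage_spaces[0], storage_spaces[1], storage_spaces[2])   # [1, 2] [2, 3] [3, 4]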
The PIM device1400may read data that is stored in the first memory bank BK0, and may provide the read data to the MAC operator MAC. The PIM device1400may control data, corresponding to the elements with the same orders of the first and second matrices A[0:7] and B[0:7], to be sequentially output from the first memory bank BK0. The PIM device1400may read data that is stored in one of a plurality of storage spaces of the first memory bank BK0, during an operation that is performed based on a single command signal. For example, during a first operation that is performed based on the single command signal, the PIM device1400may output data that is stored in the first storage space S11among data that is stored in the first memory bank BK0. Thereafter, during a second operation that is performed based on the single command signal, the PIM device1400may output data that is stored in the second storage space S12among the data that is stored in the first memory bank BK0. Thereafter, during a third operation that is performed based on the single command signal, the PIM device1400may output data that is stored in the third storage space S13among the data that is stored in the first memory bank BK0. The PIM device1400may control data that corresponds to the respective fourth to eighth elements of the first and second matrices A[0:7] and B[0:7] and corresponding to elements of the merge matrix AB[0:15], to be sequentially output from the first memory bank BK0. The MAC operator MAC may perform an arithmetic operation on data that is output from the first memory bank BK0. The MAC operator MAC may add data that is output from the first memory bank BK0. The MAC operator MAC may sequentially add data that is output from the first memory bank BK0. The MAC operator MAC may receive the data, stored in the first storage space S11, from the first memory bank BK0, and may generate arithmetic data by adding the received data. The arithmetic data may be data that corresponds to the first element ‘3’ of the third matrix Y[0:7]. The MAC operator MAC may receive the data, stored in the second storage space S12, from the first memory bank BK0, and may generate arithmetic data by adding the received data. The arithmetic data may be data that corresponds to the second element ‘5’ of the third matrix Y[0:7]. The MAC operator MAC may receive the data, stored in the third storage space S13, from the first memory bank BK0, and may generate arithmetic data by adding the received data. The arithmetic data may be data that corresponds to the third element ‘7’ of the third matrix Y[0:7]. In the same manner, the MAC operator MAC may sequentially receive data that is stored in a plurality of storage spaces of the first memory bank BK0(that is, data that corresponds to the seventh and eighth elements, the ninth and tenth elements, the eleventh and twelfth elements, the thirteenth and fourteenth elements, and the fifteenth and sixteenth elements of the merge matrix AB[0:15]), and may generate a plurality of arithmetic data by adding the received data. The plurality of arithmetic data may be data that corresponds to the fourth to eighth elements ‘9,’ ‘11,’ ‘13,’ ‘15’ and ‘17’ of the third matrix Y[0:7]. The MAC operator MAC may provide the arithmetic data to the second memory bank BK1, and the arithmetic data may be written into the second memory bank BK1. 
The second memory bank BK1may sequentially receive the arithmetic data, corresponding to the first to eighth elements ‘3,’ ‘5,’ ‘7,’ ‘9,’ ‘11,’ ‘13,’ ‘15’ and ‘17’ of the third matrix Y[0:7], from the MAC operator MAC, and the arithmetic data may be sequentially stored in the second memory bank BK1. The PIM device1400may complete the element-wise arithmetic operation by writing the arithmetic data to the second memory bank BK1. The PIM device1400may independently store the elements of the third matrix Y[0:7] in storage spaces of the second memory bank BK1corresponding to the storage spaces in which the elements of the merge matrix AB[0:15] are independently stored in the first memory bank BK0. For example, the PIM device1400may store arithmetic data, corresponding to the first element ‘3’ of the third matrix Y[0:7], in a first storage space S21of the second memory bank BK1, may store arithmetic data, corresponding to the second element ‘5’ of the third matrix Y[0:7], in a second storage space S22of the second memory bank BK1, and may store arithmetic data, corresponding to the third element ‘7’ of the third matrix Y[0:7], in a third storage space S23of the second memory bank BK1. The first to third storage spaces S11, S12, S13, S21, S22and S23of the first and second memory banks BK0and BK1may be specified as a row with the same order and columns with the same orders. For example, when data that corresponds to the elements of the merge matrix AB[0:15] are stored in a first row of the first memory bank BK0, the elements of the third matrix Y[0:7] may be stored in a first row of the second memory bank BK1. When the first and second elements of the merge matrix AB[0:15] are stored in a first column that is coupled to the first row, the first element of the third matrix Y[0:7] may be stored in a first column that is coupled to the first row of the second memory bank BK1. When the third and fourth elements of the merge matrix AB[0:15] are stored in a second column that is coupled to the first row, the second element of the third matrix Y[0:7] may be stored in a second column that is coupled to the first row of the second memory bank BK1. In the same manner, the fifth and sixth elements of the merge matrix AB[0:15] and the third element of the third matrix Y[0:7] may be stored in third columns that are coupled to the first rows of the first and second memory banks BK0and BK1, and the seventh and eighth elements of the merge matrix AB[0:15] and the fourth element of the third matrix Y[0:7] may be stored in fourth columns that are coupled to the first rows of the first and second memory banks BK0and BK1. The ninth and tenth elements of the merge matrix AB[0:15] and the fifth element of the third matrix Y[0:7] may be stored in fifth columns that are coupled to the first rows of the first and second memory banks BK0and BK1, and the eleventh and twelfth elements of the merge matrix AB[0:15] and the sixth element of the third matrix Y[0:7] may be stored in sixth columns that are coupled to the first rows of the first and second memory banks BK0and BK1. 
The thirteenth and fourteenth elements of the merge matrix AB[0:15] and the seventh element of the third matrix Y[0:7] may be stored in seventh columns that are coupled to the first rows of the first and second memory banks BK0and BK1, and the fifteenth and sixteenth elements of the merge matrix AB[0:15] and the eighth element of the third matrix Y[0:7] may be stored in eighth columns that are coupled to the first rows of the first and second memory banks BK0and BK1. Each of the first to eighth columns may include a plurality of columns. FIG.48is a flow chart illustrating an operation method of the PIM device1400in accordance with an embodiment of the present disclosure. The operation method of the PIM device1400will be described below with reference toFIGS.47and48. In order for the PIM device1400to perform an element-wise arithmetic operation, at step S481, the merge matrix AB[0:15] may be generated as the elements with the same orders of the first matrix A[0:7] and the second matrix B[0:7] are merged by the external device or the control circuit. Pairs of elements with the same orders of the first and second matrices A[0:7] and B[0:7] may sequentially configure the elements of the merge matrix AB[0:15]. At step S482, the PIM device1400may receive data that corresponds to elements of the merge matrix AB[0:15], and may write the data to a first target memory bank. The first target memory bank may be the first memory bank BK0. The PIM device1400may activate the first target memory bank and enable a specific row (e.g., a first row) of the first target memory bank. The PIM device1400may access a first column that is coupled to the first row, and may write the first and second elements '1' and '2' of the merge matrix AB[0:15] to the first storage space S11which is specified by the first row and the first column. At step S483, the PIM device1400may determine whether all the elements of the merge matrix AB[0:15] have been written into the first target memory bank. If all the elements of the merge matrix AB[0:15] have not been written (No of the step S483), the steps S481and S482may be repeatedly performed, and the PIM device1400may sequentially write data, corresponding to elements of the merge matrix AB[0:15], to the first target memory bank. The PIM device1400may sequentially access second to eighth columns that are coupled to the first row of the first target memory bank, and may sequentially write data, corresponding to elements of the merge matrix AB[0:15], to a plurality of storage spaces specified by the first row and the second to eighth columns. If all the elements of the merge matrix AB[0:15] have been written (Yes of the step S483), the process may proceed to step S484. At the step S484, the PIM device1400may sequentially read the elements of the merge matrix AB[0:15] from the first target memory bank. The PIM device1400may activate the first target memory bank, and may enable a specific row of the first target memory bank. Also, the PIM device1400may activate a second target memory bank, and may enable a specific row of the second target memory bank. The second target memory bank may be the second memory bank BK1. The second target memory bank may be activated simultaneously with the first target memory bank, or may be sequentially activated after the first target memory bank is activated. 
The PIM device1400may sequentially access columns of the first target memory bank, and may read data, corresponding to the elements of the merge matrix AB[0:15], from storage spaces specified by the row and the columns. At step S485, the PIM device1400may generate arithmetic data by performing an arithmetic operation on data that is read from the first target memory bank. The PIM device1400may generate the arithmetic data by adding data that corresponds to two elements among the elements of the merge matrix AB[0:15] read from the first memory bank BK0. The arithmetic data, as a result of calculating data, corresponding to the first and second elements of the merge matrix AB[0:15], by the PIM device1400, may be the first element of the third matrix Y[0:7]. At step S486, the PIM device1400may determine whether data that corresponds to all the elements of the merge matrix AB[0:15] have been read. If data that corresponds to all the elements have not been read (No of the step S486), the steps S484and S485may be repeatedly performed. The PIM device1400may sequentially read data, corresponding to the third to sixteenth elements of the merge matrix AB[0:15], from the first memory bank BK0, and may generate arithmetic data by performing an arithmetic operation on the read data. The arithmetic data may be the second to eighth elements, respectively, of the third matrix Y[0:7]. If data that corresponds to all the elements have been read (Yes of the step S486), the process may proceed to step S488to be described below. Step S487may be performed in parallel with the step S486. At the step S487, the PIM device1400may provide the arithmetic data, generated at the step S485, to the second target memory bank, and may write the arithmetic data to the second target memory bank. At the step S488, the PIM device1400may determine whether arithmetic data for all the elements of the merge matrix AB[0:15] (that is, all the elements of the third matrix Y[0:7]) have been written into the second target memory bank. If arithmetic data that corresponds to all the elements of the third matrix Y[0:7] have not been written into the second target memory bank (No of the step S488), the steps S487and S488may be repeatedly performed. Each time arithmetic data are sequentially generated at the step S487, the PIM device1400may sequentially write the arithmetic data to the second target memory bank. The arithmetic data may be stored, in the second memory bank BK1, in storage spaces corresponding to the storage spaces of the first memory bank BK0, in which the elements of the merge matrix AB[0:15] are stored. Arithmetic data (that is, the first element of the third matrix Y[0:7]) that is generated by adding the first and second elements of the merge matrix AB[0:15] may be stored in the first storage space S21specified by a first row and a first column of the second target memory bank. Arithmetic data (that is, the second to eighth elements of the third matrix Y[0:7]) generated by adding the third to sixteenth elements of the merge matrix AB[0:15] may be stored in storage spaces specified by second to eighth columns that are coupled to the first row of the second target memory bank. If arithmetic data for all the elements have been written into the second target memory bank (Yes of the step S488), the element-wise arithmetic operation of the PIM device1400may be ended. 
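The flow of FIG.48 (steps S481 to S488) can be summarized with the following behavioral sketch, in which the first and second target memory banks are modeled as Python lists. This is an assumed software analogy for illustration only; it does not reflect the command, activation, or timing behavior of the actual device, and all names are hypothetical.

```python
# Illustrative sketch of the operation flow of FIG. 48 (steps S481 to S488),
# modeling the memory banks as simple Python lists.

def element_wise_operation(a, b):
    # Step S481: merge same-order elements of the two matrices.
    merge_matrix = []
    for a_elem, b_elem in zip(a, b):
        merge_matrix.extend([a_elem, b_elem])

    # Steps S482/S483: write all elements of the merge matrix to the first
    # target memory bank (BK0), one storage space per element pair.
    bank0 = []
    for i in range(0, len(merge_matrix), 2):
        bank0.append(merge_matrix[i:i + 2])

    # Steps S484 to S488: read each storage space, add the pair (S485), and
    # write the arithmetic data to the same-order storage space of the
    # second target memory bank (S487) until all elements are processed.
    bank1 = []
    for pair in bank0:
        bank1.append(pair[0] + pair[1])
    return bank1

print(element_wise_operation([1, 2, 3, 4, 5, 6, 7, 8],
                             [2, 3, 4, 5, 6, 7, 8, 9]))
# [3, 5, 7, 9, 11, 13, 15, 17]
```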
FIG.49is a diagram illustrating a configuration of a PIM device1500in accordance with an embodiment of the present disclosure and an external device1501coupled to the PIM device1500. The PIM device1500may include the same or similar components as or to those of the PIM device700A illustrated inFIG.33, and repeated descriptions for the same components will be omitted herein. Referring toFIG.49, the PIM device1500may perform an arithmetic operation by being coupled to the external device1501. The PIM device1500may receive an external command signal E_CMD, an input address signal I_ADDR and data DA from the external device1501, and may perform an arithmetic operation on the received data. The PIM device1500may output arithmetic data, generated through the arithmetic operation, to the external device1501. Referring toFIG.49, the PIM device1500may include an MAC unit. The MAC unit may include a plurality of memory banks and an MAC operator MAC. The MAC unit may include at least a first memory bank BK0and a second memory bank BK1. Each of the first and second memory banks BK0and BK1may include a Y-decoder/I/O circuit YDEC/IO and an X-decoder XDEC. Each of the first and second memory banks BK0and BK1may be accessed through the X-decoder XDEC and the Y-decoder/I/O circuit YDEC/IO. The first memory bank BK0may be accessed based on a first bank access control signal CASP<0> and a bank column address signal CA<0:4>. The first bank access control signal CASP<0> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0. The second memory bank BK1may be accessed based on a second bank access control signal CASP<1> and the bank column address signal CA<0:4>. The second bank access control signal CASP<1> and the bank column address signal CA<0:4> may be provided to the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1. In the MAC unit, it may be prescribed that data that corresponds to an element of a merge matrix is stored in the first memory bank BK0. In the MAC unit, it may be prescribed that arithmetic data generated through an arithmetic operation on elements of the merge matrix (i.e., data that corresponds to elements of a third matrix) are stored in the second memory bank BK1. The MAC operator MAC may be coupled to the first and second memory banks BK0and BK1. The MAC operator MAC may be coupled to the first and second memory banks BK0and BK1through bank I/O lines791and792. The MAC operator MAC may be coupled to the first memory bank BK0through a first bank I/O line791. The MAC operator MAC may be coupled to the second memory bank BK1through a second bank I/O line792. The MAC operator MAC may receive data, output from the first memory bank BK0, through the first bank I/O line791, and may output arithmetic data, generated by an arithmetic operation, to the second memory bank BK1through the second bank I/O line792. The MAC operator MAC may perform an arithmetic operation on data that is output from the first memory bank BK0. In general, the MAC operator MAC may perform both multiplication and addition calculations. In order to allow the PIM device1500to perform an element-wise addition operation, the MAC operator MAC may perform only an addition calculation on data that is output from the first memory bank BK0. For example, the bank column address signal CA<0:4> may be a 5-bit signal, and one element may be mapped as 16-bit data. 
During a single write operation or a single read operation of the PIM device1500, the PIM device1500may write 256-bit data to the first memory bank BK0or read 256-bit data from the first memory bank BK0, based on the bank column address signal CA<0:4>. Two elements of the merge matrix may be mapped as total 32-bit data. Accordingly, the PIM device1500may perform an element-wise arithmetic operation on total 8 pairs of matrices. When the PIM device1500performs an element-wise arithmetic operation on two matrices, 32-bit data that corresponds to two elements of the merge matrix may be written into the first memory bank BK0through a single write operation, and the remaining 224-bit data may be written as 0. Among 256 bits that are output from the first memory bank BK0during a single read operation, 32-bit data may be data to which two elements of the merge matrix are mapped, and the remaining 224-bit data may be 0. However, the number of bits of data for mapping one element and the total number of bits of data to be stored in and output from the first and second memory banks BK0and BK1may be variously changed. The PIM device1500may include a column control circuit1570which controls the MAC unit to perform an element-wise arithmetic operation. The column control circuit1570may generate various control signals so that the MAC unit of the PIM device1500may perform an element-wise arithmetic operation. The column control circuit1570may receive a calculation signal TEWADD and a column address signal ADDR_C<0:n> (n is an arbitrary integer), and may generate an arithmetic operation signal TADD_OP, the bank access control signals CASP<0:1> and the bank column address signal CA<0:4> based on the calculation signal TEWADD and the column address signal ADDR_C<0:n>. The column control circuit1570may enable the first bank access control signal CASP<0> of the bank access control signals CASP<0:1> and the arithmetic operation signal TADD_OP based on the calculation signal TEWADD. The column control circuit1570may output at least a part of the column address signal ADDR_C<0:n> as the bank column address signal CA<0:4>. For example, the bank column address signal CA<0:4> may be a 5-bit signal. The MAC operator MAC may receive the arithmetic operation signal TADD_OP from the column control circuit1570. The MAC operator MAC may generate a delayed bank access control signal CASP_TA<1> based on the arithmetic operation signal TADD_OP and the first bank access control signal CASP<0>. The MAC operator MAC may generate a delayed column address signal CA_TA<0:4> based on the bank column address signal CA<0:4>. The MAC operator MAC may provide the delayed bank access control signal CASP_TA<1> and the delayed column address signal CA_TA<0:4> to the second memory bank BK1. The second memory bank BK1may be accessed based on the delayed bank access control signal CASP_TA<1> and the delayed column address signal CA_TA<0:4>. When the PIM device1500performs an element-wise addition operation, the second memory bank BK1may be accessed based on the delayed bank access control signal CASP_TA<1> and the delayed column address signal CA_TA<0:4> instead of the second bank access control signal CASP<1> and the bank column address signal CA<0:4>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may generate a first data enable signal DEN<0> based on the first bank access control signal CASP<0>. 
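The bit-level mapping described above can be pictured with a short sketch. The sketch assumes a particular packing order (the two 16-bit elements occupying the least significant 32 bits), which the text does not specify; it is intended only to illustrate that two 16-bit elements fill 32 bits of a 256-bit column transfer and that the remaining 224 bits are zero.

```python
# Illustrative sketch of the data mapping: one column access transfers 256
# bits, two 16-bit merge-matrix elements occupy 32 of those bits, and the
# remaining 224 bits are zero. The packing order is an assumption made only
# for illustration.

ELEMENT_BITS = 16
COLUMN_BITS = 256

def pack_column_word(elem_a, elem_b):
    """Pack two 16-bit elements into the low 32 bits of a 256-bit word."""
    assert 0 <= elem_a < (1 << ELEMENT_BITS)
    assert 0 <= elem_b < (1 << ELEMENT_BITS)
    word = elem_a | (elem_b << ELEMENT_BITS)   # 32 meaningful bits
    return word                                # upper 224 bits remain 0

def unpack_column_word(word):
    """Recover the two 16-bit elements from a 256-bit column word."""
    mask = (1 << ELEMENT_BITS) - 1
    return word & mask, (word >> ELEMENT_BITS) & mask

word = pack_column_word(1, 2)      # first and second merge-matrix elements
assert word.bit_length() <= COLUMN_BITS
assert unpack_column_word(word) == (1, 2)
```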
The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may generate the first data enable signal DEN<0> by delaying the first bank access control signal CASP<0>. The Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may provide the first data enable signal DEN<0> to the MAC operator MAC. The MAC operator MAC may further receive the first data enable signal DEN<0>. The MAC operator MAC may generate the delayed bank access control signal CASP_TA<1> based on the arithmetic operation signal TADD_OP and the first data enable signal DEN<0>. The MAC operator MAC may generate the delayed column address signal CA_TA<0:4> based on the arithmetic operation signal TADD_OP, the first data enable signal DEN<0> and the bank column address signal CA<0:4>. The PIM device1500may further include a receiving driver (RX)730, a data I/O circuit (DQ)740, a command decoder (CMD DECODER)750, an address latch760, a serializer/deserializer (SER/DES)780, and a global buffer1595. When the external command signal E_CMD has information for performing an element-wise arithmetic operation, the command decoder750may generate the calculation signal TEWADD by decoding the external command signal E_CMD. For example, when the external command signal E_CMD has information for performing an element-wise addition operation, the command decoder750may generate the calculation signal TEWADD by decoding the external command signal E_CMD. The global buffer1595may be coupled to the first and second memory banks BK0and BK1and the MAC operator MAC through a global I/O line790. The global buffer1595may provide data to the first and second memory banks BK0and BK1, and may store data that is output from the first and second memory banks BK0and BK1. The global buffer1595may provide data used for an arithmetic operation of the MAC operator MAC, and may store arithmetic data generated from the MAC operator MAC. The global buffer1595may receive the arithmetic operation signal TADD_OP. The global buffer1595may provide preset data to the MAC operator MAC based on the arithmetic operation signal TADD_OP. Descriptions will be made later for the preset data. FIG.50is a diagram illustrating at least a part of components of the column control circuit1570illustrated inFIG.49. Referring toFIG.50, the column control circuit1570may include an arithmetic operation signal generation circuit1610and an access signal generation circuit1620. The arithmetic operation signal generation circuit1610may receive the calculation signal TEWADD, and may generate the arithmetic operation signal TADD_OP based on the calculation signal TEWADD. The arithmetic operation signal generation circuit1610may further receive a reset signal RST and an idle signal IDLE. The arithmetic operation signal generation circuit1610may generate the arithmetic operation signal TADD_OP based on the calculation signal TEWADD, the reset signal RST and the idle signal IDLE. The arithmetic operation signal generation circuit1610may enable the arithmetic operation signal TADD_OP when the calculation signal TEWADD is enabled in a state in which the reset signal RST and the idle signal IDLE are disabled. The arithmetic operation signal generation circuit1610may disable the arithmetic operation signal TADD_OP when one of the reset signal RST and the idle signal IDLE is enabled in a state in which the arithmetic operation signal TADD_OP is enabled. The arithmetic operation signal generation circuit1610may be configured by a NOR type RS latch. 
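The set/reset behavior of the arithmetic operation signal generation circuit described above may be modeled, purely for illustration, as a small state-update function: TADD_OP is set when TEWADD is enabled while RST and IDLE are both disabled, and cleared when either RST or IDLE is enabled. The sketch captures only this latch behavior, not the NOR-gate implementation detailed below, and the function name is hypothetical.

```python
# Illustrative sketch of the latch behavior of the arithmetic operation
# signal generation circuit (set by TEWADD, cleared by RST or IDLE).

def update_tadd_op(tadd_op, tewadd, rst, idle):
    """Return the next state of the arithmetic operation signal TADD_OP."""
    if rst or idle:
        return False          # reset condition: disable TADD_OP
    if tewadd:
        return True           # set condition: enable TADD_OP
    return tadd_op            # otherwise hold the current state

state = False
state = update_tadd_op(state, tewadd=True, rst=False, idle=False)   # set
assert state is True
state = update_tadd_op(state, tewadd=False, rst=False, idle=True)   # reset
assert state is False
```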
The arithmetic operation signal generation circuit1610may include a first NOR gate1611and a second NOR gate1612. A first input terminal of the first NOR gate1611may receive the reset signal RST, a second input terminal of the first NOR gate1611may receive the idle signal IDLE, and a third input terminal of the first NOR gate1611may receive a signal output from an output terminal of the second NOR gate1612. The arithmetic operation signal TADD_OP may be output through an output terminal of the first NOR gate1611. A first input terminal of the second NOR gate1612may receive the arithmetic operation signal TADD_OP, and a second input terminal of the second NOR gate1612may receive the calculation signal TEWADD. The output terminal of the second NOR gate1612may be coupled to the third input terminal of the first NOR gate1611. When the calculation signal TEWADD is enabled to a logic high level in a state in which the reset signal RST and the idle signal IDLE are disabled to logic low levels, a signal with a logic low level may be input to the third input terminal of the first NOR gate1611, and thus, the arithmetic operation signal TADD_OP may be enabled to a logic high level. In a state in which the arithmetic operation signal TADD_OP is enabled to a logic high level, when at least one of the reset signal RST and the idle signal IDLE is enabled to a logic high level, the arithmetic operation signal TADD_OP may be disabled to a logic low level. The access signal generation circuit1620may receive the calculation signal TEWADD, and may generate the first bank access control signal CASP<0> based on the calculation signal TEWADD. When the calculation signal TEWADD is enabled, the access signal generation circuit1620may enable the first bank access control signal CASP<0>. By enabling the first bank access control signal CASP<0>, the access signal generation circuit1620may cause the first memory bank BK0to be accessed. FIG.51is a diagram illustrating a configuration of an arithmetic circuit1700among components of the MAC operator MAC illustrated inFIG.49. Referring toFIG.51, the arithmetic circuit1700may perform a multiplication-accumulative addition calculation on input data, and may output a multiplication-accumulative addition calculation result. The arithmetic circuit1700may include a plurality of multipliers, a plurality of adders and an accumulator. Each of the plurality of multipliers may receive allocated data, and the number of the plurality of multipliers may vary depending on the number of bits of the allocated data. For example, the MAC operator MAC may include 16 multipliers to each perform an arithmetic operation on 16 elements. A first multiplier1710-1may receive first to sixteenth bit data A<0:15> output from the first memory bank BK0and first to sixteenth bit data that is output from a memory bank different from the first memory bank BK0or the global buffer1595. The first multiplier1710-1may multiply the first to sixteenth bit data A<0:15> that is output from the first memory bank BK0and the data that is output from the different memory bank or the global buffer1595. 
A second multiplier1710-2may receive seventeenth to 32nd bit data A<16:31> that is output from the first memory bank BK0and seventeenth to 32nd bit data that is output from the different memory bank or the global buffer1595, and may multiply the seventeenth to 32nd bit data A<16:31> that is output from the first memory bank BK0and the seventeenth to 32nd bit data that is output from the different memory bank or the global buffer1595. A third multiplier1710-3may receive 33rd to 48th bit data A<32:47> that is output from the first memory bank BK0and 33rd to 48th bit data that is output from the different memory bank or the global buffer1595, and may multiply the 33rd to 48th bit data A<32:47> that is output from the first memory bank BK0and the 33rd to 48th bit data that is output from the different memory bank or the global buffer1595. A fourth multiplier1710-4may receive 49th to 64th bit data A<48:63> that is output from the first memory bank BK0and 49th to 64th bit data that is output from the different memory bank or the global buffer1595, and may multiply the 49th to 64th bit data A<48:63> that is output from the first memory bank BK0and the 49th to 64th bit data that is output from the different memory bank or the global buffer1595. A thirteenth multiplier1710-13may receive 193rd to 208th bit data A<192:207> that is output from the first memory bank BK0and 193rd to 208th bit data that is output from the different memory bank or the global buffer1595, and may multiply the 193rd to 208th bit data A<192:207> that is output from the first memory bank BK0and the 193rd to 208th bit data that is output from the different memory bank or the global buffer1595. A fourteenth multiplier1710-14may receive 209th to 224th bit data A<208:223> that is output from the first memory bank BK0and 209th to 224th bit data that is output from the different memory bank or the global buffer1595, and may multiply the 209th to 224th bit data A<208:223> that is output from the first memory bank BK0and the 209th to 224th bit data that is output from the different memory bank or the global buffer1595. A fifteenth multiplier1710-15may receive 225th to 240th bit data A<224:239> that is output from the first memory bank BK0and 225th to 240th bit data that is output from the different memory bank or the global buffer1595, and may multiply the 225th to 240th bit data A<224:239> that is output from the first memory bank BK0and the 225th to 240th bit data that is output from the different memory bank or the global buffer1595. 
A sixteenth multiplier1710-16may receive 241st to 256th bit data A<240:255> that is output from the first memory bank BK0and 241st to 256th bit data that is output from the different memory bank or the global buffer1595, and may multiply the 241st to 256th bit data A<240:255> that is output from the first memory bank BK0and the 241st to 256th bit data that is output from the different memory bank or the global buffer1595. In order to ensure that the arithmetic circuit1700performs only an addition operation, the plurality of multipliers1710-1,1710-2,1710-3,1710-4, . . . ,1710-13,1710-14,1710-15, and1710-16may receive the data A<0:255> that is output from the first memory bank BK0and data with the value of '1.' Therefore, the plurality of multipliers1710-1,1710-2,1710-3,1710-4, . . . ,1710-13,1710-14,1710-15, and1710-16may output data with the same value as the data A<0:255> that is output from the first memory bank BK0. The global buffer1595may receive the arithmetic operation signal TADD_OP, and may provide data with the value of '1' to the plurality of multipliers1710-1,1710-2,1710-3,1710-4, . . . ,1710-13,1710-14,1710-15, and1710-16, based on the arithmetic operation signal TADD_OP. The MAC operator MAC may include 15 adders. A first adder1730-1may receive data that is output from the first and second multipliers1710-1and1710-2, and may add the data that is output from the first and second multipliers1710-1and1710-2. A second adder1730-2may receive data that is output from the third and fourth multipliers1710-3and1710-4, and may add the data that is output from the third and fourth multipliers1710-3and1710-4. A seventh adder1730-7may receive data that is output from the thirteenth and fourteenth multipliers1710-13and1710-14, and may add the data that is output from the thirteenth and fourteenth multipliers1710-13and1710-14. An eighth adder1730-8may receive data that is output from the fifteenth and sixteenth multipliers1710-15and1710-16, and may add the data that is output from the fifteenth and sixteenth multipliers1710-15and1710-16. The first to eighth adders1730-1,1730-2, . . . ,1730-7and1730-8may be floating point adders. A ninth adder1730-9may receive data that is output from the first and second adders1730-1and1730-2, and may add the data that is output from the first and second adders1730-1and1730-2. A twelfth adder1730-12may receive data that is output from the seventh and eighth adders1730-7and1730-8, and may add the data that is output from the seventh and eighth adders1730-7and1730-8. A fifteenth adder1730-15may receive data that is output from thirteenth and fourteenth adders (not illustrated), and may add the data that is output from the thirteenth and fourteenth adders. An accumulator1740may receive and store data that is output from the fifteenth adder1730-15. The accumulator1740may add data, newly output from the fifteenth adder1730-15, to a stored data value each time an update signal UPDATE is enabled, and may store added data again. The accumulator1740may include one adder1741and an updater1742. The adder1741may receive data that is output from the fifteenth adder1730-15, and may store the received data. The adder1741may output stored data to the updater1742. 
The adder1741may receive data that is output from the updater1742, and may add the data that is output from the updater1742and the data that is output from the fifteenth adder1730-15. The updater1742may be implemented by a flip-flop FF. An input terminal of the flip-flop FF may receive an output of the adder1741, and a clock terminal of the flip-flop FF may receive the update signal UPDATE. An output terminal of the flip-flop FF may be coupled to the adder1741, and the adder1741may receive data that is output through the output terminal of the flip-flop FF. The input terminal of the flip-flop FF may be coupled to an output terminal OUT of the arithmetic circuit1700. When performing the element-wise addition operation, the arithmetic circuit1700may output data, output from the fifteenth adder1730-15, as the arithmetic data. The arithmetic circuit1700may generate arithmetic data Y<0:15> with at least 16 bits each time an addition operation on data that is output from the first memory bank BK0is performed. For example, the arithmetic circuit1700may output data, output from the fifteenth adder1730-15, as the arithmetic data to the second memory bank BK1based on the arithmetic operation signal TADD_OP. FIG.52is a diagram illustrating a part among the components of the MAC operator MAC configured inFIG.49. Referring toFIG.52, the MAC operator MAC may include a write control circuit1800. The write control circuit1800may generate control signals for writing arithmetic data, generated through an arithmetic operation of the MAC operator MAC, to the second memory bank BK1. The write control circuit1800may generate the delayed bank access control signal CASP_TA<1> and the delayed column address signal CA_TA<0:4> based on the arithmetic operation signal TADD_OP, the first data enable signal DEN<0> and the bank column address signal CA<0:4>. The write control circuit1800may include an access control circuit1810and an address control circuit1820. The access control circuit1810may generate the delayed bank access control signal CASP_TA<1> based on the arithmetic operation signal TADD_OP and the first data enable signal DEN<0>. The access control circuit1810may generate a write start signal WTS based on the arithmetic operation signal TADD_OP and the first data enable signal DEN<0>, and may generate a delayed write start signal WTSD by delaying the write start signal WTS by a predetermined time. The predetermined time may be a time during which the MAC operator MAC performs an arithmetic operation, and may correspond to a time from when the MAC operator MAC receives data that is output from the first memory bank BK0until the MAC operator MAC outputs arithmetic data to the second memory bank BK1. The access control circuit1810may generate the delayed bank access control signal CASP_TA<1> each time the delayed write start signal WTSD is generated. The access control circuit1810may include a write start signal generation circuit1811, a first delay circuit (DELAY)1812and a delayed access signal generation circuit1813. The write start signal generation circuit1811may generate the write start signal WTS by receiving the first data enable signal DEN<0> and the arithmetic operation signal TADD_OP. The write start signal generation circuit1811may enable the write start signal WTS each time the first data enable signal DEN<0> is enabled in a state in which the arithmetic operation signal TADD_OP is enabled. 
The write start signal generation circuit1811may include an AND gate which outputs the write start signal WTS by AND-gating the first data enable signal DEN<0> and the arithmetic operation signal TADD_OP. The first delay circuit1812may generate the delayed write start signal WTSD by delaying the write start signal WTS by the predetermined time. The delayed access signal generation circuit1813may receive the delayed write start signal WTSD, and may generate the delayed bank access control signal CASP_TA<1> based on the delayed write start signal WTSD. The delayed access signal generation circuit1813may be implemented by a pulse generator. The address control circuit1820may generate the delayed column address signal CA_TA<0:4> by delaying the bank column address signal CA<0:4>. The address control circuit1820may receive the arithmetic operation signal TADD_OP, the bank column address signal CA<0:4>, the first bank access control signal CASP<0> and the delayed bank access control signal CASP_TA<1>. The address control circuit1820may generate the delayed column address signal CA_TA<0:4> based on the arithmetic operation signal TADD_OP, the bank column address signal CA<0:4>, the first bank access control signal CASP<0> and the delayed bank access control signal CASP_TA<1>. The address control circuit1820may sequentially store the bank column address signal CA<0:4> each time the first bank access control signal CASP<0> is enabled in a state in which the arithmetic operation signal TADD_OP is enabled. The address control circuit1820may sequentially output the sequentially stored bank column address signal CA<0:4> as the delayed column address signal CA_TA<0:4> each time the delayed bank access control signal CASP_TA<1> is enabled. By sequentially outputting the stored bank column address signal CA<0:4> as the delayed column address signal CA_TA<0:4> each time the delayed bank access control signal CASP_TA<1> is enabled, the address control circuit1820may synchronize a point of time at which the delayed bank access control signal CASP_TA<1> is output and a point of time at which the delayed column address signal CA_TA<0:4> is output. When the delayed bank access control signal CASP_TA<1> is first enabled, the address control circuit1820may provide the bank column address signal CA<0:4> received when the first bank access control signal CASP<0> is first enabled, as the delayed column address signal CA_TA<0:4>. When the delayed bank access control signal CASP_TA<1> is second enabled, the address control circuit1820may provide the bank column address signal CA<0:4> received when the first bank access control signal CASP<0> is second enabled, as the delayed column address signal CA_TA<0:4>. Accordingly, after the predetermined time elapses, a column of the second memory bank BK1with the same order as a column accessed in the first memory bank BK0may be accessed. The address control circuit1820may include a pipe circuit1821. The pipe circuit1821may generate the delayed column address signal CA_TA<0:4> based on the arithmetic operation signal TADD_OP, the first bank access control signal CASP<0>, the delayed bank access control signal CASP_TA<1> and the bank column address signal CA<0:4>. The pipe circuit1821may generate a plurality of input strobe signals based on the arithmetic operation signal TADD_OP and the first bank access control signal CASP<0>. The pipe circuit1821may generate a plurality of output strobe signals based on the delayed bank access control signal CASP_TA<1>. 
The pipe circuit1821may sequentially store the bank column address signal CA<0:4>, input to the pipe circuit1821, based on the plurality of input strobe signals. The pipe circuit1821may sequentially output the bank column address signal CA<0:4> sequentially stored in the pipe circuit1821, as the delayed column address signal CA_TA<0:4>, based on the plurality of output strobe signals. The pipe circuit1821may have substantially the same configuration as the pipe circuit1021C illustrated inFIG.43except a part of input signals. FIG.53is a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0ofFIG.49. Referring toFIG.53, the Y-decoder/I/O circuit YDEC/IO of the first memory bank BK0may include a delay circuit1910. The delay circuit1910may receive the first bank access control signal CASP<0>, and may generate the first data enable signal DEN<0> by delaying the first bank access control signal CASP<0>. A delay time of the delay circuit1910may correspond to an amount of time between the first bank access control signal CASP<0> being generated and data being output from the first memory bank BK0. FIG.54is a diagram illustrating a part among components of the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1ofFIG.49. Referring toFIG.54, the Y-decoder/I/O circuit YDEC/IO of the second memory bank BK1may include a first selection circuit2010and a second selection circuit2020. The first selection circuit2010may receive the arithmetic operation signal TADD_OP, the bank column address signal CA<0:4>, and the delayed column address signal CA_TA<0:4>, and may output an internal column address signal ICA<0:4>. The first selection circuit2010may output one of the bank column address signal CA<0:4> and the delayed column address signal CA_TA<0:4> as the internal column address signal ICA<0:4> based on the arithmetic operation signal TADD_OP. When the arithmetic operation signal TADD_OP is disabled to a logic low level, the first selection circuit2010may output the bank column address signal CA<0:4> as the internal column address signal ICA<0:4>. When the arithmetic operation signal TADD_OP is enabled to a logic high level, the first selection circuit2010may output the delayed column address signal CA_TA<0:4> as the internal column address signal ICA<0:4>. The second memory bank BK1may be accessed based on the internal column address signal ICA<0:4>. The second selection circuit2020may receive the arithmetic operation signal TADD_OP, the second bank access control signal CASP<1>, and the delayed bank access control signal CASP_TA<1>, and may output an internal bank access control signal ICASP<1>. The second selection circuit2020may output one of the second bank access control signal CASP<1> and the delayed bank access control signal CASP_TA<1> as the internal bank access control signal ICASP<1> based on the arithmetic operation signal TADD_OP. When the arithmetic operation signal TADD_OP is disabled to a logic low level, the second selection circuit2020may output the second bank access control signal CASP<1> as the internal bank access control signal ICASP<1>. When the arithmetic operation signal TADD_OP is enabled to a logic high level, the second selection circuit2020may output the delayed bank access control signal CASP_TA<1> as the internal bank access control signal ICASP<1>. The second memory bank BK1may be accessed based on the internal bank access control signal ICASP<1>. 
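Taken together, the write-control path of FIGS.52to54may be modeled behaviorally as follows: the write start signal is the AND of the first data enable signal and the arithmetic operation signal, the delayed bank access control signal fires after the arithmetic latency, and the pipe circuit replays each captured column address as the delayed column address so that the same-order column of the second memory bank is written. The class and variable names in this Python sketch are hypothetical, and the latency is modeled as an integer number of time steps for illustration only.

```python
# Illustrative behavioral sketch of the write-control path, not the circuit:
# a captured column address is replayed, after the arithmetic latency, as
# the delayed column address together with the delayed bank access signal.

from collections import deque

class WriteControlModel:
    def __init__(self, arithmetic_latency):
        self.latency = arithmetic_latency
        self.addr_fifo = deque()      # pipe circuit: captured CA<0:4> values
        self.pending = deque()        # delayed write start events (fire times)

    def column_access(self, time, ca, tadd_op, den):
        """Called when CASP<0> accesses BK0 with column address `ca`."""
        if tadd_op and den:           # write start signal WTS
            self.addr_fifo.append(ca)
            self.pending.append(time + self.latency)

    def tick(self, time):
        """Return (CASP_TA<1>, CA_TA<0:4>) for this time step, if any."""
        if self.pending and self.pending[0] == time:
            self.pending.popleft()
            return True, self.addr_fifo.popleft()
        return False, None

model = WriteControlModel(arithmetic_latency=3)
for t, ca in enumerate([0b00000, 0b00001, 0b00010, 0b00011]):
    model.column_access(t, ca, tadd_op=True, den=True)
for t in range(10):
    casp_ta, ca_ta = model.tick(t)
    if casp_ta:
        print(f"t={t}: write BK1 column group {ca_ta:05b}")
```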
FIG.55is a timing diagram illustrating the operation method of the PIM device1500in accordance with the embodiment of the present disclosure. The operation method of the PIM device1500will be described below with reference toFIGS.49to55. The external device1501or the control circuit inside the PIM device1500may generate a merge matrix by merging first and second matrices so that the PIM device1500may perform an element-wise arithmetic operation. In order to perform the element-wise arithmetic operation, the PIM device1500may store elements of the merge matrix in the first memory bank BK0. When all the elements of the merge matrix are stored in the first memory bank BK0, the PIM device1500may generate the active signal ACT and the row address signal ADDR_R based on the external command signal E_CMD and the input address signal I_ADDR to perform an active operation. The external command signal E_CMD and the input address signal I_ADDR may be input to the PIM device1500in synchronization with a clock signal CLK. Rows with the same order among the plurality of rows of the first and second memory banks BK0and BK1may be enabled based on the active signal ACT and the row address signal ADDR_R. When a time that corresponds to tRCD elapses after the first and second memory banks BK0and BK1are activated and the external command signal E_CMD that instructs the active operation is received, a first external command signal E_CMD and a first input address signal I_ADDR for performing the element-wise arithmetic operation may be input to the PIM device1500. The tRCD may be defined by a time interval during which a column command signal is input after a row command signal is input. The external command signal E_CMD for performing the active operation may be included in the row command signal, and the external command signal E_CMD for performing the element-wise arithmetic operation may be included in the column command signal. The command decoder750may generate a first calculation signal TEWADD based on the first external command signal E_CMD, and the address latch760may output the first input address signal I_ADDR as a first column address signal ADDR_C<0:n>. The column control circuit1570may enable the arithmetic operation signal TADD_OP based on the calculation signal TEWADD, may enable the first bank access control signal CASP<0>, and may provide at least a part of the first column address signal ADDR_C<0:n> as a first bank column address signal CA<0:4> (CA0). A column that is coupled to an enabled row of the first memory bank BK0may be accessed based on the first bank access control signal CASP<0> and the first bank column address signal CA0. For example, the bank column address signal CA<0:4> may include 5 bits, and 32 columns may be accessed based on the bank column address signal CA<0:4>. First to 32nd columns may be accessed based on the first bank column address signal CA0. Accordingly, data AB0and AB1that correspond to a first element and a second element of the merge matrix (that is, data that corresponds to first elements of the first and second matrices) may be read from the first memory bank BK0. The first memory bank BK0may enable the first data enable signal DEN<0> while outputting the data AB0and AB1that correspond to the first and second elements of the merge matrix. The data AB0and AB1that correspond to the first and second elements of the merge matrix may be provided to the MAC operator MAC through the first bank I/O line791. 
When a time corresponding to tCCD elapses, a second external command signal E_CMD and a second input address signal I_ADDR for performing the element-wise arithmetic operation may be received in the PIM device1500. The tCCD may be defined by a time interval during which another column command signal is input after one column command signal is input. The command decoder750may generate a second calculation signal TEWADD based on the second external command signal E_CMD, and the address latch760may output the second input address signal I_ADDR as a second column address signal ADDR_C<0:n>. The column control circuit1570may second enable the first bank access control signal CASP<0> based on the second calculation signal TEWADD, and may provide at least a part of the second column address signal ADDR_C<0:n> as a second bank column address signal CA<0:4> (CA1). Columns that are coupled to the enabled row of the first memory bank BK0may be accessed based on the first bank access control signal CASP<0> and the second bank column address signal CA1. For example, 33rd to 64th columns may be accessed based on the second bank column address signal CA1. Accordingly, 32-bit data AB2and AB3that correspond to third and fourth elements of the merge matrix may be read from the first memory bank BK0. The first memory bank BK0may enable the first data enable signal DEN<0> while outputting the data AB2and AB3that correspond to the third and fourth elements of the merge matrix. The data AB2and AB3that correspond to the third and fourth elements of the merge matrix may be provided to the MAC operator MAC through the first bank I/O line791. When a time corresponding to tCCD elapses, a third external command signal E_CMD and a third input address signal I_ADDR for performing the element-wise arithmetic operation may be received in the PIM device1500. The command decoder750may generate a third calculation signal TEWADD based on the third external command signal E_CMD, and the address latch760may output the third input address signal I_ADDR as a third column address signal ADDR_C<0:n>. The column control circuit1570may third enable the first bank access control signal CASP<0> based on the third calculation signal TEWADD and may provide at least a part of the third column address signal ADDR_C<0:n> as a third bank column address signal CA<0:4> (CA2). Columns that are coupled to the enabled row of the first memory bank BK0may be accessed based on the first bank access control signal CASP<0> and the third bank column address signal CA2. For example, 65th to 96th columns may be accessed based on the third bank column address signal CA2. Accordingly, 32-bit data AB4and AB5that correspond to fifth and sixth elements of the merge matrix may be read from the first memory bank BK0. The first memory bank BK0may enable the first data enable signal DEN<0> while outputting the data AB4and AB5that correspond to the fifth and sixth elements of the merge matrix. The data AB4and AB5that correspond to the fifth and sixth elements of the merge matrix may be provided to the MAC operator MAC through the first bank I/O line791. When a time that corresponds to tCCD elapses, a fourth external command signal E_CMD and a fourth input address signal I_ADDR for performing the element-wise arithmetic operation may be received in the PIM device1500. 
The command decoder750may generate a fourth calculation signal TEWADD based on the fourth external command signal E_CMD, and the address latch760may output the fourth input address signal I_ADDR as a fourth column address signal ADDR_C<0:n>. The column control circuit1570may fourth enable the first bank access control signal CASP<0> based on the fourth calculation signal TEWADD and may provide at least a part of the fourth column address signal ADDR_C<0:n> as a fourth bank column address signal CA<0:4> (CA3). Columns that are coupled to the enabled row of the first memory bank BK0may be accessed based on the first bank access control signal CASP<0> and the fourth bank column address signal CA3. For example, 97th to 128th columns may be accessed based on the fourth bank column address signal CA3. Accordingly, 32-bit data AB6and AB7that correspond to seventh and eighth elements of the merge matrix may be read from the first memory bank BK0. The first memory bank BK0may enable the first data enable signal DEN<0> while outputting the data AB6and AB7that correspond to the seventh and eighth elements of the merge matrix. The data AB6and AB7that correspond to the seventh and eighth elements of the merge matrix may be provided to the MAC operator MAC through the first bank I/O line791. The MAC operator MAC may receive data, read from the first memory bank BK0, through the first bank I/O line791, and may perform a calculation on the received data. The MAC operator MAC may receive the 32-bit data AB0and AB1, corresponding to the first and second elements of the merge matrix, from the first memory bank BK0. The global buffer1595may provide data with the value of '1' to the MAC operator MAC based on the arithmetic operation signal TADD_OP. The MAC operator MAC may generate a first arithmetic data Y0by performing a calculation on the 16-bit data AB0that corresponds to the first element of the merge matrix and the 16-bit data AB1that corresponds to the second element of the merge matrix, and may output the first arithmetic data Y0to the second memory bank BK1through the second bank I/O line792. When the predetermined time elapses after the first data enable signal DEN<0> is first received, the MAC operator MAC may enable the delayed bank access control signal CASP_TA<1>. The MAC operator MAC may sequentially store the first to fourth bank column address signals CA0, CA1, CA2, and CA3based on the first bank access control signal CASP<0>, and may output the first bank column address signal CA0as a first delayed column address signal CA_TA<0:4> (CA_TA0) when the first delayed bank access control signal CASP_TA<1> is enabled. The second memory bank BK1may receive the first delayed bank access control signal CASP_TA<1> and the first delayed column address signal CA_TA0. Columns that are coupled to an enabled row of the second memory bank BK1may be accessed based on the first delayed bank access control signal CASP_TA<1> and the first delayed column address signal CA_TA0. First to 32nd columns may be accessed based on the first delayed column address signal CA_TA0, and the first arithmetic data Y0as a first element of the third matrix may be written into the second memory bank BK1. The 16-bit arithmetic data Y0may be written into the first to sixteenth columns that are coupled to the enabled row of the second memory bank BK1, and '0' may be stored in the seventeenth to 32nd columns. 
'0' may be stored in the seventeenth to 32nd columns for zero padding. The MAC operator MAC may receive the 32-bit data AB2and AB3, corresponding to the third and fourth elements of the merge matrix, from the first memory bank BK0. The MAC operator MAC may generate second arithmetic data Y1by performing a calculation on the 16-bit data AB2corresponding to the third element of the merge matrix and the 16-bit data AB3corresponding to the fourth element of the merge matrix, and may output the second arithmetic data Y1to the second memory bank BK1through the second bank I/O line792. When the predetermined time elapses after the first data enable signal DEN<0> is second received, the MAC operator MAC may second enable the delayed bank access control signal CASP_TA<1>. The MAC operator MAC may output the second bank column address signal CA1as a second delayed column address signal CA_TA<0:4> (CA_TA1) when the second delayed bank access control signal CASP_TA<1> is enabled. The second memory bank BK1may receive the second delayed bank access control signal CASP_TA<1> and the second delayed column address signal CA_TA1. Columns that are coupled to an enabled row of the second memory bank BK1may be accessed based on the second delayed bank access control signal CASP_TA<1> and the second delayed column address signal CA_TA1. 33rd to 64th columns may be accessed based on the second delayed column address signal CA_TA1, and the second arithmetic data Y1as a second element of the third matrix may be written into the second memory bank BK1. The 16-bit arithmetic data Y1may be written into the 33rd to 48th columns that are coupled to the enabled row of the second memory bank BK1, and '0' may be stored in the 49th to 64th columns. When data that corresponds to all the elements of the merge matrix are read from the first memory bank BK0and all arithmetic data generated by the MAC operator MAC are written into the second memory bank BK1, the element-wise arithmetic operation of the PIM device1500may be ended. A limited number of possible embodiments for the present teachings have been presented above for illustrative purposes. Those of ordinary skill in the art will appreciate that various modifications, additions, and substitutions are possible. While this patent document contains many specifics, these should not be construed as limitations on the scope of the present teachings or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
11861370
DETAILED DESCRIPTION A vehicle may include systems that employ memory devices, such as a NOT-AND (NAND) device, that aid in one or more services performed by the systems. However, in some examples, a delay between the vehicle powering on the systems (e.g., due to the vehicle being started) and other systems of vehicle coming online (e.g., safety systems, which may include a back-up camera or parking camera) may occur due at least in part to latency from the NAND device during a boot-up procedure. Accordingly, reducing the duration of the boot-up procedure (e.g., by reducing latency associated with the NAND device) may reduce latency from powering the system to the other systems being online. Techniques are described herein that reduce the duration. For instance, a boot-up procedure may be characterized by multiple phases (e.g., a universal flash storage (UFS) boot recording phase, a kernel loading phase, and a kernel start phase), where each phase of the boot-up procedure may be preceded by a hardware reset of one or more components of the system. The system may record the commands associated with each phase of the boot-up procedure as well as one or more logical block addresses (LBAs) associated with information retrieved during the boot-up procedure. In subsequent boot-up procedures, the system may use the recorded commands and the recorded LBAs to transfer information from a non-volatile memory device (e.g., a NAND device) to a volatile memory device (e.g., a cache of the memory system) before the associated commands are received by the memory system. The requested information may be retrieved from the volatile memory device more quickly than from the non-volatile memory device. Accordingly, upon receiving the commands, the memory system may more quickly provide the associated information to the host system. Thus, the duration of boot-up may be reduced. Features of the disclosure are initially described in the context of systems as described with reference toFIGS.1-2. Features of the disclosure are described in the context of process flows with reference toFIGS.3-4. These and other features of the disclosure are further illustrated by and described in the context of an apparatus diagram and a flowchart that relate to automotive boot optimization with reference toFIGS.5-6. FIG.1illustrates an example of a system100that supports automotive boot optimization in accordance with examples as disclosed herein. The system100includes a host system105coupled with a memory system110. A memory system110may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system110may be or include a Universal Flash Storage (UFS) device, an embedded Multi-Media Controller (eMMC) device, a flash device, a universal serial bus (USB) flash device, a secure digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities. The system100may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device. 
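The boot-recording and prefetch approach summarized above may be sketched behaviorally as follows: on an initial boot the memory system records the LBAs read by the host during each boot phase, and on subsequent boots it stages those LBAs from non-volatile memory into its volatile cache before the corresponding read commands arrive. The class and method names in this Python sketch are hypothetical and do not correspond to an actual UFS or NAND interface.

```python
# Illustrative behavioral model of boot-phase LBA recording and prefetching;
# names and values are hypothetical placeholders.

class BootPrefetchModel:
    def __init__(self, nand):
        self.nand = nand              # dict: LBA -> data (non-volatile)
        self.cache = {}               # volatile memory (e.g., controller cache)
        self.recorded = {}            # phase name -> list of LBAs
        self.recording_phase = None

    def start_phase(self, phase, first_boot):
        self.recording_phase = phase if first_boot else None
        if not first_boot:
            # Subsequent boot: prefetch the LBAs recorded for this phase.
            for lba in self.recorded.get(phase, []):
                self.cache[lba] = self.nand[lba]

    def read(self, phase, lba):
        if self.recording_phase == phase:
            self.recorded.setdefault(phase, []).append(lba)
        # Cache hits avoid the slower non-volatile access on later boots.
        return self.cache.get(lba, self.nand[lba])

nand = {lba: f"block-{lba}" for lba in range(8)}
memsys = BootPrefetchModel(nand)

# First boot: record the LBAs touched during the kernel loading phase.
memsys.start_phase("kernel_loading", first_boot=True)
for lba in (2, 3, 5):
    memsys.read("kernel_loading", lba)

# Later boot: the same LBAs are staged in the cache before being read.
memsys.start_phase("kernel_loading", first_boot=False)
assert all(lba in memsys.cache for lba in (2, 3, 5))
```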
The system100may include a host system105, which may be coupled with the memory system110. In some examples, this coupling may include an interface with a host system controller106, which may be an example of a controller or control component configured to cause the host system105to perform various operations in accordance with examples as described herein. The host system105may include one or more devices, and in some cases may include a processor chipset and a software stack executed by the processor chipset. For example, the host system105may include an application configured for communicating with the memory system110or a device therein. The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system105), a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., peripheral component interconnect express (PCIe) controller, serial advanced technology attachment (SATA) controller). The host system105may use the memory system110, for example, to write data to the memory system110and read data from the memory system110. Although one memory system110is shown inFIG.1, the host system105may be coupled with any quantity of memory systems110. The host system105may be coupled with the memory system110via at least one physical host interface. The host system105and the memory system110may in some cases be configured to communicate via a physical host interface using an associated protocol (e.g., to exchange or otherwise communicate control, address, data, and other signals between the memory system110and the host system105). Examples of a physical host interface may include, but are not limited to, a SATA interface, a UFS interface, an eMMC interface, a PCIe interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), and a Low Power Double Data Rate (LPDDR) interface. In some examples, one or more such interfaces may be included in or otherwise supported between a host system controller106of the host system105and a memory system controller115of the memory system110. In some examples, the host system105may be coupled with the memory system110(e.g., the host system controller106may be coupled with the memory system controller115) via a respective physical host interface for each memory device130included in the memory system110, or via a respective physical host interface for each type of memory device130included in the memory system110. The memory system110may include a memory system controller115and one or more memory devices130. A memory device130may include one or more memory arrays of any type of memory cells (e.g., non-volatile memory cells, volatile memory cells, or any combination thereof). Although two memory devices130-aand130-bare shown in the example ofFIG.1, the memory system110may include any quantity of memory devices130. Further, if the memory system110includes more than one memory device130, different memory devices130within the memory system110may include the same or different types of memory cells. 
The memory system controller115may be coupled with and communicate with the host system105(e.g., via the physical host interface) and may be an example of a controller or control component configured to cause the memory system110to perform various operations in accordance with examples as described herein. The memory system controller115may also be coupled with and communicate with memory devices130to perform operations such as reading data, writing data, erasing data, or refreshing data at a memory device130—among other such operations—which may generically be referred to as access operations. In some cases, the memory system controller115may receive commands from the host system105and communicate with one or more memory devices130to execute such commands (e.g., at memory arrays within the one or more memory devices130). For example, the memory system controller115may receive commands or operations from the host system105and may convert the commands or operations into instructions or appropriate commands to achieve the desired access of the memory devices130. In some cases, the memory system controller115may exchange data with the host system105and with one or more memory devices130(e.g., in response to or otherwise in association with commands from the host system105). For example, the memory system controller115may convert responses (e.g., data packets or other signals) associated with the memory devices130into corresponding signals for the host system105. The memory system controller115may be configured for other operations associated with the memory devices130. For example, the memory system controller115may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from the host system105and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices130. The memory system controller115may include hardware such as one or more integrated circuits or discrete components, a buffer memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to the memory system controller115. The memory system controller115may be or include a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or any other suitable processor or processing circuitry. The memory system controller115may also include a local memory120. In some cases, the local memory120may include read-only memory (ROM) or other memory that may store operating code (e.g., executable instructions) executable by the memory system controller115to perform functions ascribed herein to the memory system controller115. In some cases, the local memory120may additionally or alternatively include static random access memory (SRAM) or other memory that may be used by the memory system controller115for internal storage or calculations, for example, related to the functions ascribed herein to the memory system controller115. Additionally or alternatively, the local memory120may serve as a cache for the memory system controller115. 
For example, data may be stored in the local memory120if read from or written to a memory device130, and the data may be available within the local memory120for subsequent retrieval for or manipulation (e.g., updating) by the host system105(e.g., with reduced latency relative to a memory device130) in accordance with a cache policy. Although the example of the memory system110inFIG.1has been illustrated as including the memory system controller115, in some cases, a memory system110may not include a memory system controller115. For example, the memory system110may additionally or alternatively rely upon an external controller (e.g., implemented by the host system105) or one or more local controllers135, which may be internal to memory devices130, respectively, to perform the functions ascribed herein to the memory system controller115. In general, one or more functions ascribed herein to the memory system controller115may in some cases instead be performed by the host system105, a local controller135, or any combination thereof. In some cases, a memory device130that is managed at least in part by a memory system controller115may be referred to as a managed memory device. An example of a managed memory device is a managed NAND (MNAND) device. A memory device130may include one or more arrays of non-volatile memory cells. For example, a memory device130may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (RAM) (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), electrically erasable programmable ROM (EEPROM), or any combination thereof. Additionally or alternatively, a memory device130may include one or more arrays of volatile memory cells. For example, a memory device130may include RAM memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells. In some examples, a memory device130may include (e.g., on a same die or within a same package) a local controller135, which may execute operations on one or more memory cells of the respective memory device130. A local controller135may operate in conjunction with a memory system controller115or may perform one or more functions ascribed herein to the memory system controller115. For example, as illustrated inFIG.1, a memory device130-amay include a local controller135-aand a memory device130-bmay include a local controller135-b. In some cases, a memory device130may be or include a NAND device (e.g., NAND flash device). A memory device130may be or include a memory die160. For example, in some cases, a memory device130may be a package that includes one or more dies160. A die160may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die160may include one or more planes165, and each plane165may include a respective set of blocks170, where each block170may include a respective set of pages175, and each page175may include a set of memory cells. In some cases, a NAND memory device130may include memory cells configured to each store one bit of information, which may be referred to as single level cells (SLCs). 
Additionally or alternatively, a NAND memory device130may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells. Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry. In some cases, planes165may refer to groups of blocks170, and in some cases, concurrent operations may take place within different planes165. For example, concurrent operations may be performed on memory cells within different blocks170so long as the different blocks170are in different planes165. In some cases, an individual block170may be referred to as a physical block, and a virtual block180may refer to a group of blocks170within which concurrent operations may occur. For example, concurrent operations may be performed on blocks170-a,170-b,170-c, and170-dthat are within planes165-a,165-b,165c, and165-d, respectively, and blocks170-a,170-b,170-c, and170-dmay be collectively referred to as a virtual block180. In some cases, a virtual block may include blocks170from different memory devices130(e.g., including blocks in one or more planes of memory device130-aand memory device130-b). In some cases, the blocks170within a virtual block may have the same block address within their respective planes165(e.g., block170-amay be “block0” of plane165-a, block170-bmay be “block0” of plane165-b, and so on). In some cases, performing concurrent operations in different planes165may be subject to one or more restrictions, such as concurrent operations being performed on memory cells within different pages175that have the same page address within their respective planes165(e.g., related to command decoding, page address decoding circuitry, or other circuitry being shared across planes165). In some cases, a block170may include memory cells organized into rows (pages175) and columns (e.g., strings, not shown). For example, memory cells in a same page175may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line). For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased at a second level of granularity (e.g., at the block level of granularity). That is, a page175may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programed or read concurrently as part of a single program or read operation), and a block170may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Further, in some cases, NAND memory cells may be erased before they can be re-written with new data. Thus, for example, a used page175may in some cases not be updated until the entire block170that includes the page175has been erased. 
In some cases, to update some data within a block170while retaining other data within the block170, the memory device130may copy the data to be retained to a new block170and write the updated data to one or more remaining pages of the new block170. The memory device130(e.g., the local controller135) or the memory system controller115may mark or otherwise designate the data that remains in the old block170as invalid or obsolete and may update a logical-to-physical (L2P) mapping table to associate the logical address (e.g., LBA) for the data with the new, valid block170rather than the old, invalid block170. In some cases, such copying and remapping may be performed instead of erasing and rewriting the entire old block170due to latency or wearout considerations, for example. In some cases, one or more copies of an L2P mapping table may be stored within the memory cells of the memory device130(e.g., within one or more blocks170or planes165) for use (e.g., reference and updating) by the local controller135or memory system controller115. In some cases, L2P mapping tables may be maintained and data may be marked as valid or invalid at the page level of granularity, and a page175may contain valid data, invalid data, or no data. Invalid data may be data that is outdated due to a more recent or updated version of the data being stored in a different page175of the memory device130. Invalid data may have been previously programmed to the invalid page175but may no longer be associated with a valid logical address, such as a logical address referenced by the host system105. Valid data may be the most recent version of such data being stored on the memory device130. A page175that includes no data may be a page175that has never been written to or that has been erased. In some cases, a memory system controller115or a local controller135may perform operations (e.g., as part of one or more media management algorithms) for a memory device130, such as wear leveling, background refresh, garbage collection, scrub, block scans, health monitoring, or others, or any combination thereof. For example, within a memory device130, a block170may have some pages175containing valid data and some pages175containing invalid data. To avoid waiting for all of the pages175in the block170to have invalid data in order to erase and reuse the block170, an algorithm referred to as “garbage collection” may be invoked to allow the block170to be erased and released as a free block for subsequent write operations. Garbage collection may refer to a set of media management operations that include, for example, selecting a block170that contains valid and invalid data, selecting pages175in the block that contain valid data, copying the valid data from the selected pages175to new locations (e.g., free pages175in another block170), marking the data in the previously selected pages175as invalid, and erasing the selected block170. As a result, the quantity of blocks170that have been erased may be increased such that more blocks170are available to store subsequent data (e.g., data subsequently received from the host system105). The system100may include any quantity of non-transitory computer readable media that support automotive boot optimization. 
For example, the host system105, the memory system controller115, or a memory device130(e.g., a local controller135) may include or otherwise may access one or more non-transitory computer readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to the host system105, memory system controller115, or memory device130. For example, such instructions, if executed by the host system105(e.g., by the host system controller106), by the memory system controller115, or by a memory device130(e.g., by a local controller135), may cause the host system105, memory system controller115, or memory device130to perform one or more associated functions as described herein. In some examples, a vehicle may include the system100. In some such examples, a delay between the vehicle powering on the system100(e.g., due to the vehicle being started) and other systems of vehicle coming online (e.g., safety systems, which may include a back-up camera or parking camera) may occur due at least in part to latency from the memory system110and/or memory devices130-aand130-bduring a boot-up procedure. Accordingly, reducing the duration of the boot-up procedure (e.g., by reducing latency associated with the memory system110and/or memory devices130-aand130-b) may reduce latency from powering the system to the other systems being online. Techniques are described herein that reduce the duration. For instance, a boot-up procedure may be characterized by multiple phases (e.g., a UFS boot recording phase, a kernel loading phase, and a kernel start phase), where each phase of the boot-up procedure may be preceded by a hardware reset of one or more components of the memory system110. The memory system110may record the commands associated with each phase of the boot-up procedure as well as one or more LBAs associated with information retrieved during the boot-up procedure. In subsequent boot-up procedures, the memory system110may use the recorded commands and the recorded LBAs to transfer information from a non-volatile memory device (e.g., a NAND device, such as memory device130-aor memory device130-b) to a volatile memory device (e.g., a cache of the memory system110) before the associated commands are received by the memory system110. The requested information may be retrieved from the volatile memory device more quickly than from the non-volatile memory device. Accordingly, upon receiving the commands, the memory system110may more quickly retrieve the associated information. Thus, the duration of boot-up may be reduced. FIG.2illustrates an example of a system200that supports automotive boot optimization in accordance with examples as disclosed herein. The system200may be an example of a system100as described with reference toFIG.1or aspects thereof. Vehicle205may include a memory system210and a host system225. In some examples, memory system210may be an example of a memory system110as described with reference toFIG.1and host system225may be an example of a host system105as described with reference toFIG.1. In some examples, the memory system210may include a non-volatile memory device215(e.g., a NAND) and a volatile memory device220(e.g., a cache). In some examples, one or both of non-volatile memory device215and volatile memory device220may be an example of a memory device130-aor130-bas described with reference toFIG.1. The vehicle205may be a device capable of performing locomotion, carrying, transporting, or any combination thereof. 
Examples of the vehicle205may include a motor vehicle (e.g., a car, a truck, a train, a motorcycle), an aircraft (e.g., a plane, a helicopter), a boat, or a human-powered transport (e.g., a bicycle). In some examples, the vehicle205may include systems that employ the memory system210and/or the host system225. For instance, the vehicle may include a parking camera or a back-up camera that stores information at or retrieves information from the memory system210. The memory system210may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system210may be or include a UFS device, an eMMC device, a flash device, a USB flash device, an SD card, an SSD, an HDD, a DIMM, a SO-DIMM, or an NVDIMM, among other possibilities. The non-volatile memory device215may be a device that includes one or more memory arrays of non-volatile memory cells (e.g., a NAND memory device) and the volatile memory device220may be a device that includes one or more memory arrays of volatile memory cells (e.g., a cache). In some examples, the host system225and/or the memory system210may operate according to an operating system (OS). One such OS may include QNX OS, which may be a microkernel-based OS executed in the form of multiple tasks referred to as resource managers. One example of a resource manager for the vehicle205may be a parking camera or a back-up camera. Generally, the methods described herein may enable boot-up time to be reduced by recording commands and LBAs (and/or transfer lengths) used in a prior boot-up procedure to retrieve information from the non-volatile memory device215and using the recorded LBAs for subsequent boot-up procedures in order to store the information in the volatile memory device220before an associated command is received. For instance, a boot-up procedure may be characterized by multiple phases (e.g., a UFS boot recording phase, a kernel loading phase, and a kernel start phase), where each phase of the boot-up procedure may be preceded by a hardware reset of one or more components of the memory system210. The memory system210may record the commands associated with each phase of the boot-up procedure as well as at least one LBA associated with information retrieved during the boot-up procedure. In subsequent boot-up procedures, the memory system210may use the recorded commands and the recorded LBAs to transfer information from a non-volatile memory device (e.g., a NAND device, such as the non-volatile memory device215) to a volatile memory device (e.g., a cache, such as the volatile memory device220) before the associated commands are received by the memory system210. The requested information may be retrieved from the volatile memory device220more quickly than from the non-volatile memory device215. Accordingly, upon receiving the commands, the memory system210may more quickly retrieve the associated information. Thus, the duration of boot-up may be reduced. Additional details may be described herein, for instance, with reference toFIG.3. Additionally, the methods described herein may enable the memory system210to detect if changes occur to the boot-up procedure after the initial recording (e.g., due to a system update). 
For instance, if the memory system210detects that more than a threshold number of LBAs are associated with information not stored at the volatile memory device220(e.g., due to the LBAs not being recorded during a prior boot-up procedure), the memory system210may re-record the commands and LBAs of the phase. Additional details may be described herein, for instance, with reference toFIG.4. For example, if the memory system210experiences a quantity of cache ‘misses’ that satisfies a threshold in response to replaying the recorded boot-up procedure, the memory system210may re-record at least a portion of the boot-up procedure. The described methods may be implemented in an embedded environment (e.g., a UFS or eMMC). Once a first boot-up is completed, the recorded trace (e.g., recorded LBAs and/or commands) may be used to reduce boot time. In order to decrease the latency of each command and improve boot time, a memory system210may pre-load LBAs into the volatile memory device220and may group NAND pages together during garbage collection execution (e.g., as the memory system210may identify the LBAs to be in sequence during the first boot-up). FIG.3illustrates an example of a process flow300that supports automotive boot optimization in accordance with examples as disclosed herein. In some examples, process flow300may be implemented by one or more aspects of systems100and/or200. For instance, process flow300may be implemented by a memory system110as described with reference toFIG.1and/or a memory system210as described with reference toFIG.2. In some examples, process flow300may correspond to a self-learning phase. Aspects of the process flow300may be implemented by a controller, among other components. Additionally or alternatively, aspects of the process flow300may be implemented as instructions stored in memory (e.g., firmware stored in a memory coupled with a controller). For example, the instructions, in response to being executed by a controller (e.g., the memory system controller115), may cause the controller to perform the operations of the process flow300. At305, power on for a host system may occur. For instance, a host system and an associated memory system may power on. Powering on may include coupling one or more components of the memory system with one or more sources. Between305and310, the memory system may undergo a hardware reset, in which the one or more components are isolated from the one or more sources and, subsequently, recoupled with the one or more sources. At310, a UFS boot phase of a boot-up procedure may be performed. For instance, the memory system may perform a UFS boot phase (e.g., a first phase). The memory system may determine that the UFS boot phase is occurring based on one or more signatures (e.g., the one or more signatures may indicate that the UFS boot phase is occurring). For instance, a voltage may transition from a first value to a second value (e.g., VCC and/or VCCQ may transition from 0 to a high state). Additionally or alternatively, a speed mode may be set to a particular mode (e.g., an HS-G1B mode). In response to identifying that the UFS boot phase is occurring, the memory system may record one or more commands received during the UFS boot phase and/or one or more LBAs from which information is retrieved during the UFS boot phase. Between310and315, the memory system may undergo a hardware reset. At315, a kernel loading phase of a boot-up procedure may be performed. 
For instance, the memory system may perform a kernel loading phase (e.g., a second phase). The memory system may determine that the kernel loading phase is occurring based on one or more signatures (e.g., the one or more signatures may indicate that the kernel loading procedure is occurring). For instance, the memory system may determine that the kernel loading phase is occurring based on the hardware reset occurring between310and315(e.g., the hardware reset may indicate that the kernel loading procedure is occurring). Additionally or alternatively, the memory system may poll a particular flag (e.g., an fDeviceInit flag), may set the speed mode to a particular mode (e.g., an HS-G3B x1 mode), or any combination thereof. During the kernel loading phase, the memory system may load a boot image, which may be a range of LBAs retrieved during a read operation. Additionally or alternatively, the memory system may read but may not write during the kernel loading phase. In response to identifying that the kernel loading phase is occurring, the memory system may record one or more commands received during the kernel loading phase and/or one or more LBAs from which information is retrieved during the kernel loading phase. Between315and320, the memory system may undergo a hardware reset. At320, a kernel start phase of a boot-up procedure may be performed. For instance, the memory system may perform a kernel start phase (e.g., a third phase). The memory system may determine that the kernel start phase is occurring based on one or more signatures (e.g., the one or more signatures may indicate that the kernel start phase is occurring). For instance, the memory system may determine that the kernel start phase is occurring based on the hardware reset occurring between315and320(e.g., the hardware reset may indicate that the kernel start phase is occurring). Additionally or alternatively, the memory system may poll a particular flag (e.g., an fDeviceInit flag), may bring the UFS protocol into a particular mode (e.g., an HS-G3B x2 mode and/or an HS-G4A x2 mode), may issue both read and write commands, or any combination thereof. During the kernel start phase, the memory system may issue write commands and read commands issued at specific LBAs. In response to identifying that the kernel start phase is occurring, the memory system may record one or more commands received during the kernel start phase and/or one or more LBAs from which information is retrieved during the kernel start phase. At325, the memory system may end boot-up. FIG.4illustrates an example of a process flow400that supports automotive boot optimization in accordance with examples as disclosed herein. In some examples, process flow400may be implemented by one or more aspects of systems100and/or200. For instance, process flow400may be implemented by a memory system110as described with reference toFIG.1and/or a memory system210as described with reference toFIG.2. In some examples, process flow400may correspond to a replay phase. Aspects of the process flow400may be implemented by a controller, among other components. Additionally or alternatively, aspects of the process flow400may be implemented as instructions stored in memory (e.g., firmware stored in a memory coupled with a controller). For example, the instructions, in response to being executed by a controller (e.g., the memory system controller115), may cause the controller to perform the operations of the process flow400. 
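Before turning to the replay operations of process flow400, the following C sketch outlines how a controller might implement the self-learning phase of process flow300, recording the commands and LBAs observed in each phase. The signature-checking and command-reporting hooks (sig_ufs_boot_seen, next_host_command, phase_complete, record_command) are assumptions made for illustration only and do not correspond to a standard UFS or eMMC firmware interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed recording hook: stores one command's opcode, LBA, and transfer
 * length for the given phase (analogous to the trace sketch shown earlier). */
void record_command(int phase, uint8_t opcode, uint64_t lba, uint32_t length);

/* Assumed firmware hooks: each returns true once the signature of the
 * corresponding phase has been observed (e.g., a VCC/VCCQ transition and an
 * HS-G1B speed mode for the UFS boot phase, or a hardware reset followed by
 * fDeviceInit polling for the kernel loading and kernel start phases). */
bool sig_ufs_boot_seen(void);
bool sig_kernel_load_seen(void);
bool sig_kernel_start_seen(void);

/* Assumed hooks for observing host traffic during a phase. */
void next_host_command(uint8_t *opcode, uint64_t *lba, uint32_t *length);
bool phase_complete(void); /* e.g., a hardware reset or end-of-boot marker */

enum { PHASE_UFS_BOOT, PHASE_KERNEL_LOAD, PHASE_KERNEL_START };

/* Record every command observed during one phase of the first boot-up. */
static void learn_phase(int phase)
{
    while (!phase_complete()) {
        uint8_t opcode;
        uint64_t lba;
        uint32_t length;
        next_host_command(&opcode, &lba, &length);
        record_command(phase, opcode, lba, length);
    }
}

/* Self-learning pass over the three phases of the first boot-up. */
void learn_first_boot(void)
{
    if (sig_ufs_boot_seen())
        learn_phase(PHASE_UFS_BOOT);
    if (sig_kernel_load_seen())
        learn_phase(PHASE_KERNEL_LOAD);
    if (sig_kernel_start_seen())
        learn_phase(PHASE_KERNEL_START);
}
```

The sequential signature checks are a simplification; an actual controller would more likely re-evaluate the signatures after each hardware reset rather than in a single pass.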
In response to a boot-up procedure (or one or more phases of a boot-up procedure) being recorded, the recording may be used to reduce the latency of performing a boot-up procedure. For example, in subsequent boot-up procedures (after the boot-up procedure used to record), a memory system may use the recording to pre-load information stored at some LBAs into a cache of the memory system. Pre-loading information into a cache (from a NAND device) may increase the likelihood that a cache hit occurs. Retrieving information from a cache after receiving a command may take less time than retrieving information from a NAND device after receiving the command. Thus, pre-loading information into a cache based on a recording may reduce the total time it takes to perform a boot-up procedure. At405, power on may occur. For instance, the memory system may power on. Powering on may include coupling one or more components of the memory system with one or more sources. Between405and410, the memory system may undergo a hardware reset, in which the one or more components are isolated from the one or more sources and, subsequently, recoupled with the one or more sources. At410, the LBAs recorded for a prior boot-up may be pre-loaded. For instance, the memory system may pre-load the LBAs recorded as part of a prior boot-up (e.g., as described with reference toFIG.3). For instance, the memory system may transfer information associated with the LBAs from a non-volatile memory device (e.g., NAND) to a volatile memory device (e.g., a cache). At415, boot replay may begin. For instance, the memory system may begin boot replay. During boot replay, after the memory system receives a command (e.g., from a host system) to retrieve information from one of the LBAs, the memory system may retrieve the information corresponding to the LBA from the volatile memory device and may transmit the information (e.g., to the host system). At420, whether or not information associated with an LBA is retrievable from a volatile memory device of the memory system may be determined (e.g., did the memory system experience a ‘cache hit’ or a ‘cache miss’). For instance, the memory system may determine whether or not information associated with an LBA (e.g., indicated by a received command) is retrievable from a volatile memory device of the memory system (e.g., a cache hit). If the information is not retrievable (e.g., the information was not transferred from the non-volatile memory device to the volatile memory device at410), the memory system may proceed to425(e.g., a cache miss). However, if the information is retrievable (e.g., the information was transferred from the non-volatile memory device to the volatile memory device at410), the memory system may transmit the requested information to the host system and may proceed to440. At440, whether the memory system has reached the end of boot-up may be determined. For instance, the memory system may determine whether the memory system has reached the end of boot-up. If the memory system has not reached the end of boot-up (e.g., the memory system is still to retrieve information associated with more LBAs), the memory system may proceed back to420. However, if the memory system has reached the end of boot-up, the memory system may proceed to455. At425, the LBA whose information was not retrievable from the volatile memory device at420may be recorded. For instance, the memory system may record the LBA whose information was not retrievable from the volatile memory device at420and may proceed to430. 
At430, a miss counter may be incremented by 1 to indicate for how many LBAs information has failed to be retrieved from the volatile memory device. For instance, the memory system may increment a miss counter by 1 to indicate the quantity of LBAs for which the memory system has failed to retrieve corresponding information from the volatile memory device. At435, whether the miss counter is at or above a threshold quantity may be determined. For instance, the memory system may determine whether the miss counter satisfies the threshold quantity. If the miss counter is below the threshold quantity, the memory system may proceed back to420. However, if the miss counter is at or above the threshold quantity, the memory system may proceed to445. At445, boot replay may be exited. For instance, the memory system may exit boot replay and may proceed to450. At450, any remaining LBAs that are to be accessed during boot-up may be recorded. For instance, the memory system may record any remaining LBAs that the memory system is to access during boot-up and may proceed to455. At455, the memory system may end boot-up. FIG.5shows a block diagram500of a memory system520that supports automotive boot optimization in accordance with examples as disclosed herein. The memory system520may be an example of aspects of a memory system as described with reference toFIGS.1through4. The memory system520, or various components thereof, may be an example of means for performing various aspects of automotive boot optimization as described herein. For example, the memory system520may include a recording component525, a phase detecting component530, an information transferring component535, a command receiving component540, a reset component545, a threshold determination component550, an isolating component555, a recoupling component560, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The recording component525may be configured as or otherwise support a means for recording a first phase of a first boot-up procedure and a second phase of the first boot-up procedure. The phase detecting component530may be configured as or otherwise support a means for detecting that the first phase of a second boot-up procedure is occurring, where the second boot-up procedure occurs after the first boot-up procedure. The information transferring component535may be configured as or otherwise support a means for transferring, from a first logical block address of a non-volatile memory device to a volatile memory device, first information for the first phase of the second boot-up procedure based at least in part on the recording of the first phase of the first boot-up procedure and in response to detecting that the first phase of the second boot-up procedure is occurring. The command receiving component540may be configured as or otherwise support a means for receiving a first command to transmit the first information to a host system as part of the first phase of the second boot-up procedure after transferring the first information from the non-volatile memory device to the volatile memory device. In some examples, the phase detecting component530may be configured as or otherwise support a means for detecting that the second phase of the second boot-up procedure is occurring. 
In some examples, the information transferring component535may be configured as or otherwise support a means for transferring, from a second logical block address of the non-volatile memory device to the volatile memory device, second information for the second phase of the second boot-up procedure based at least in part on the recording of the second phase of the first boot-up procedure and in response to detecting that the second phase of the second boot-up procedure is occurring. In some examples, the command receiving component540may be configured as or otherwise support a means for receiving a second command to transmit the second information to the host system as part of the second phase of the second boot-up procedure after transferring the second information from the non-volatile memory device to the volatile memory device. In some examples, the recording component525may be configured as or otherwise support a means for recording a third phase of the first boot-up procedure. In some examples, the phase detecting component530may be configured as or otherwise support a means for detecting that the third phase of the second boot-up procedure is occurring. In some examples, the information transferring component535may be configured as or otherwise support a means for transferring, from a third logical block address of the non-volatile memory device to the volatile memory device, third information for the third phase of the second boot-up procedure based at least in part on the recording of the third phase of the first boot-up procedure and in response to detecting that the third phase of the second boot-up procedure is occurring. In some examples, the command receiving component540may be configured as or otherwise support a means for receiving a third command to transmit the third information to the host system as part of transferring the third information from the non-volatile memory device to the volatile memory device. In some examples, the first phase includes a universal flash storage boot phase, the second phase includes kernel loading boot phase, and the third phase includes a kernel start boot phase. In some examples, to support recording the first phase of the first boot-up procedure and the second phase of the first boot-up procedure, the recording component525may be configured as or otherwise support a means for recording the first phase of the first boot-up procedure. In some examples, to support recording the first phase of the first boot-up procedure and the second phase of the first boot-up procedure, the reset component545may be configured as or otherwise support a means for performing a reset on one or more components of the non-volatile memory device based at least in part on recording the first phase of the first boot-up procedure. In some examples, to support recording the first phase of the first boot-up procedure and the second phase of the first boot-up procedure, the recording component525may be configured as or otherwise support a means for recording the second phase of the first boot-up procedure based at least in part on performing the reset on the one or more components of the non-volatile memory device. In some examples, to support performing the reset on the one or more components, the isolating component555may be configured as or otherwise support a means for isolating the one or more components of the non-volatile memory device from a voltage source. 
In some examples, to support performing the reset on the one or more components, the recoupling component560may be configured as or otherwise support a means for recoupling the one or more components of the non-volatile memory device with the voltage source. In some examples, the reset component545may be configured as or otherwise support a means for performing a second reset on the one or more components of the non-volatile memory device before recording the first phase of the first boot-up procedure, where recording the first phase of the first boot-up procedure is based at least in part on performing the second reset. In some examples, the reset component545may be configured as or otherwise support a means for performing a second reset on the one or more components of the non-volatile memory device based at least in part on recording the second phase of the first boot-up procedure. In some examples, the recording component525may be configured as or otherwise support a means for recording a third phase of the first boot-up procedure based at least in part on performing the second reset. In some examples, the command receiving component540may be configured as or otherwise support a means for receiving one or more commands to transmit one or more instances of information to the host system. In some examples, the information transferring component535may be configured as or otherwise support a means for retrieving the one or more instances of information from the non-volatile memory device based at least in part on receiving the one or more commands, where the one or more instances of information are not transferred to the volatile memory device between detecting that the first phase of the second boot-up procedure is occurring and receiving the one or more commands. In some examples, the threshold determination component550may be configured as or otherwise support a means for determining that a quantity of the one or more instances of information is greater than a threshold quantity. In some examples, the recording component525may be configured as or otherwise support a means for rerecording the first phase or the second phase of the second boot-up procedure based at least in part on determining that the quantity of the one or more instances of information is greater than the threshold quantity. In some examples, the command receiving component540may be configured as or otherwise support a means for receiving one or more commands to transmit one or more instances of information to the host system. In some examples, the information transferring component535may be configured as or otherwise support a means for retrieving the one or more instances of information from the non-volatile memory device based at least in part on receiving the one or more commands, where the one or more instances of information are not transferred to the volatile memory device between detecting that the first phase of the second boot-up procedure is occurring and receiving the one or more commands. In some examples, the threshold determination component550may be configured as or otherwise support a means for determining that a quantity of the one or more instances of information is less than a threshold quantity. 
In some examples, the recording component525may be configured as or otherwise support a means for transferring third information stored at a third logical block address of the non-volatile memory device into the volatile memory device for the first phase or the second phase of the second boot-up procedure based at last in part on determining that the quantity of the one or more instances of information is less than the threshold quantity. In some examples, the non-volatile memory device includes a NAND memory device and the volatile memory device includes a DRAM memory device. FIG.6shows a flowchart illustrating a method600that supports automotive boot optimization in accordance with examples as disclosed herein. The operations of method600may be implemented by a memory system or its components as described herein. For example, the operations of method600may be performed by a memory system as described with reference toFIGS.1through5. In some examples, a memory system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory system may perform aspects of the described functions using special-purpose hardware. At605, the method may include recording a first phase of a first boot-up procedure and a second phase of the first boot-up procedure. The operations of605may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of605may be performed by a recording component525as described with reference toFIG.5. At610, the method may include detecting that the first phase of a second boot-up procedure is occurring, where the second boot-up procedure occurs after the first boot-up procedure. The operations of610may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of610may be performed by a phase detecting component530as described with reference toFIG.5. At615, the method may include transferring, from a first logical block address of the non-volatile memory device to the volatile memory device, first information for the first phase of the second boot-up procedure based at least in part on the recording of the first phase of the first boot-up procedure and in response to detecting that the first phase of the second boot-up procedure is occurring. The operations of615may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of615may be performed by an information transferring component535as described with reference toFIG.5. At620, the method may include receiving a first command to transmit the first information to a host system as part of the first phase of the second boot-up procedure after transferring the first information from the non-volatile memory device to the volatile memory device. The operations of620may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of620may be performed by a command receiving component540as described with reference toFIG.5. At625, the method may include detecting that the second phase of the second boot-up procedure is occurring. The operations of625may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of625may be performed by a phase detecting component530as described with reference toFIG.5. 
At630, the method may include transferring, from a second logical block address of the non-volatile memory device to the volatile memory device, second information for the second phase of the second boot-up procedure based at least in part on the recording of the second phase of the first boot-up procedure and in response to detecting that the second phase of the second boot-up procedure is occurring. The operations of630may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of630may be performed by an information transferring component535as described with reference toFIG.5. At635, the method may include receiving a second command to transmit the second information to the host system as part of the second phase of the second boot-up procedure after transferring the second information from the non-volatile memory device to the volatile memory device. The operations of635may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of635may be performed by a command receiving component540as described with reference toFIG.5. In some examples, an apparatus as described herein may perform a method or methods, such as the method600. The apparatus may include, features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for recording a first phase of a first boot-up procedure and a second phase of the first boot-up procedure, detecting that the first phase of a second boot-up procedure is occurring, where the second boot-up procedure occurs after the first boot-up procedure, transferring, from a first logical block address of a non-volatile memory device to a volatile memory device, first information for the first phase of the second boot-up procedure based at least in part on the recording of the first phase of the first boot-up procedure and in response to detecting that the first phase of the second boot-up procedure is occurring, receiving a first command to transmit the first information to a host system as part of the first phase of the second boot-up procedure after transferring the first information from the non-volatile memory device to the volatile memory device, detecting that the second phase of the second boot-up procedure is occurring, transferring, from a second logical block address of the non-volatile memory device to the volatile memory device, second information for the second phase of the second boot-up procedure based at least in part on the recording of the second phase of the first boot-up procedure and in response to detecting that the second phase of the second boot-up procedure is occurring, and receiving a second command to transmit the second information to the host system as part of the second phase of the second boot-up procedure after transferring the second information from the non-volatile memory device to the volatile memory device. 
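To make the flow of the second (replay) boot-up concrete, the following C sketch combines the pre-loading, cache-hit checking, and threshold-based rerecording described above. The cache and trace helpers (cache_preload, cache_hit, record_missed_lba, rerecord_phase) and the threshold value are assumptions made for illustration and are not part of the described apparatus.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed accessors over the trace recorded during the first boot-up. */
size_t   trace_count(int phase);
uint64_t trace_lba(int phase, size_t i);
uint32_t trace_length(int phase, size_t i);

/* Assumed memory-system hooks: stage data from the non-volatile device into
 * the volatile device (cache), test whether an LBA is already cached, note a
 * missed LBA, and fall back to re-recording a phase. */
void cache_preload(uint64_t lba, uint32_t length);
bool cache_hit(uint64_t lba);
void record_missed_lba(uint64_t lba);
void rerecord_phase(int phase);

#define MISS_THRESHOLD 8 /* example value; the threshold is implementation-defined */

/* Replay one phase: pre-load the recorded LBAs, then serve the incoming
 * requests (modeled here as an array of LBAs). If too many requests miss the
 * cache, for example because a system update changed the boot image, exit
 * replay and re-record the phase. */
void replay_phase(int phase, const uint64_t *requested, size_t n_requests)
{
    size_t i;
    unsigned misses = 0;

    /* Stage everything recorded for this phase before commands arrive. */
    for (i = 0; i < trace_count(phase); i++)
        cache_preload(trace_lba(phase, i), trace_length(phase, i));

    for (i = 0; i < n_requests; i++) {
        if (cache_hit(requested[i]))
            continue; /* served from the cache with reduced latency */
        record_missed_lba(requested[i]);
        misses++;
        if (misses >= MISS_THRESHOLD) {
            rerecord_phase(phase); /* exit replay and learn the new pattern */
            return;
        }
    }
}
```

Modeling the host requests as an array is a simplification; in the described system the commands would arrive over the host interface and the memory system would respond to each command as it is received.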
Some examples of the method600and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for recording a third phase of the first boot-up procedure, detecting that the third phase of the second boot-up procedure may be occurring, transferring, from a third logical block address of the non-volatile memory device to the volatile memory device, third information for the third phase of the second boot-up procedure based at least in part on the recording of the third phase of the first boot-up procedure and in response to detecting that the third phase of the second boot-up procedure may be occurring, and receiving a third command to transmit the third information to the host system as part of transferring the third information from the non-volatile memory device to the volatile memory device. In some examples of the method600and the apparatus described herein, the first phase includes a universal flash storage boot phase, the second phase includes kernel loading boot phase, and the third phase includes a kernel start boot phase. In some examples of the method600and the apparatus described herein, recording the first phase of the first boot-up procedure and the second phase of the first boot-up procedure may include operations, features, circuitry, logic, means, or instructions for recording the first phase of the first boot-up procedure, performing a reset on one or more components of the non-volatile memory device based at least in part on recording the first phase of the first boot-up procedure, and recording the second phase of the first boot-up procedure based at least in part on performing the reset on the one or more components of the non-volatile memory device. In some examples of the method600and the apparatus described herein, performing the reset on the one or more components may include operations, features, circuitry, logic, means, or instructions for isolating the one or more components of the non-volatile memory device from a voltage source and recoupling the one or more components of the non-volatile memory device with the voltage source. Some examples of the method600and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for performing a second reset on the one or more components of the non-volatile memory device before recording the first phase of the first boot-up procedure, where recording the first phase of the first boot-up procedure may be based at least in part on performing the second reset. Some examples of the method600and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for performing a second reset on the one or more components of the non-volatile memory device based at least in part on recording the second phase of the first boot-up procedure and recording a third phase of the first boot-up procedure based at least in part on performing the second reset. 
Some examples of the method600and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for receiving one or more commands to transmit one or more instances of information to the host system, retrieving the one or more instances of information from the non-volatile memory device based at least in part on receiving the one or more commands, where the one or more instances of information may be not transferred to the volatile memory device between detecting that the first phase of the second boot-up procedure may be occurring and receiving the one or more commands, determining that a quantity of the one or more instances of information may be greater than a threshold quantity, and rerecording the first phase or the second phase of the second boot-up procedure based at least in part on determining that the quantity of the one or more instances of information may be greater than the threshold quantity. Some examples of the method600and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for receiving one or more commands to transmit one or more instances of information to the host system, retrieving the one or more instances of information from the non-volatile memory device based at least in part on receiving the one or more commands, where the one or more instances of information may be not transferred to the volatile memory device between detecting that the first phase of the second boot-up procedure may be occurring and receiving the one or more commands, determining that a quantity of the one or more instances of information may be less than a threshold quantity, and transferring third information stored at a third logical block address of the non-volatile memory device into the volatile memory device for the first phase or the second phase of the second boot-up procedure based at last in part on determining that the quantity of the one or more instances of information may be less than the threshold quantity. In some examples of the method600and the apparatus described herein, the non-volatile memory device includes a NAND memory device and the volatile memory device includes a DRAM memory device. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined. An apparatus is described. 
The apparatus may include a non-volatile memory device, a volatile memory device, a controller coupled with the non-volatile memory device and the volatile memory device, where the controller is configured to cause the apparatus to record a first phase of a first boot-up procedure and a second phase of the first boot-up procedure, detect that the first phase of a second boot-up procedure is occurring, where the second boot-up procedure occurs after the first boot-up procedure, transfer, from a first logical block address of the non-volatile memory device to the volatile memory device, first information for the first phase of the second boot-up procedure based at least in part on the recording of the first phase of the first boot-up procedure and in response to detecting that the first phase of the second boot-up procedure is occurring, receive a first command to transmit the first information to a host system as part of the first phase of the second boot-up procedure after transferring the first information from the non-volatile memory device to the volatile memory device, detect that the second phase of the second boot-up procedure is occurring, transfer, from a second logical block address of the non-volatile memory device to the volatile memory device, second information for the second phase of the second boot-up procedure based at least in part on the recording of the second phase of the first boot-up procedure and in response to detecting that the second phase of the second boot-up procedure is occurring, and receive a second command to transmit the second information to the host system as part of the second phase of the second boot-up procedure after transferring the second information from the non-volatile memory device to the volatile memory device. In some examples, the apparatus may include record a third phase of the first boot-up procedure, detect that the third phase of the second boot-up procedure may be occurring, transfer, from a third logical block address of the non-volatile memory device to the volatile memory device, third information for the third phase of the second boot-up procedure based at least in part on the recording of the third phase of the first boot-up procedure and in response to detecting that the third phase of the second boot-up procedure may be occurring, and receive a third command to transmit the third information to the host system as part of transferring the third information from the non-volatile memory device to the volatile memory device. In some examples of the apparatus, the first phase includes a universal flash storage boot phase, the second phase includes kernel loading boot phase, and the third phase includes a kernel start boot phase. In some examples, the apparatus may include record the first phase of the first boot-up procedure, perform a reset on one or more components of the non-volatile memory device based at least in part on recording the first phase of the first boot-up procedure, and record the second phase of the first boot-up procedure based at least in part on performing the reset on the one or more components of the non-volatile memory device. In some examples, the apparatus may include isolate the one or more components of the non-volatile memory device from a voltage source and recouple the one or more components of the non-volatile memory device with the voltage source. 
In some examples, the apparatus may include perform a second reset on the one or more components of the non-volatile memory device before recording the first phase of the first boot-up procedure, where recording the first phase of the first boot-up procedure may be based at least in part on performing the second reset. In some examples, the apparatus may include perform a second reset on the one or more components of the non-volatile memory device based at least in part on recording the second phase of the first boot-up procedure and record a third phase of the first boot-up procedure based at least in part on performing the second reset. In some examples, the apparatus may include receive one or more commands to transmit one or more instances of information to the host system, retrieve the one or more instances of information from the non-volatile memory device based at least in part on receiving the one or more commands, where the one or more instances of information may be not transferred to the volatile memory device between detecting that the first phase of the second boot-up procedure may be occurring and receiving the one or more commands, determine that a quantity of the one or more instances of information may be greater than a threshold quantity, and rerecord the first phase or the second phase of the second boot-up procedure based at least in part on determining that the quantity of the one or more instances of information may be greater than the threshold quantity. In some examples, the apparatus may include receive one or more commands to transmit one or more instances of information to the host system, retrieve the one or more instances of information from the non-volatile memory device based at least in part on receiving the one or more commands, where the one or more instances of information may be not transferred to the volatile memory device between detecting that the first phase of the second boot-up procedure may be occurring and receiving the one or more commands, determine that a quantity of the one or more instances of information may be less than a threshold quantity, and transfer third information stored at a third logical block address of the non-volatile memory device into the volatile memory device for the first phase or the second phase of the second boot-up procedure based at last in part on determining that the quantity of the one or more instances of information may be less than the threshold quantity. In some examples of the apparatus, the non-volatile memory device includes a NAND memory device and the volatile memory device includes a DRAM memory device. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. 
Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to a condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. If a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other if the switch is open. If a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. The terms “if,” “when,” “based on,” or “based at least in part on” may be used interchangeably. In some examples, if the terms “if,” “when,” “based on,” or “based at least in part on” are used to describe a conditional action, a conditional process, or connection between portions of a process, the terms may be interchangeable. The term “in response to” may refer to one condition or action occurring at least partially, if not fully, as a result of a previous condition or action. For example, a first condition or action may be performed and a second condition or action may at least partially occur as a result of the previous condition or action occurring (whether directly after or after one or more other intermediate conditions or actions occurring after the first condition or action). Additionally, the terms “directly in response to” or “in direct response to” may refer to one condition or action occurring as a direct result of a previous condition or action. In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the previous condition or action occurring independent of whether other conditions or actions occur. 
In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the previous condition or action occurring, such that no other intermediate conditions or actions occur between the earlier condition or action and the second condition or action or a limited quantity of one or more intermediate steps or actions occur between the earlier condition or action and the second condition or action. Any condition or action described herein as being performed “based on,” “based at least in part on,” or “in response to” some other step, action, event, or condition may additionally or alternatively (e.g., in an alternative example) be performed “in direct response to” or “directly in response to” such other condition or action unless otherwise specified. The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In some other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” if a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” if a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. 
In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a hyphen and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. For example, the various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. 
By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
78,587
11861371
DETAILED DESCRIPTION For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents may be helpful:
Section A provides an introduction to example embodiments of a system for automatically selecting a peripheral device from among a plurality of simultaneously connected peripheral devices;
Section B describes a network environment which may be useful for practicing embodiments described herein;
Section C describes a computing system which may be useful for practicing embodiments described herein;
Section D describes embodiments of systems and methods for accessing computing resources using a cloud computing environment;
Section E describes an example implementation of a resource delivery system which may be useful for practicing embodiments described herein;
Section F provides a more detailed description of example embodiments of the system introduced in Section A; and
Section G describes example implementations of methods, systems/devices, and computer-readable media in accordance with the present disclosure.
A. Introduction to Illustrative Embodiments of a System for Automated Transfer of Peripheral Device Operations
Applications on a computing system may utilize various devices for receiving input from a user and/or providing output to a user. These devices may be internal or external to the computing system, and may be connected through a wired or wireless connection. The computing system may have multiples of similar devices connected at one time. For example, a computing system may have both a wired headset and a wireless headset connected simultaneously. A user may employ multiple peripheral devices for various reasons and preferences. For instance, several peripheral devices of the same type may be simultaneously connected to a computing system and a user may physically switch among such devices based on user preferences or needs. For example, a user may prefer a headset for gaming and prefer a speaker for listening to music. As another example, a user may prefer a wireless headset, but may switch to a wired headset should the battery run out or interference occur with the wireless headset. A user may initialize an application, such as an audio/video conference, that utilizes a connected peripheral device, such as a headset. The user may configure a particular peripheral device as a default or preferred device for the computing system. While such a default or preferred device designation may be advantageous in some circumstances, the inventors have recognized and appreciated that the default or preferred device may not be the most appropriate device for a given application or scenario. For instance, it may be undesirable to use loudspeakers when in a public place. Thus, the user may need to manually direct the computing system to use a different peripheral device as the active device in such a scenario. This may require the user to not only physically switch devices, but also take one or more steps to designate the different device as the active device on their computer. The need to manually take such extra step(s) to designate the different device as the active device may result in a poor user experience. For example, using conventional techniques, if a user switches headsets during a video conference, the user would need to take the time to locate and manipulate the user interface feature for designating a new active device. 
Further, as a result of that extra effort and the added delay of having to select the different headset as the active device, the user may miss an important part of the conversation. Offered are systems and techniques for determining situations in which a user switches from using one peripheral device connected to a computing system to another peripheral device that is connected (e.g., simultaneously connected) to the computing system (e.g., by taking off a wireless headset and putting on a wired headset), and automatically designating the newly-used peripheral device as the active device (e.g., one currently in use) based on that determination. As explained in more detail below, in some implementations, the computing system may employ a device control engine that is configured to determine when the user physically switches between peripheral devices, such as a swap from a first peripheral device to a second peripheral device, as well as to update the computing system for data input and/or output from the peripheral device the user is presently utilizing. In some implementations, the device control engine may utilize sensors to identify a change between peripheral devices and determine which device is to be designated as the active device. In some implementations, the device control engine may additionally or alternatively use stored behavior data to select an appropriate peripheral device for use as the active device. For example, the device control engine may select an appropriate peripheral device based on one or more criteria, such as user preference, the application, time of day, location, etc. For the purposes of this description, the terms “active device” and “selected device” may be used interchangeably. As used herein, both such terms refer to a device that is currently in use by the computing system such that input and/or output data is being transferred between the device and the computing system. In some implementations, when one device is the active device, other connected devices of a similar type may be unable to transfer input and/or output data to and/or from the computing system to prevent conflicts. For the purposes of this description, the term “connected device” refers to a device with an established communication channel to the computing system which would, if the computing system made the device the active device, allow for input and/or output of data between the device and the computing system. The manner in which such a communication channel is established may vary based on the type of connection. For example, a peripheral device with a wired connection, such as a connection through Universal Serial Bus (USB) port or a headphone jack, may have an established communication channel with the computing system as long as the wired device is physically plugged into an appropriate port or jack of the computing system. Wireless peripheral devices may utilize wireless technologies such as Wi-Fi, Bluetooth, radio frequency (RF), Zigbee, etc. A technology such as Bluetooth requires the device and the computing system to be paired. This establishes that permission is granted for the device and computing system to communicate. After being paired, the device and computing system are capable of communicating, and thus being a connected device, only when the device is within a physical range of the computing system. 
For wireless devices, the computing system may be aware of various wireless devices, such as if the wireless device and computing system are paired, but the wireless device is a connected device for the computing system only when the wireless device is within range of the computing system such that a communication channel can be established. Depending on the type of device, the computing system, or operating system of the computing system, may determine that communication is possible with multiple devices, such as a wired headset and a Bluetooth headset that is within range of the computing system. In such a circumstance, the computing system/operating system may have multiple connected devices of the same type, and one of those connected devices may need to be selected as the active device for a particular type of input and/or output (e.g., audio input/output for a video conferencing application). FIG.1Ais a high-level diagram illustrating an example system for automatic transfer of peripheral device operations, from among a plurality of simultaneously connected peripheral devices, such as a wired device108and a wireless device110, as the active device for use with an application106, in accordance with some aspects of the present disclosure. As shown, the computing system100may be operated by a user102, and may include one or more applications106which may be configured to enable the user102to perform various tasks with the computing system100. The computing system100may, for example, be a client device202, such as described below. Examples of components that may be used to implement the computing system100, as well as examples of computing environments in which such components may be deployed, are described below in connection withFIGS.2-4. In the example implementation shown inFIG.1A, multiple similar peripheral devices, including the wired device108and the wireless device110, are simultaneously connected to computing system100. In the illustrated example, the wired device108and the wireless device110are both headsets that include speakers for providing audio to the user102and a microphone for receiving audio from the user102. The computing system100and/or the application106may be configured such that the user102can utilize only one of these devices at a time. Accordingly, one of the peripheral devices108,110may need to be designated as the active device while the other remains inactive. As shown, the computing system100may include a device control engine104that monitors the peripheral devices108,110to determine which device the user102is presently using. As described in more detail below, the device control engine104may, for example, receive one or more sensor inputs, such as images captured from a camera, audio received from a microphone, Bluetooth® signal strength, etc., to make a determination about the presently utilized peripheral device108,110. The device control engine104may access a database112to retrieve information about the user102and the peripheral devices108,110connected to the computing system100. For example, the database112may include data identifying a preferred headset of the user102. As explained in more detail below, the preferred headset may be the headset that the device control engine104initially selects as the active device until the device control engine104determines which headset the user102is presently using. FIG.1Afurther shows an example process120the device control engine104may perform. 
At an operation122of the process120, while both a first device of a first type and a second device of the first type are simultaneously connected to a client device202, the device control engine104may use the first device, rather than the second device, as an active device of the first type for at least one application106, the first device and the second device being peripheral devices. The user102may initially use one of the peripheral devices connected to the computing system100, such as wireless device110. The device control engine104may indicate the wireless device110as the active device for the application106and/or the computing system100in a default manner. The device control engine104may, for example, select the wireless device110as the active device based on the wireless device110having been identified as the default or preferred device, e.g., based on data stored in the database112. A device type may be based at least in part on the functionality of a device or at least in part on the input and output of the device. For example, a device type may be based at least in part on the functionality of capturing images, such as a camera. In another example, a device type may be based at least in part on providing audio output, such as a headset or speaker. The headset and speaker, while different in form, both provide the same functionality of audio output, and thus may be considered the same device type for the computing system100. Thus, a first device of a first type and a second device of the first type, for example, may be present when the wireless device110and the wired device108are both audio devices that are simultaneously connected to the computing system100. The user102may switch from using the wireless device110to the wired device108. For example, the charge on the battery in the wireless device110may expire, thus prompting the user102to switch to the wired device108as a backup. At an operation124of the process120, while both the first device and the second device remain connected to the client device202, the device control engine104may determine a switch from the first device (e.g., the wireless device110) to the second device (e.g., the wired device108) by a user102. For example, the device control engine104may determine that a signal is no longer being received from the wireless device110, such as when the device powers down. The device control engine104may attempt to determine which device the user102is using presently. For example, a camera may be used to capture images of the user102to determine that the wired device108is in use. In another example, movement sensor signals from a device, such as from a gyroscope or accelerometer, may be received that indicate the user102has picked up the wired device108. At an operation126of the process120, based at least in part on the switch from the first device (e.g., the wireless device110) to the second device (e.g., the wired device108), the device control engine104may use the second device (e.g., the wired device108), rather than the first device (e.g., the wireless device110), as the active device of the first type for the at least one application106. For example, the user102may be using an application106for playing music. Based on determining the user102has switched from the wireless device110to the wired device108, the device control engine104may select the wired device108as the active device such that the computing system100performs operations, such as providing the music from application106, to the wired device108. 
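The following is a hedged, simplified Python sketch of the flow of operations 122, 124, and 126 described above. The DeviceControlEngine class, the signals dictionary standing in for sensor inputs, and the application.route_io_to hook are illustrative assumptions rather than an actual implementation, which would rely on operating-system device and sensor APIs.

```python
# Illustrative sketch of operations 122-126. Names and callbacks are
# hypothetical; real implementations would use OS-specific device APIs.

class DeviceControlEngine:
    def __init__(self, connected_devices, preferred_device):
        self.connected = list(connected_devices)   # simultaneously connected devices
        self.active = preferred_device             # operation 122: default/preferred device

    def poll_sensors(self, signals):
        """Return the device the user appears to be using, based on sensor
        inputs (camera frames, wireless signal strength, motion data, etc.),
        here reduced to a simple name -> score mapping."""
        candidates = [d for d in self.connected if signals.get(d, 0) > 0]
        return max(candidates, key=lambda d: signals[d]) if candidates else None

    def update_active_device(self, signals, application):
        in_use = self.poll_sensors(signals)          # operation 124: detect a switch
        if in_use is not None and in_use != self.active:
            self.active = in_use                     # operation 126: swap the active device
            application.route_io_to(self.active)     # hypothetical application hook
        return self.active
```

For example, if the "wireless headset" signal drops to zero and the "wired headset" signal rises, the next call to update_active_device would designate the wired headset as the active device without any manual selection by the user.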
The wired device108and the wireless device110are provided as examples of two different styles of the same type of device, but the systems and techniques offered here are not limited to these two options. For example, a user102may have two wireless devices of the same type connected simultaneously. In this instance, data such as signal strength may be used to determine the device presently in use. Further, in some implementations, one or more devices may be integrated into the computing system100, such as a laptop computer with built-in microphone and speakers. In such implementations, the device control engine104may select the active device from among one or more integrated devices and one or more external devices of the same type, and/or from among multiple different integrated devices of the same type. FIG.1Billustrates an example embodiment of the system for automatic transfer of peripheral device operations, from among a plurality of simultaneously connected peripheral devices, such as a wired device108and a wireless device110, as the active device for use in a virtual workspace, in accordance with some aspects of the present disclosure. In some implementations, the computing system100may include a client device202to which a virtual application or desktop is delivered by a remote computing system, such as a shared computing resource502(described below in Section E). As explained in more detail below, in some such implementations, at least a portion of the device control engine104may be included within a resource access application524that the client device202uses to access the application or desktop delivered by a resource delivery agent504of a shared computing resource502. When, in such implementations, the device control engine104determines that the user has switched from using one peripheral device of the client device202(e.g., the wireless device110) to another simultaneously connected peripheral device of the same type (e.g., the wired device108), the device control engine104may cause the newly-used peripheral device to begin sending and/or receiving data over one or more virtual channels (e.g., as a part of the connection548described below in connection withFIG.5C) established between the resource access application524and the resource delivery agent504for the type of peripheral device in question (e.g., an audio input/output device). In addition, in some implementations, the device control engine104may inform the resource delivery agent504of the newly-used peripheral device that will be sending and/or receiving data over such virtual channel(s), thus allowing the delivered application or desktop to properly communicate with the newly-used peripheral device via the virtual channel(s). FIG.1Cillustrates a first example user interface showing a setting to automatically transfer operations between a plurality of peripheral devices simultaneously connected to a computing device, in accordance with some embodiments of the present disclosure. The illustrated user interface may be presented, for example, by a computing system100that includes a device control engine104(e.g., as shown inFIG.1A). As shown, the list150of connected devices may be supplemented to include a device monitor option154as one of the selectable peripheral devices. As explained in more detail below, selecting the device monitor option154may trigger the device control engine104to automatically transfer operations to a connected peripheral device as the active device for the computing system100and/or the application106. 
In the circumstance illustrated inFIG.1C, the text of the device monitor option154indicates that the Bluetooth Headset is the presently active device, based on the automated selection process employed by the device control engine104. FIG.1Dillustrates a second example user interface showing a setting to automatically transfer operations between a plurality of peripheral devices simultaneously connected to a computing device, in accordance with some embodiments of the present disclosure.FIG.1Dillustrates the same user interface asFIG.1C, but for a circumstance in which the device control engine104detected that the user102has switched peripheral devices. The list150shown inFIG.1Dstill indicates that the device monitor option156is selected, but indicates the active device, as determined by the automated selection process performed by the device control engine104, is now the wired headset, rather than the Bluetooth headset. That change may be indicated, for example, because the device control engine104may have determined that the user102switched from using the Bluetooth headset to using the wired headset, and thus may have automatically switched the active device from the Bluetooth headset to the wired headset, without requiring the user102to make a manual selection of the different active device in list150. In some implementations, a notification may be displayed in the user interface to notify the user102that the device control engine104has detected a switch between the peripheral devices. The notification may include identification of the presently active device. In some implementations, the notification may prompt the user102to confirm the device presently identified as the active device. In such an implementation, the user102may reject the device identified in the notification as the presently active device, thus prompting the device control engine to return the previous device as the designated active device. The device control engine104may be located at any of a number of locations within the computing system100. In some implementations, for example, the device control engine104may be included, in whole or in part, as a part of an operating system of a client device202or a shared computing resource502. Additionally or alternatively, in some implementations, the device control engine104may be implemented, in whole or in part, as part of an application with which an automatically selected peripheral device is to be used. As yet another alternative, the device control engine104may be deployed as one or more separate applications executed on a client device202and/or a shared computing resource502. When implemented as part of an application with which an automatically selected peripheral device is to be used, the device control engine104may monitor device usage for devices that relate to the application. For example, a music application may monitor headphone usage but would not monitor camera usage by the user. Such an application level device control engine104may monitor usage of devices and identify which device is the active device for the application. When the application is executing or in use by the computing system100, the device control engine104for the application may select an active device for the application from devices that are applicable to the application. 
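As a rough illustration of an application-level device control engine that only monitors device types relevant to its application, consider the following Python sketch. The RELEVANT_TYPES table, the device-type labels, and the usage_signal scores are invented for the example and are not drawn from the disclosure.

```python
# Hypothetical sketch of an application-scoped device monitor: it tracks only
# device types relevant to its application and picks the active device of each
# relevant type from those.

RELEVANT_TYPES = {
    "music_player": {"audio_output"},
    "video_conference": {"audio_output", "audio_input", "camera"},
}

def select_active_devices(app_name, connected_devices, usage_signal):
    """connected_devices: list of (name, device_type) tuples for devices that are
    simultaneously connected; usage_signal: name -> score indicating apparent use."""
    relevant = [d for d in connected_devices
                if d[1] in RELEVANT_TYPES.get(app_name, set())]
    active = {}
    for dev_type in {d[1] for d in relevant}:
        candidates = [d for d in relevant if d[1] == dev_type]
        # Pick the candidate the user appears to be using (highest signal).
        active[dev_type] = max(candidates, key=lambda d: usage_signal.get(d[0], 0))[0]
    return active
```

Under this sketch, a music application would consider only audio output devices when choosing its active device, while a video conferencing application would also select among microphones and cameras.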
Additional details and example implementations of embodiments of the present disclosure are set forth below in Section F, following a description of example systems and network environments in which such embodiments may be deployed. B. Network Environment Referring toFIG.2, an illustrative network environment200is depicted. As shown, the network environment200may include one or more clients202(1)-202(n) (also generally referred to as local machine(s)202or client(s)202) in communication with one or more servers204(1)-204(n) (also generally referred to as remote machine(s)204or server(s)204) via one or more networks206(1)-206(n) (generally referred to as network(s)206). In some embodiments, a client202may communicate with a server204via one or more appliances208(1)-208(n) (generally referred to as appliance(s)208or gateway(s)208). In some embodiments, a client202may have the capacity to function as both a client node seeking access to resources provided by a server204and as a server204providing access to hosted resources for other clients202. Although the embodiment shown inFIG.2shows one or more networks206between the clients202and the servers204, in other embodiments, the clients202and the servers204may be on the same network206. When multiple networks206are employed, the various networks206may be the same type of network or different types of networks. For example, in some embodiments, the networks206(1) and206(n) may be private networks such as local area network (LANs) or company Intranets, while the network206(2) may be a public network, such as a metropolitan area network (MAN), wide area network (WAN), or the Internet. In other embodiments, one or both of the network206(1) and the network206(n), as well as the network206(2), may be public networks. In yet other embodiments, all three of the network206(1), the network206(2) and the network206(n) may be private networks. The networks206may employ one or more types of physical networks and/or network topologies, such as wired and/or wireless networks, and may employ one or more communication transport protocols, such as transmission control protocol (TCP), internet protocol (IP), user datagram protocol (UDP) or other similar protocols. In some embodiments, the network(s)206may include one or more mobile telephone networks that use various protocols to communicate among mobile devices. In some embodiments, the network(s)206may include one or more wireless local-area networks (WLANs). For short range communications within a WLAN, clients202may communicate using 802.11, Bluetooth, and/or Near Field Communication (NFC). As shown inFIG.2, one or more appliances208may be located at various points or in various communication paths of the network environment200. For example, the appliance208(1) may be deployed between the network206(1) and the network206(2), and the appliance208(n) may be deployed between the network206(2) and the network206(n). In some embodiments, the appliances208may communicate with one another and work in conjunction to, for example, accelerate network traffic between the clients202and the servers204. In some embodiments, appliances208may act as a gateway between two or more networks. In other embodiments, one or more of the appliances208may instead be implemented in conjunction with or as part of a single one of the clients202or servers204to allow such device to connect directly to one of the networks206. 
In some embodiments, one or more appliances208may operate as an application delivery controller (ADC) to provide one or more of the clients202with access to business applications and other data deployed in a datacenter, the cloud, or delivered as Software as a Service (SaaS) across a range of client devices, and/or provide other functionality such as load balancing, etc. In some embodiments, one or more of the appliances208may be implemented as network devices sold by Citrix Systems, Inc., of Fort Lauderdale, FL, such as Citrix Gateway™ or Citrix ADC™. A server204may be any server type such as, for example: a file server; an application server; a web server; a proxy server; an appliance; a network appliance; a gateway; an application gateway; a gateway server; a virtualization server; a deployment server; a Secure Sockets Layer Virtual Private Network (SSL VPN) server; a firewall; a web server; a server executing an active directory; a cloud server; or a server executing an application acceleration program that provides firewall functionality, application functionality, or load balancing functionality. A server204may execute, operate or otherwise provide an application that may be any one of the following: software; a program; executable instructions; a virtual machine; a hypervisor; a web browser; a web-based client; a client-server application; a thin-client computing client; an ActiveX control; a Java applet; software related to voice over internet protocol (VoIP) communications like a soft IP telephone; an application for streaming video and/or audio; an application for facilitating real-time-data communications; an HTTP client; an FTP client; an Oscar client; a Telnet client; or any other set of executable instructions. In some embodiments, a server204may execute a remote presentation services program or other program that uses a thin-client or a remote-display protocol to capture display output generated by an application executing on a server204and transmit the application display output to a client device202. In yet other embodiments, a server204may execute a virtual machine providing, to a user of a client202, access to a computing environment. The client202may be a virtual machine. The virtual machine may be managed by, for example, a hypervisor, a virtual machine manager (VMM), or any other hardware virtualization technique within the server204. As shown inFIG.2, in some embodiments, groups of the servers204may operate as one or more server farms210. The servers204of such server farms210may be logically grouped, and may either be geographically co-located (e.g., on premises) or geographically dispersed (e.g., cloud based) from the clients202and/or other servers204. In some embodiments, two or more server farms210may communicate with one another, e.g., via respective appliances208connected to the network206(2), to allow multiple server-based processes to interact with one another. As also shown inFIG.2, in some embodiments, one or more of the appliances208may include, be replaced by, or be in communication with, one or more additional appliances, such as WAN optimization appliances212(1)-212(n), referred to generally as WAN optimization appliance(s)212. 
For example, WAN optimization appliances212may accelerate, cache, compress or otherwise optimize or improve performance, operation, flow control, or quality of service of network traffic, such as traffic to and/or from a WAN connection, such as optimizing Wide Area File Services (WAFS), accelerating Server Message Block (SMB) or Common Internet File System (CIFS). In some embodiments, one or more of the appliances212may be a performance enhancing proxy or a WAN optimization controller. In some embodiments, one or more of the appliances208,212may be implemented as products sold by Citrix Systems, Inc., of Fort Lauderdale, FL, such as Citrix SD-WAN™ or Citrix Cloud™. For example, in some implementations, one or more of the appliances208,212may be cloud connectors that enable communications to be exchanged between resources within a cloud computing environment and resources outside such an environment, e.g., resources hosted within a data center of an organization. C. Computing Environment FIG.3illustrates an example of a computing system300that may be used to implement one or more of the respective components (e.g., the clients202, the servers204, the appliances208,212) within the network environment200shown inFIG.2. As shown inFIG.3, the computing system300may include one or more processors302, volatile memory304(e.g., RAM), non-volatile memory306(e.g., one or more hard disk drives (HDDs) or other magnetic or optical storage media, one or more solid state drives (SSDs) such as a flash drive or other solid state storage media, one or more hybrid magnetic and solid state drives, and/or one or more virtual storage volumes, such as cloud storage, or a combination of such physical storage volumes and virtual storage volumes or arrays thereof), a user interface (UI)308, one or more communications interfaces310, and a communication bus312. The user interface308may include a graphical user interface (GUI)314(e.g., a touchscreen, a display, etc.) and one or more input/output (I/O) devices316(e.g., a mouse, a keyboard, etc.). The non-volatile memory306may store an operating system318, one or more applications320, and data322such that, for example, computer instructions of the operating system318and/or applications320are executed by the processor(s)302out of the volatile memory304. Data may be entered using an input device of the GUI314or received from I/O device(s)316. Various elements of the computing system300may communicate via the communication bus312. The computing system300shown inFIG.3is presented merely as an example, as the clients202, servers204and/or appliances208and212may be implemented by any computing or processing environment and with any type of machine or set of machines that may have suitable hardware and/or software capable of operating as described herein. The processor(s)302may be implemented by one or more programmable processors executing one or more computer programs to perform the functions of the system. As used herein, the term “processor” describes an electronic circuit that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the electronic circuit or soft coded by way of instructions held in a memory device. A “processor” may perform the function, operation, or sequence of operations using digital values or using analog signals. 
In some embodiments, the “processor” can be embodied in one or more application specific integrated circuits (ASICs), microprocessors, digital signal processors, microcontrollers, field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), multi-core processors, or general-purpose computers with associated memory. The “processor” may be analog, digital or mixed-signal. In some embodiments, the “processor” may be one or more physical processors or one or more “virtual” (e.g., remotely located or “cloud”) processors. The communications interfaces310may include one or more interfaces to enable the computing system300to access a computer network such as a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or the Internet through a variety of wired and/or wireless connections, including cellular connections. As noted above, in some embodiments, one or more computing systems300may execute an application on behalf of a user of a client computing device (e.g., a client202shown inFIG.2), may execute a virtual machine, which provides an execution session within which applications execute on behalf of a user or a client computing device (e.g., a client202shown inFIG.2), such as a hosted desktop session, may execute a terminal services session to provide a hosted desktop environment, or may provide access to a computing environment including one or more of: one or more applications, one or more desktop applications, and one or more desktop sessions in which one or more applications may execute. D. Systems and Methods for Delivering Shared Resources Using a Cloud Computing Environment Referring toFIG.4, a cloud computing environment400is depicted, which may also be referred to as a cloud environment, cloud computing or cloud network. The cloud computing environment400can provide the delivery of shared computing services and/or resources to multiple users or tenants. For example, the shared resources and services can include, but are not limited to, networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, databases, software, hardware, analytics, and intelligence. In the cloud computing environment400, one or more clients202(such as those described in connection withFIG.2) are in communication with a cloud network410. The cloud network410may include back-end platforms, e.g., servers, storage, server farms and/or data centers. The clients202may correspond to a single organization/tenant or multiple organizations/tenants. More particularly, in one example implementation, the cloud computing environment400may provide a private cloud serving a single organization (e.g., enterprise cloud). In another example, the cloud computing environment400may provide a community or public cloud serving multiple organizations/tenants. In some embodiments, a gateway appliance(s) or service may be utilized to provide access to cloud computing resources and virtual sessions. By way of example, Citrix Gateway, provided by Citrix Systems, Inc., may be deployed on-premises or on public clouds to provide users with secure access and single sign-on to virtual, SaaS and web applications. Furthermore, to protect users from web threats, a gateway such as Citrix Secure Web Gateway may be used. Citrix Secure Web Gateway uses a cloud-based service and a local cache to check for URL reputation and category. 
In still further embodiments, the cloud computing environment400may provide a hybrid cloud that is a combination of a public cloud and one or more resources located outside such a cloud, such as resources hosted within one or more data centers of an organization. Public clouds may include public servers that are maintained by third parties to the clients202or the enterprise/tenant. The servers may be located off-site in remote geographical locations or otherwise. In some implementations, one or more cloud connectors may be used to facilitate the exchange of communications between one or more resources within the cloud computing environment400and one or more resources outside of such an environment. The cloud computing environment400can provide resource pooling to serve multiple users via clients202through a multi-tenant environment or multi-tenant model with different physical and virtual resources dynamically assigned and reassigned responsive to different demands within the respective environment. The multi-tenant environment can include a system or architecture that can provide a single instance of software, an application or a software application to serve multiple users. In some embodiments, the cloud computing environment400can provide on-demand self-service to unilaterally provision computing capabilities (e.g., server time, network storage) across a network for multiple clients202. By way of example, provisioning services may be provided through a system such as Citrix Provisioning Services (Citrix PVS). Citrix PVS is a software-streaming technology that delivers patches, updates, and other configuration information to multiple virtual desktop endpoints through a shared desktop image. The cloud computing environment400can provide an elasticity to dynamically scale out or scale in, in response to different demands from one or more clients202. In some embodiments, the cloud computing environment400may include or provide monitoring services to monitor, control and/or generate reports corresponding to the provided shared services and resources. In some embodiments, the cloud computing environment400may provide cloud-based delivery of different types of cloud computing services, such as Software as a service (SaaS)402, Platform as a Service (PaaS)404, Infrastructure as a Service (IaaS)406, and Desktop as a Service (DaaS)408, for example. IaaS may refer to a user renting the use of infrastructure resources that are needed during a specified time period. IaaS providers may offer storage, networking, servers or virtualization resources from large pools, allowing the users to quickly scale up by accessing more resources as needed. Examples of IaaS platforms include AMAZON WEB SERVICES provided by Amazon.com, Inc., of Seattle, Washington, Azure IaaS provided by Microsoft Corporation of Redmond, Washington, RACKSPACE CLOUD provided by Rackspace US, Inc., of San Antonio, Texas, Google Compute Engine provided by Google Inc. of Mountain View, California, and RIGHTSCALE provided by RightScale, Inc., of Santa Barbara, California. PaaS providers may offer functionality provided by IaaS, including, e.g., storage, networking, servers or virtualization, as well as additional resources such as, e.g., the operating system, middleware, or runtime resources. Examples of PaaS include WINDOWS AZURE provided by Microsoft Corporation of Redmond, Washington, Google App Engine provided by Google Inc., and HEROKU provided by Heroku, Inc. of San Francisco, California. 
SaaS providers may offer the resources that PaaS provides, including storage, networking, servers, virtualization, operating system, middleware, or runtime resources. In some embodiments, SaaS providers may offer additional resources including, e.g., data and application resources. Examples of SaaS include GOOGLE APPS provided by Google Inc., SALESFORCE provided by Salesforce.com Inc. of San Francisco, California, or OFFICE 365 provided by Microsoft Corporation. Examples of SaaS may also include data storage providers, e.g. Citrix ShareFile® from Citrix Systems, DROPBOX provided by Dropbox, Inc. of San Francisco, California, Microsoft SKYDRIVE provided by Microsoft Corporation, Google Drive provided by Google Inc., or Apple ICLOUD provided by Apple Inc. of Cupertino, California. Similar to SaaS, DaaS (which is also known as hosted desktop services) is a form of virtual desktop infrastructure (VDI) in which virtual desktop sessions are typically delivered as a cloud service along with the apps used on the virtual desktop. Citrix Cloud from Citrix Systems is one example of a DaaS delivery platform. DaaS delivery platforms may be hosted on a public cloud computing infrastructure, such as AZURE CLOUD from Microsoft Corporation of Redmond, Washington, or AMAZON WEB SERVICES provided by Amazon.com, Inc., of Seattle, Washington, for example. In the case of Citrix Cloud, Citrix Workspace app may be used as a single-entry point for bringing apps, files, and desktops together (whether on-premises or in the cloud) to deliver a unified experience. E. Systems and Methods for Delivering Virtualized Applications and/or Desktops to Client Devices FIG.5Ais a block diagram illustrating key components of a resource delivery system500that may enable a client device202to remotely access one or more virtual applications or desktops running on one or more shared computing resources502. The shared computing resources502may include physical machines and/or virtual (e.g., hypervisor driven) machines, and may be located at a data center, within a cloud computing environment, or elsewhere. As described in more detail below, such shared computing resources502may implement one or more resource delivery agents504, including one or more server delivery agents504aand/or one or more desktop delivery agents504b. The Virtual Delivery Agents (VDAs) of the Citrix Virtual Apps and Desktops™ system offered by Citrix Systems, Inc., of Fort Lauderdale, Florida, are example implementations of the resource delivery agents504. In some implementations, the resource delivery system500may give an information technology (IT) department of an organization control of virtual machines, applications, licensing, and security while providing “anywhere access” for any device. As described below, the resource delivery system500may enable end users to run applications and/or desktops independently of the operating system and interface of the end user's device. Further, the resource delivery system500may enable administrators to manage the network and control access from selected devices or from all devices, as well as to manage an entire network from a single data center. The resource delivery system500shown inFIG.5Amay, for example, correspond to an implementation of a Citrix Virtual Apps and Desktops™ system offered by Citrix Systems, Inc., of Fort Lauderdale, Florida. Such systems employ a unified architecture called FlexCast Management Architecture (FMA). 
Among other things, FMA provides the ability to run multiple versions of Citrix Virtual Apps or Citrix Virtual Desktops™ as well as integrated provisioning. As shown inFIG.5A, in addition to the shared computing resources502, the resource delivery system500may include a gateway508, a client access manager510, one or more resource delivery controllers512, a resource manager514, a resource director516, a license manager518, one or more databases520, and an Active Directory (AD)522or other directory service. The resource delivery controller(s)512may be the central management component of the resource delivery system500. In some implementations, the resource delivery controller(s)512may be installed on at least one server in a data center of an organization. The Delivery Controller of the Citrix Virtual Apps and Desktops™ system offered by Citrix Systems, Inc., of Fort Lauderdale, Florida, is one example implementation of the resource delivery controller(s)512. For reliability and availability, respective resource delivery controllers512may be installed on multiple servers. The resource delivery controller(s)512may communicate with the shared computing resources502to distribute applications and/or desktops, authenticate and manage user access, broker connections between client devices202and resource delivery agents504running on respective shared computing resources502, optimize use connections, and/or load-balance use connections. As described in more detail below, a broker service532(shown inFIGS.5B-5D) of the resource delivery controller(s)512may interact with the database(s)520to track which users are logged on and where, what session resources the users have, and if users need to reconnect to existing applications. In some implementations, the broker service532may execute PowerShell commands and communicate with broker agents556(shown inFIG.5D) of the resource delivery agents504over transmission control protocol (TCP) port “80.” A monitor service560(shown inFIG.5D) may also be provided by the resource delivery controller(s)512to collect historical data concerning the operation of the resource delivery controller(s)512and write such data to the database(s)520. In some implementations, such a monitor service560may use TCP port “80” or “443.” The resource delivery controller(s)512may manage the state of desktops, starting and stopping them based on demand and administrative configuration. In some implementations, the resource delivery controller(s)512may also enable the adjustment of user profiles (stored within the database(s)520) to manage user personalization settings in virtualized or physical Windows environments. In some implementations, the database(s)520may include at least one Microsoft Structured Query Language (SQL) Server database in which configuration and session information may be stored. As noted above, the database(s)520may store the data collected and managed by the services that make up the resource delivery controller(s)512. In some implementations, the database(s)520may be provided within a data center of an organization and may have a persistent connection to the resource delivery controller(s)512. Although not illustrated inFIG.5A, it should be appreciated that the resource delivery system500may also include respective databases associated with the resource manager514, the resource director516, and the license manager518to store data collected and/or used by those components. 
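As a simplified illustration of the session-brokering role described above for the broker service 532 (tracking which users are logged on and where, which session resources they have, and whether they need to reconnect to existing applications), consider the following Python sketch. The BrokerService class and its data layout are hypothetical; the actual broker service persists such state in the database(s) 520 rather than in memory, and communicates with broker agents over the ports noted above.

```python
# Hedged, illustrative sketch of session tracking and connection brokering.
# Names and structures are invented for the example.

class BrokerService:
    def __init__(self):
        self.sessions = {}   # user -> {"agent": ..., "resources": [...]}

    def broker_connection(self, user, requested_resource, available_agents):
        """available_agents: list of dicts such as {"name": "vda-1", "load": 3}."""
        existing = self.sessions.get(user)
        if existing and requested_resource in existing["resources"]:
            return existing["agent"]          # reconnect the user to the existing session
        # Otherwise pick the least-loaded delivery agent (simple load balancing).
        agent = min(available_agents, key=lambda a: a["load"])
        self.sessions[user] = {"agent": agent, "resources": [requested_resource]}
        return agent
```

This is only a sketch of the bookkeeping involved; authentication, licensing checks, and policy application described elsewhere in this section are omitted.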
The resource delivery agents504may be installed on physical or virtual machines that are made available to deliver applications or desktops to users. The resource delivery agents504may enable such machines to register with the resource delivery controller(s)512. The registration of a machine with the resource delivery controller(s)512may cause that machine and the resources it is hosting to be made available to users. The resource delivery agents504may establish and manage the connections between the machines on which they are installed and client devices202. The resource delivery agents504may also verify that a license is available for the user and/or session, and may apply policies that are configured for the session. The resource delivery agents504may communicate session information to the broker service532(shown inFIGS.5B-5D) of the resource delivery controller(s)512through the broker agents556(shown inFIG.5D) in the resource delivery agents504. Such broker agents556may host multiple plugins and collect real-time data. In some implementations, the broker agents556may communicate with the resource delivery controller(s)512over TCP port “80.” In some implementations, the resource delivery agents504may operate with Single-session and/or Multi-session Windows operating systems. The resource delivery agents504for Multi-session Windows operating systems may allow multiple users to connect to the server at one time. The resource delivery agents504for Single-session Windows operating systems, on the other hand, may allow only one user to connect to the desktop at a time. In some implementations, one or more of the resource delivery agents504may alternatively operate with a Linux operating system. When users connect from outside one or more corporate firewalls, e.g., firewalls526aand526bshown inFIG.5A, the gateway508may be used to secure such connections with Transport Layer Security (TLS). The gateway508may, for example, be a Secure Sockets Layer (SSL) Virtual Private Network (VPN) appliance that is deployed in a demilitarized zone (DMZ)528. The gateway508may thus provide a single secure point of access through the corporate firewall526. The client access manager510of the resource delivery system500may authenticate users and manage stores of desktops and/or applications that are available for users to access. In some implementations, the client access manager510may provide an application “storefront” for an enterprise, which may provide users with self-service access to the desktops and/or applications that the enterprise opts to make available to them. In some implementations, the client access manager510may also keep track of users' application subscriptions, shortcut names, and other data. Tracking such data may, for example, help ensure that users have a consistent experience across multiple devices. As shown inFIG.5A, a resource access application524may be installed on client devices202or other endpoints (such as virtual desktops). Such resource access applications524may provide users with quick, secure, self-service access to documents, applications, and/or desktops. The resource access application524may, for example, provide on-demand access to Windows, web, and/or Software as a Service (SaaS) applications. The Citrix Workspace™ app, offered by Citrix Systems, Inc., of Fort Lauderdale, Florida, is one example implementation of such a client-based version of the resource access application524. 
In some implementations, the resource access application524may alternatively operate on a web server (not shown inFIG.5A) and may be accessed using a web browser (also not shown inFIG.5A) installed on the client device202. In some embodiments, for example, the resource access application524may be provided as a hypertext markup language 5 (HTML5) service and may be accessed using an HTML5-compatible web browser. The Citrix Workspace™ app for HTML5, offered by Citrix Systems, Inc., of Fort Lauderdale, Florida, is one example implementation of such a web-based version of the resource access application524. In some embodiments, the resource access application524may intercept network communications from a network stack used by the one or more applications. For example, the resource access application524may intercept a network communication at any point in a network stack and redirect the network communication to a destination desired, managed, and/or controlled by the resource access application524, for example, to intercept and redirect a transport layer connection to an IP address and port controlled and/or managed by resource access application524. The resource access application524may thus, in some embodiments, transparently intercept any protocol layer below the transport layer, such as the network layer, and any protocol layer above the transport layer, such as the session, presentation, or application layers. The resource access application524may, for example, interface with the transport layer to secure, optimize, accelerate, route, and/or load-balance any communications provided via any protocol carried by the transport layer. In some embodiments, the resource access application524may be implemented as an Independent Computing Architecture (ICA) client developed by Citrix Systems, Inc. The resource access application524may perform acceleration, streaming, monitoring, and/or other operations. For example, the resource access application524may accelerate streaming an application from a shared computing resource502running a resource delivery agent504to the client device202. The resource access application524may also perform endpoint detection/scanning and/or collect endpoint information about the client202. For example, the resource access application524may identify and determine one or more client-side attributes, such as: the operating system and/or a version of an operating system, a service pack of the operating system, a running service, a running process, a file, presence or versions of various applications of the client, such as antivirus, firewall, security, and/or other software. The resource manager514shown inFIG.5A, may provide a console from which the configuration and management of applications and desktops that are to be made available to users may be controlled. The Studio component of the Citrix Virtual Apps and Desktops™ system offered by Citrix Systems, Inc., of Fort Lauderdale, Florida, is one example implementation of the resource manager514. In some implementations, the resource manager514may eliminate the need for separate management consoles for managing delivery of applications and desktops. In some embodiments, the resource manager514may provide one or more wizards to guide system administrators through environment setup, creating workloads to host applications and desktops, and assigning applications and desktops to users. In some implementations, the resource manager514may also be used to allocate and track licenses for the resource delivery system500. 
In some embodiments, the resource manager514may get the information it displays from the broker service532of the resource delivery controller(s)512, e.g., communicating over TCP port "80." The resource director516may, for example, be a web-based tool that enables IT support and help desk teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users. The Director component of the Citrix Virtual Apps and Desktops™ system offered by Citrix Systems, Inc., of Fort Lauderdale, Florida, is one example implementation of the resource director516. In some implementations, a single deployment of the resource director516may be used to connect to and monitor multiple resource delivery systems500, such as that shown inFIG.5A. Examples of information that may be displayed by the resource director516include (A) real-time session data from the broker service532of the resource delivery controller(s)512, which may include data the broker service532gets from the broker agent556in the resource delivery agents504, and (B) historical data about the resource delivery system500that may be received, for example, from the monitor service560in the resource delivery controller(s)512. In some implementations, the resource director516may use performance and heuristics data captured by the gateway508(described below) to build analytics from the data and then present such analytics to system administrators. Further, in some implementations, the resource director516may allow system administrators to view and interact with a user's sessions, e.g., using Windows Remote Assistance. The license manager518, as its name implies, may enable the management of licenses within the resource delivery system500. In some implementations, the license manager518may communicate with the resource delivery controller(s)512to manage licensing for a user's session and with the resource manager514to allocate license files. As noted above, in some implementations, the shared computing resources502shown inFIG.5Amay include one or more virtual machines. These can be virtual machines that are used to host applications and/or desktops, as well as virtual machines that are used to host the other components of the resource delivery system500. In some implementations, a hypervisor may be installed on a host computer to host such virtual machines. Although not depicted inFIG.5A, in some implementations, the resource delivery system500may additionally include a performance monitoring service or agent. In some embodiments, one or more dedicated servers (or a dedicated service in a cloud-based environment) may be employed to perform performance monitoring. Performance monitoring may be performed using data collection, aggregation, analysis, management and reporting, for example by software, hardware or a combination thereof. Performance monitoring may include one or more agents for performing monitoring, measurement and data collection activities on one or more clients202(e.g., as a part of the resource access application524), one or more servers204, or one or more other system component(s). In general, the monitoring agents may execute transparently (e.g., in the background) to any application and/or user of the device. In some embodiments, such a monitoring agent may be implemented as components of Citrix Analytics™ by Citrix Systems, Inc., of Fort Lauderdale, FL. 
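As a rough illustration of the periodic data collection such monitoring agents may perform (the specific metrics that may be monitored are described below), the following Python sketch samples a few resource metrics on a fixed frequency and hands each sample to a reporting callback. It assumes the third-party psutil package is available; the metric names, the callback interface, and the sampling period are illustrative assumptions rather than the behavior of any actual monitoring agent or of Citrix Analytics™.

import time
import psutil  # assumed to be installed; used here only to read local resource metrics

def collect_metrics() -> dict:
    """Sample a few illustrative performance metrics (CPU, memory, open connections)."""
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=None),
        "memory_percent": psutil.virtual_memory().percent,
        # net_connections() may require elevated privileges on some platforms
        "connection_count": len(psutil.net_connections()),
    }

def run_agent(report, period_seconds: float = 60.0, iterations: int = 3) -> None:
    """Collect metrics on a predetermined frequency and pass each sample to a callback,
    which in a real deployment might write to a store such as the database(s)520."""
    for _ in range(iterations):
        report(collect_metrics())
        time.sleep(period_seconds)

# Example: print three samples, one per second.
# run_agent(print, period_seconds=1.0, iterations=3)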
The monitoring agents may, for example, monitor, measure, collect, and/or analyze data on a frequency (e.g., a predetermined frequency), based upon an occurrence of given event(s), or in real time during operation of the resource delivery system500. The monitoring agents may, for example, monitor resource consumption and/or performance of hardware, software, and/or communications resources of the clients202, the gateway508(and/or any other components in the DMZ528), and/or the resource delivery controller(s)512, the shared computing resources502, the resource delivery agents504, or any other components shown inFIG.5A. For example, network connections such as a transport layer connection, network latency, bandwidth utilization, end-user response times, application usage and performance, session connections to an application, cache usage, memory usage, processor usage, storage usage, database transactions, client and/or server utilization, active users, duration of user activity, application crashes, errors, or hangs, the time required to log-in to an application, a server, or the application delivery system, and/or other performance conditions and metrics may be monitored. The monitoring agents may provide application performance management for the resource delivery system500. For example, based upon one or more monitored performance conditions or metrics, the resource delivery system500may be dynamically adjusted, for example periodically or in real-time, to optimize application delivery by the resource delivery agents504to the clients202based upon network environment performance and conditions. FIG.5Billustrates an example deployment530of a resource delivery system500, such as that shown inFIG.5A. Such a deployment may be referred to as a "Site." A Site may be made up of machines with dedicated roles that allow for scalability, high availability, and failover, and may provide a solution that is secure by design. As discussed above, such a Site may include servers and/or desktop machines installed with resource delivery agents504, and one or more resource delivery controller(s)512, which may manage access to such servers/machines.FIG.5Billustrates one such resource delivery agent504, and one such resource delivery controller512. As shown inFIG.5B, the resource delivery controller512may include a broker service532. The resource delivery agent504may enable users to connect to desktops and/or applications. It may be installed on server or desktop machines in a datacenter for most delivery methods, but it may also be installed on physical personal computers (PCs) for Remote PC Access. In some implementations, the resource delivery controller512may be made up of independent Windows services that may manage resources, applications, and/or desktops, and may optimize and balance user connections. In some embodiments, client devices202may not directly access the resource delivery controller512. Instead, the resource delivery agent504and the client access manager510may serve as intermediaries between client devices202and the resource delivery controller512. When users log on using the client access manager510, their credentials may pass through to the broker service532on the resource delivery controller512. The broker service532may then obtain profiles and available resources based on the policies set for them. FIG.5Cillustrates an example process for handling user connections within the deployment530shown inFIG.5B. 
As indicated by arrows534and535, to start a session, a user may cause the client device202to connect (via the gateway508) to the client access manager510. Such a connection may, for example, be established using the resource access application524. As noted above, the resource access application524may either be installed on the client device202or accessible from a web server via a web browser on the client device202. As indicated by arrow536, the user's credentials may then move through this pathway to access the broker service532of resource delivery controller512. In some implementations, such communications may be encrypted to protect the security of such credentials. The broker service532may determine which desktops and/or applications the user is allowed to access. After the credentials have been verified, information about available applications and/or desktops may be sent back to the client device202through the pathway between the client access manager510and the resource access application524, as indicated by arrows538,540, and541. The user of the client device202may thus be provided with a list of available applications and/or desktops. When the user selects an application or desktop from this list, an indication of the selected resource goes back down the previously described pathway to the resource delivery controller512. The resource delivery controller512may then select an appropriate resource delivery agent504to host the selected applications or desktop. As indicated by arrow542, the resource delivery controller512may send a message to the selected resource delivery agent504with the user's credentials, and may then send pertinent data about the user and the connection to the resource delivery agent504. The resource delivery agent504may then accept the connection and, as indicated by arrows544,538,540, and541, may send a set of access parameters (stored in an access parameter stack546a) back through the same pathways to the resource access application524. In particular, the set of access parameters may be collected by the client access manager510and then sent to the resource access application524where they may be stored as an access parameter file546b. In some implementations, the access parameter file546bmay be created as part of a protocol conversation between the client access manager510and the resource access application524. In other implementations, the client access manager510may convert the access parameters to the file546b, and that file546bmay then be downloaded to the client device202. In some implementations, the access parameters may remain encrypted throughout this process. The access parameter file546bthat is then stored on the client device202may be used to establish a direct connection548between the client device202and the access parameter stack546arunning on the resource delivery agent504. As illustrated, the connection548between the client device202and the resource delivery agent504may use a gateway protocol550. In some implementations, the gateway protocol550may include a feature that enables the client device202to immediately reconnect to the resource delivery agent504if the connection548is lost, rather than having to relaunch through the management infrastructure (including the client access manager510, the resource delivery controller512, etc.). After the client device202connects to the resource delivery agent504, the resource delivery agent504may notify the resource delivery controller512that the user is logged on. 
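The brokering sequence just described can be summarized with a small, self-contained Python sketch. Everything in it, including the user table, the agent list, the least-loaded selection rule, and the shape of the returned "access parameter" record, is a simplifying assumption for illustration; it is not the actual protocol spoken by the client access manager510, the broker service532, or the resource delivery agents504.

# Hypothetical, self-contained simulation of the brokering sequence described above.
# Names, data shapes, and the agent-selection rule are assumptions for illustration only.

USERS = {"alice": {"password": "secret", "resources": ["Desktop-A", "Notepad"]}}
AGENTS = [{"name": "vda-1", "load": 2}, {"name": "vda-2", "load": 5}]

def verify(user: str, password: str) -> bool:
    return USERS.get(user, {}).get("password") == password

def broker(user: str, password: str, requested: str) -> dict:
    """Return a toy 'access parameter' record for a direct client-to-agent connection."""
    if not verify(user, password):
        raise PermissionError("credentials rejected")
    if requested not in USERS[user]["resources"]:
        raise LookupError("resource not published to this user")
    # Pick the least-loaded delivery agent to host the selected resource.
    agent = min(AGENTS, key=lambda a: a["load"])
    return {"resource": requested, "agent": agent["name"], "protocol": "gateway"}

print(broker("alice", "secret", "Desktop-A"))

In the real system, the returned access parameters would travel back through the client access manager510to the resource access application524, which would then open the direct connection548to the selected agent, after which the agent notifies the controller that the user is logged on, as described above.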
The resource delivery controller512may then send this information to the database(s)520(shown inFIGS.5A,5B and5D) and the monitor service560(shown inFIG.5D) of the delivery controller512may also start logging data in the database(s)520. Such sessions between client devices202and resource delivery agents504produce data that system administrators can access through the resource manager514and/or the resource director516.FIG.5Dshows examples of paths through which the resource manager514and the resource director516may access such data in some embodiments. As indicated by the arrows552and554, administrators may use the resource manager514to access real-time data from the broker agent556of a resource delivery agent504(via the broker service532of the resource delivery controller512). The resource director516may access the same data, as indicated by arrows558and554, plus any historical data the monitor service560of the resource delivery controller512stores in the database(s)520, as indicated by arrows558,562and564. Further, as indicated by arrow566, the resource director516may also access data from the gateway508for help desk support and troubleshooting. Within the resource delivery controller512, the broker service532may report session data for every session on the machine providing real-time data. The monitor service560may also track the real-time data and store it as historical data in the database(s)520. In some implementations, the resource manager514may communicate with the broker service532and may access real-time data. The resource director516may communicate with the broker service532to access the database(s)520. An example process for enabling the delivery of applications and/or desktops will now be described. First, the machines that are to deliver applications and/or desktops may be set up with “Machine Catalogs.” Then, “Delivery Groups” may be created that specify the applications and/or desktops that are to be made available (using machines in the Machine Catalogs), and which users can access them. In some implementations, “Application Groups” may also be created to manage collections of applications. Machine Catalogs are collections of virtual or physical machines that can be managed as a single entity. These machines, and the application and/or virtual desktops on them, are the resources that may be made available to users. All the machines in a Machine Catalog may have the same operating system and the same resource delivery agent504installed. They may also have the same applications and/or virtual desktops. In some implementations, a master image may be created and used to create identical virtual machines in the catalog. For virtual machines, the provisioning method may be specified for the machines in that catalog. Valid machine types may, for example, include “Multi-session OS,” “Single-session OS,” and “Remote PC access.” A Multi-session OS machine is a virtual or physical machine with a multi-session operating system. Such a machine may be used to deliver published applications (also known as server-based hosted applications) and published desktops (also known as server-hosted desktops). These machines may allow multiple users to connect to them at one time. A Single-session OS machine is a virtual or physical machine with a single-session operating system. 
Such a machine may be used to deliver Virtual Desktop Infrastructure (VDI) desktops (desktops running single-session OSs that can optionally be personalized), virtual machine (VM)-hosted apps (applications from single-session OSs), and hosted physical desktops. Only one user at a time can connect to each of these desktops. A Remote PC access machine may enable remote users to access their physical office PCs from any device running the resource access application524. Delivery Groups may specify which users can access which applications and/or desktops on which machines. Delivery Groups may include machines from the Machine Catalogs, and Active Directory users who have access to the Site. In some implementations, users may be assigned to Delivery Groups by their Active Directory group, because Active Directory groups and Delivery Groups are ways to group users with similar requirements. Delivery Groups may contain machines from more than one Machine Catalog, and Machine Catalogs may contribute machines to more than one Delivery Group. In at least some implementations, however, individual machines can only belong to one Delivery Group at a time. The specific resources that users in the Delivery Group can access may be defined. For example, to deliver different applications to different users, all of the applications may be installed on the master image for one Machine Catalog and enough machines may be created in that catalog to distribute among several Delivery Groups. Delivery Groups may then be configured to deliver a different subset of applications that are installed on the machines. Application Groups may provide application management and resource control advantages over using more Delivery Groups. Using a "tag restriction" feature, existing machines may be used for more than one "publishing" task, saving the costs of deployment and managing additional machines. A tag restriction can be thought of as subdividing (or partitioning) the machines in a Delivery Group. Application Groups may also be helpful when isolating and troubleshooting a subset of machines in a Delivery Group. "Tags" may be strings that identify items such as machines, applications, desktops, Delivery Groups, Application Groups, and policies. After creating a tag and adding it to an item, certain operations may be tailored to apply to only items that have a specified tag. In some implementations, tags may be used to tailor search displays in the resource manager514. For example, to display only applications that have been optimized for testers, a tag named "test" may be created and may then be added (applied) to those applications. A search performed by the resource manager514may then be filtered with the tag "test". In some implementations, tags may be used to "publish" applications from an Application Group or specific desktops from a Delivery Group, considering only a subset of the machines in selected Delivery Groups. Using an Application Group or desktops with a tag restriction may be helpful when isolating and troubleshooting a subset of machines in a Delivery Group. In some implementations, tags may be used to schedule periodic restarts for a subset of machines in a Delivery Group. Using a tag restriction for machines may, for example, enable the use of new PowerShell cmdlets to configure multiple restart schedules for subsets of machines in a Delivery Group. 
In some implementations, tags may be used to tailor the application (assignment) of particular policies to a subset of machines in Delivery Groups, Delivery Group types, or organizational units (OUs) of a Site that have (or do not have) a specified tag. For example, if a particular policy is to be applied only to the more powerful workstations, a tag named "high power" may be applied to those machines and the policy may be set to apply to only machines to which the high power tag has been applied. Tags may additionally or alternatively be applied to particular Delivery Groups and one or more policies may be set to apply only to the Delivery Groups to which such tags have been applied. In some embodiments, the resource manager514may be used to create or edit a tag restriction for a desktop in a shared Delivery Group or an Application Group. In some implementations, creating such a tag restriction may involve several steps. First, a tag may be created and then added (applied) to one or more machines. Second, a group may be created or edited to include the tag restriction, thus restricting launches to machines with the applied tag. A tag restriction may extend the machine selection process of the broker service532. In particular, the broker service532may select a machine from an associated Delivery Group subject to access policy, configured user lists, zone preference, and launch readiness, plus the tag restriction (if present). For applications, the broker service532may fall back to other Delivery Groups in priority order, applying the same machine selection rules for each considered Delivery Group. FIG.5Eillustrates a simple layout in which tag restrictions may be used to limit which machines will be considered for certain desktop and application launches. In the illustrated example, a site576has one shared Delivery Group578configured with three machines580,582,584and one published desktop586, and one Application Group588configured with two applications590,592. As shown, tags may be added to each of the three machines580,582,584. A tag restriction named "Red" has been applied to the published desktop586in the shared Delivery Group578, so that the published desktop586can be launched only on machines in that Delivery Group578that have the tag "Red," i.e., the machines580and582. A tag restriction named "Orange" has been applied to the Application Group588, so that each of its applications590,592(Calculator and Notepad) can be launched only on machines in the Delivery Group578that have the tag "Orange," i.e., the machines582and584. Since the machine582has both tags (Red and Orange), it can be considered for launching the applications590,592and the desktop586. In some implementations, tags may be created, added (applied), edited, and/or deleted from selected items using the resource manager514. Tag restrictions may, for example, be configured when creating or editing desktops in Delivery Groups and/or when creating or editing Application Groups. As noted above, the resource delivery system500described in connection with FIGS. may provide virtualization solutions that give administrators control of virtual machines, applications, and security while providing anywhere access for any device. As was also noted above, the resource delivery system500may also enable end users to access applications and desktops independently of the operating systems and interfaces of the client devices202such end users are operating. 
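The tag-restriction behavior illustrated inFIG.5Eabove can be reduced to a few lines of Python. The sketch below uses the tag assignments from that example (machines580,582, and584with the tags Red and/or Orange) and deliberately ignores the other machine-selection criteria mentioned above, such as access policy, configured user lists, zone preference, and launch readiness; the machine names are placeholders for this example.

# Toy model of the FIG. 5E layout: three machines in one Delivery Group, with tags.
machines = {
    "machine-580": {"Red"},
    "machine-582": {"Red", "Orange"},
    "machine-584": {"Orange"},
}

def eligible(tag_restriction: str) -> list:
    """Machines considered for a launch under a tag restriction (other brokering
    criteria such as access policy, zone preference, and load are ignored here)."""
    return sorted(name for name, tags in machines.items() if tag_restriction in tags)

print(eligible("Red"))     # published desktop 586  -> ['machine-580', 'machine-582']
print(eligible("Orange"))  # Application Group 588  -> ['machine-582', 'machine-584']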
In some implementations, one or more components of the resource delivery system500may be provided as a service within a cloud-based computing environment.FIG.5Fillustrates an example of such an implementation. As shown inFIG.5F, one or more cloud connectors568may enable various resources at one or more locations570outside of a cloud computing environment572to interface with various components within the cloud computing environment572. As illustrated, resource location(s)570may include the machines and other resources that deliver applications and/or desktops to client devices202. As indicated by dashed lines, the resource location570may optionally include the gateway508and/or the client access manager510previously described. In the illustrated example, the resource delivery controller(s)512, the resource manager514, the resource director516, the license manager518, and the database(s)520are all provided within the cloud computing environment572. Further, as shown inFIG.5F, a configuration manager574may additionally be hosted within the cloud computing environment572in some implementations. Examples of management functions that may be performed by the configuration manager574are described below. In some implementations, the cloud computing environment572may correspond to a public cloud computing infrastructure, such as AZURE CLOUD provided by Microsoft Corporation of Redmond, Washington, or AMAZON WEB SERVICES provided by Amazon.com, Inc., of Seattle, Washington. In addition to serving as a channel for communication between the cloud computing environment572and the resource location(s)570, the cloud connectors568may enable cloud management without requiring any complex networking or infrastructure configuration such as virtual private networks (VPNs) or Internet Protocol Security (IPsec) tunnels. As noted above, the resource delivery controller(s)512may serve as the central control layer component in a deployment. The resource delivery controller(s)512may communicate through the cloud connectors568in each resource location570to distribute applications and/or desktops, authenticate and manage user access, broker connections between users and their virtual desktops and/or applications, optimize use connections, and/or load-balance use connections. In some implementations, the resource delivery controller(s)512may additionally track which users are logged on and where, which session resources the users have, and if users need to reconnect to existing applications. The resource delivery controller(s)512may further manage the state of desktops, starting and stopping them based on demand and administrative configuration, in some implementations. The configuration manager574in the cloud computing environment572may (A) enable administrators to specify which services are to be made available to users via the resource access application, (B) customize the uniform resource locator (URL) that the resource access application524is to use to access the available resources, (C) customize the appearance of the user interface provided by the resource access application, such as logos, color, and preferences, (D) specify how users are to authenticate to the system, such as using the Active Directory522, and/or (E) specify external connectivity for the resource locations570. As noted above, a resource location570may include at least one cloud connector568that serves as the communications channel between the components in the cloud computing environment572and the components in the resource location570. 
In the resource location570, the cloud connector(s) may act as a proxy for the resource delivery controller(s)512in the cloud computing environment572. As noted above, the physical or virtual machines that deliver applications and/or desktops may include resource delivery agents504a,504b. The resource delivery agents504may register with at least one cloud connector568. After registration, connections may be brokered from those resources to users. The resource delivery agents504may further establish and manage the connection between the machine and the client device202, and apply policies that are configured for the session. The resource delivery agents504may communicate session information to the cloud connector568through the broker agent556(shown inFIG.5D) in the resource delivery agent504. As noted above, in some implementations, such a broker agent556may host multiple plugins and collect real-time data. A host connection may be established that enables communication between components in the cloud computing environment572and the resource delivery agents504on the shared computing resources502. Specifications for such host connections may include (A) the address and credentials to access the host, (B) the tool that is to be used to create VMs, (C) the storage method to use, (D) the machines to use for storage, and/or (E) which network the VMs will use. F. Detailed Description of Example Embodiments of the System for Automated Transfer of Peripheral Device Operations Introduced in Section A FIG.6is flow diagram showing an example routine600that may be performed by the device control engine104of the computing system100shown inFIG.1A, in accordance with some embodiments of the present disclosure. The computing system100may include at least one processor (e.g., processor(s)302shown inFIG.3) and may include at least one computer-readable medium, which may be encoded with instructions which, when executed by the at least one processor of the computing system100, may cause the computing system100to perform the functionality of the device control engine104described herein. As noted in Section A, the computing system100may take on any of numerous forms, and the device control engine104may be located at any of a number of locations within the computing system100. The device control engine104may be enabled by the user102, such as through selecting a user interface element as part of the graphical user interface (GUI) for the computing system100. For example, instead of the user102directly selecting a device as the active device, the user102may select a device monitor option154, as shown inFIGS.1C and1D, so as to trigger the device control engine104to automatically select the active device. The device control engine104may determine the device to select as the active device without further interaction from the user102. The device control engine104may be implemented for multiple types of devices. The routine600may thus be performed for respective device types. Referring toFIG.6, the routine600may begin (at a step601) when the device control engine104is enabled, e.g., in response to the user102selecting the device monitor option154. At a decision step602, the device control engine104may determine whether a device has been selected. 
When the device control engine104determines (at the decision step602) that a device has not been selected then, per a step606of the routine600, the device control engine104may read an effective time duration from a configuration file or database (e.g., the database112shown inFIG.1A) corresponding to the device type. The effective time duration may be a time period, such as the most recent thirty days, used to determine the most frequently used devices within that time period. At a step608, a preferred device may be identified. A preferred device may be a device from among a plurality of devices of the same type which the user has identified as preferential for use, either by a manual selection or based on historical usage. The preferred device may be determined, for example, based on a setting provided by the user102. Additionally or alternatively, the preferred device may be determined based on the device used most frequently for the effective time duration. In some implementations, the device control engine104may process the user's historical data of the effective time duration for device use using one or more machine learning models. Such historical data may, for example, be stored in the database112. At a step610, the determined preferred device may be set as the active device for the computing system100which is then utilized at step632, as an input/output device for the computing system100. As noted, in some implementations, a trained machine learning model may be used at the step608to identify the preferred device. Device usage data may be collected over time for the user and stored in a database (e.g., the database112) as information or data (e.g., device usage history) with the effective time duration data. As discussed above in connection withFIG.5A, in some implementations, the resource delivery system500may include one or more monitoring agents (e.g., as a part of the resource access application524) for performing monitoring, measurement, and data collection activities on a client device202. Similar monitoring agents may also be employed to monitor activities of a client device202when the computing system100is embodied in other types of computing environments. In some implementations, such monitoring agent(s) may collect usage data for the connected devices. This usage data may include information such as time of day, location, calendar data, application usage, etc. The collected device usage data may be used to train the machine learning model based on the user's behaviors. For example, a user may commonly listen to music on a speaker at 9:00 PM, and, based on that parameter, among others, the machine learning model may be trained to identify the preferred device as a speaker when the user chooses a music application at night. In another example, a user may typically use a wireless headset for conference calls. Based on usage data indicative of that behavior, the machine learning model may be trained to identify the preferred device as the wireless headset when the user has a conference call scheduled. Accordingly, in some implementations, the device control engine104may use the trained machine learning model (at the step608) to determine the preferred device based on one or more contextual inputs, such as the application that is in use or the time of day. 
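As a concrete illustration of the frequency-based selection described above (the machine learning approach is not shown here), the following Python sketch picks the most frequently used device within the effective time duration. The usage records, field names, and thirty-day default are assumptions made for the example rather than the actual contents of the database112.

from collections import Counter
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

# Hypothetical usage records of the kind that might accumulate in the database 112:
# (device name, timestamp of use).
usage_history: List[Tuple[str, datetime]] = [
    ("wired-headset", datetime(2023, 5, 1, 9, 0)),
    ("bluetooth-headset", datetime(2023, 5, 20, 21, 0)),
    ("bluetooth-headset", datetime(2023, 5, 22, 21, 5)),
    ("wired-headset", datetime(2023, 3, 1, 10, 0)),   # falls outside the effective duration
]

def preferred_device(history: List[Tuple[str, datetime]],
                     effective_days: int = 30,
                     now: Optional[datetime] = None) -> Optional[str]:
    """Return the most frequently used device within the effective time duration, or None
    if there is no qualifying usage (a caller might then fall back to a user-configured
    preference, per step 608)."""
    now = now or datetime(2023, 5, 25)   # fixed here so the example is deterministic
    cutoff = now - timedelta(days=effective_days)
    counts = Counter(device for device, used_at in history if used_at >= cutoff)
    return counts.most_common(1)[0][0] if counts else None

print(preferred_device(usage_history))   # -> bluetooth-headset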
If, at the decision step602, the device control engine104determines that a device has been selected, then the routine600may proceed to a step612, at which the device control engine104may identify the presently connected devices of the pertinent type (e.g., audio input/output devices). As explained in more detail below, in some implementations, the device control engine104may select one or more techniques for detecting switches between simultaneously connected devices based on one or more characteristics of the connected devices identified at the step612. For example, if the device control engine104determines (per a decision step614) that one or more of the connected devices is a wired headset, then the device control engine104may employ a camera (e.g., per a step616) to monitor for use. In another example, if the device control engine104determines (per a decision step622) that one or more Bluetooth® devices are connected, then the device control engine104may monitor Bluetooth® signal strength (e.g., per a step626) to monitor for use. At the decision step614, the device control engine104may determine if one or more of the identified connected devices is a wired device. If there is at least one connected wired devices, then the device control engine104may, at the step616, monitor the user's behavior to determine if the user has switched devices. For example, in some implementations, a camera may be used to capture images of the user and determine user behavior. For instance, when a headset is connected, the camera may capture images of the user that include wearing a first headset, not wearing a headset, and then wearing a second headset. Images such as these may be provided to a trained machine learning model. Such a machine learning model may be used to quickly identify (e.g., per a decision step618) when the user switches devices. In some implementations, the steps616and618may additionally or alternatively involve monitoring input from connected devices to determine whether a switch between devices has occurred. For example, a microphone of a connected device may begin receiving sounds, such as the user's voice, when the user places the microphone near the user's mouth. The device control engine104may take such received input into account when determining whether a user is currently using a given device. New input may be detected, for example, when a user operates a manual switch to begin operating a particular device, or simply when a user begins providing input (e.g., speaking into a microphone) to a different connected device. Other types of sensors may additionally or alternatively be used to determine when a user has switched devices. For instance, accelerometers and/or gyroscopes in a device may provide data about the movement of a device. For example, movement sensors of a headset may provide data indicating when a user has removed the headset. The device control engine104may receive this indication, e.g., at the step616, and use such indication (e.g., at the decision step618) to determine that a device switch has occurred. Per the decision step618, if a switch is not detected, then the device control engine104may return to the step616and continue to monitor the user's behavior. If, at the decision step618, the device control engine104detects a device switch, then the routine600may proceed to a step620, at which the device control engine104may perform an analysis to determine if the newly-used device is wired. 
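The monitoring performed at steps616and618can be thought of as watching a time-ordered stream of observations about which device, if any, currently appears to be in use, whether those observations come from camera images, microphone input, or motion sensors. The following Python sketch detects a switch from such a stream; the observation format and the single-switch return value are assumptions made for illustration rather than the behavior of the device control engine104itself.

from typing import Iterable, Optional, Tuple

def detect_switch(observations: Iterable[Optional[str]]) -> Optional[Tuple[str, str]]:
    """Scan a time-ordered stream of 'device currently observed in use' readings
    (None meaning no device observed, e.g. between taking one headset off and
    putting another on) and return the (old, new) pair for the first switch found."""
    last_seen: Optional[str] = None
    for device in observations:
        if device is None:
            continue
        if last_seen is not None and device != last_seen:
            return last_seen, device
        last_seen = device
    return None

# Example: readings derived from camera frames or motion sensors (steps 616-618).
readings = ["headset-1", "headset-1", None, None, "headset-2", "headset-2"]
print(detect_switch(readings))   # -> ('headset-1', 'headset-2')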
If, at a decision step622, the device control engine104determines that the new device is wired, then the routine600may proceed to a step624, at which the device control engine104may, in at least some implementations, perform an image matching operation to identify the new wired device in use. In other implementations, the device control engine104may not employ image matching to identify the new wired device, such as if a camera is not present or the device is not within view of the camera. In such implementations, one or more other sensors, such as movement sensors, audio sensors, etc., may be used to identify the new wired device. As noted, at the step624, the device control engine104may perform image data matching to identify which wired device is in use when multiple wired devices are concurrently connected. For example, headsets may have distinguishing characteristics such shapes, colors, lights, etc. In some implementations, the device control engine104may use a trained classifier model to receive captured images of individual devices, such as the headsets, and to identify the particular devices (e.g., particular headset model numbers) that are represented in the respective images. For example, such a machine learning model may be trained to distinguish between various different models of headsets, such as by recognizing characteristics (e.g., shapes, colors, proportions, etc.) that distinguish the different models. The trained machine learning model may additionally or alternatively be trained to recognize general characteristics about the device in use, such as detecting a wire or wires in the captured images and thus determining the device in use is a wired device. Depending on the type of device, different sensors, or the device itself, may be used to determine which device the user is currently operating. For example, in a circumstance in which the connected devices include multiple cameras, the device control engine104may determine, based on inputs received from the cameras, which camera is most likely to be the one the user expects to be in use. Such a determination may, for example, utilize one or more trained machine learning models to evaluate image data and identify images in which the user is present and/or to compare multiple images and determine which image includes the best representation of the front of the user's face. For example, in some implementations, the device control engine104may determine the camera in use by the user by determining which camera detects a person, or perhaps even which camera detects a specific person, e.g., the user102. In another example, the device control engine104may determine the camera in use by the user by identifying, based on received image data, the camera at which the user is looking the most directly. In some implementations, other device sensors may additionally or alternatively be used to determine the device in use when multiple devices are connected. For instance, in some implementations, one or more motion sensors such as accelerometers and/or gyroscopes may be used to detect when a user interacts with a device. For example, a headset may include motion sensors for detecting when a user puts on or takes off the headset. This motion sensor data may be used by the device control engine104to determine which device the user102has started using and which device the user102has stopped using. 
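One way to picture the image-matching operation of the step624described above is shown below: a separately trained classifier (assumed to exist and not provided here) labels each recent camera frame with the device model it recognizes, and the labels are aggregated so that a device is reported only when the model sees it consistently. The frame objects, the classifier callable, and the agreement threshold are all assumptions for this sketch.

from typing import Callable, Optional, Sequence

def identify_device_in_use(frames: Sequence[object],
                           classify: Callable[[object], str],
                           min_agreement: float = 0.6) -> Optional[str]:
    """Aggregate per-frame predictions from a trained image classifier and return the
    device label seen most often, provided it appears in enough of the frames."""
    if not frames:
        return None
    labels = [classify(frame) for frame in frames]
    best = max(set(labels), key=labels.count)
    return best if labels.count(best) / len(labels) >= min_agreement else None

# Example with a stand-in classifier that always reports the same headset model.
fake_frames = ["frame-1", "frame-2", "frame-3"]
print(identify_device_in_use(fake_frames, lambda frame: "headset-model-A"))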
If, at the decision step614, the device control engine104determines that there are no wired devices, or if, at the decision step622, the device control engine104determines the new device is not wired, then the routine600may proceed to a step626, at which the device control engine104may use wireless data from the devices to determine user behavior. For example, if one or more of the wireless devices are connected to the computing system100by Bluetooth®, then a distance calculation may be used to determine the user behavior, or the switch, and identify the new wireless device in use by the user. Although not illustrated inFIG.6, in some implementations, prior to proceeding to the step626, the device control engine104may perform one or more of the operations described above in connection with the steps616and618after determining there are no wired devices at the step614. With respect to the step626, in some implementations, the device control engine104may use the measured power of the Bluetooth signal to perform a calculation to determine a distance of the one or more devices from the computing system100. The following formula of Equation 1 may be used to determine the Distance of a Bluetooth enabled device from the computing system100based on the Bluetooth signal. The device may provide a Received Signal Strength Indicator (RSSI) that is the strength of the signal from the computing system100as received by the device. In Equation 1, Measured Power is a factory calibrated constant for the particular device that indicates the expected RSSI at a distance of 1 meter and n is another constant that may be adjusted based on environmental factors.

Distance = 10^((Measured Power − RSSI) / (10 × n))     Equation (1)

In some implementations, rather than relying on a single distance estimate, a variance of the distance may be used to determine the device in use. For example, when two Bluetooth devices of the same type are concurrently connected to the computing system, the user is most likely utilizing one device at a time. The distance of the in-use device from a client device202may be changing, even if only slightly, such as with a headset that moves with the user's head, whereas a device not in use is not moving and will report very little or no variance of distance. Thus, the device in use may be identified based on which device has a greater variance of distance. To determine this distance variance, the device control engine104may determine how far a set of numbers is spread out from their average value for individual devices and choose the device with the larger variance. A first formula of Equation 2 may be used to determine the difference X of the distance value for a time interval, such as every 10 milliseconds, where Distance_(i+1) and Distance_i are two calculated distance values for consecutive time intervals.

X_i = Distance_(i+1) − Distance_i     Equation (2)

The average change in distance Mean(X) may then be calculated with a second formula of Equation 3, where n is the size of the set of difference calculations for a time period.

Mean(X) = (Σ_(i=1..n) X_i) / n     Equation (3)

Using the average change in distance, the Variance(X) for a device may then be calculated with a third formula of Equation 4.

Variance(X) = (Σ_(i=1..n) (X_i − Mean(X))^2) / (n − 1)     Equation (4)

Finally, with a fourth formula of Equation 5, the maximum variance value Result may be determined by comparing the variance values computed for the individual devices, thereby identifying the device in use.

Result = Max(Variance(X)_1, Variance(X)_2, . . . , Variance(X)_m)     Equation (5)

where Variance(X)_j denotes the distance variance computed for the j-th of the m concurrently connected devices. 
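The calculations of Equations (1) through (5) can be sketched in a few lines of Python. The measured power and environmental constant below are illustrative values only, Python's statistics.variance (which divides by n − 1) is used for the sample variance of Equation (4), and the RSSI samples are fabricated solely to show that a worn device exhibits the larger distance variance.

from statistics import variance
from typing import Dict, List

def rssi_to_distance(rssi: float, measured_power: float = -59.0, n: float = 2.0) -> float:
    """Equation (1): estimated distance from a Bluetooth RSSI sample. measured_power
    (expected RSSI at 1 meter) and n (environmental factor) are illustrative constants;
    real values are device- and environment-specific."""
    return 10 ** ((measured_power - rssi) / (10 * n))

def distance_variance(rssi_samples: List[float]) -> float:
    """Equations (2)-(4): variance of the change in estimated distance between
    consecutive sampling intervals."""
    distances = [rssi_to_distance(r) for r in rssi_samples]
    diffs = [b - a for a, b in zip(distances, distances[1:])]   # Equation (2)
    return variance(diffs) if len(diffs) > 1 else 0.0           # Equations (3)-(4)

def device_in_use(samples_by_device: Dict[str, List[float]]) -> str:
    """Equation (5): the device whose estimated distance varies the most is taken as in use."""
    return max(samples_by_device, key=lambda d: distance_variance(samples_by_device[d]))

samples = {
    "headset-A": [-60, -61, -59, -62, -58],   # being worn: distance fluctuates
    "headset-B": [-70, -70, -70, -70, -70],   # idle on the desk: little or no movement
}
print(device_in_use(samples))   # -> headset-A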
At a step628of the routine600, the device control engine104may use the determinations from the step624and/or the step626to determine the device in use. In some implementations, if the device control engine104was unable to determine a device in use pursuant to the step624or the step626, the device control engine104may perform the steps606and608to determine a preferred device as a default. At a step630of the routine600, the device control engine104may set the device in use as the active device for the computing system100and/or the application106. This may include updating a graphical user interface element to display the name of the active device, such as shown inFIG.1D. In some implementations, a graphical user interface element, such as a notification, may be displayed to notify the user of the change of active device. In some implementations, the notification may prompt the user to confirm the active device switch. The user experience is improved as the device control engine104automatically identifies the switch between devices. Instead of requiring the user102to both physically switch devices and provide a device change indication to the computing system100or application106, the device control engine104may automatically identify the device switch and transfer operations to the active device. The user may be acquainted with the procedures for switching frequently used devices on the computing system100. However, the user may be less familiar with infrequently used devices, which in some instances may result in more missed interactions as the user attempts to indicate the device switch. Utilizing the device control engine104, the user experience may be improved, regardless of the user's familiarity with the procedures for indicating a device switch with the computing system100or application106. In some implementations, in connection with the step630, the time of use for the previous, or switched from, device may be recorded in the database112, such as for determining the effective time duration in step606and step608. At a step632, the device control engine104may cause the computing system100and/or the application106to utilize the active device for data input and/or output. In some implementations, this may cause the computing system100and/or the application106to receive input, such as audio data from a microphone or image data from a camera, from the identified active device instead of the previous, or switched from, device. In some implementations, the computing system100and/or the application106may provide output to the identified active device instead of the previous, or switched from, device. From the step632, the routine600may return to the step612, at which the device control engine104may identify the presently connected devices and continue to monitor for a device switch. As noted above in Section A, in some implementations,FIGS.1A and1Bmay include a client device202to which virtualized applications and/or desktops may be delivered via a resource delivery agent504of a shared computing resource502, e.g., as described in Section D. 
In some such implementations, the device control engine104may be included, at least in part, within the resource access application524(depicted inFIGS.5A-D) associated with a client device202to which multiple peripheral devices of a similar type are connected (e.g., simultaneously connected), and the application106with which an active device (selected from amongst those simultaneously connected peripheral devices) communicates may be hosted on a shared computing resource502(also depicted inFIGS.5A-D). As such, as described in more detail below, in some such implementations, the device control engine104may perform the routine600to determine the active device of the client device202that is to interact with the remotely hosted application106via one or more virtual channels established between the resource access application524and the resource delivery agent504. The resource delivery agent504of a shared computing resource502, such as described in Section D, may receive resource data of the client device202, such as devices connected to the client device202. The resource access application524of the client device202and resource delivery agent504may communicate via TCP. In some implementations, the device control engine104, as part of the resource access application524, may identify the new active device to the resource delivery agent504when a device switch occurs at the client device202, thus enabling the resource delivery agent504to receive and send data to the newly-used device, on the same virtual channel as the previous device. For example, a virtual audio channel may be established for a first headset connected to the client device202to communicate data with the resource delivery agent504. When the device control engine104determines the user has switched to a second headset, the device control engine104may communicate data indicating the second headset as the device in use to the resource delivery agent504. The resource delivery agent504may then identify the second headset as the active device and continue to communicate on the same virtual audio channel. G. Example Implementations of Methods, Systems, and Computer-Readable Media in Accordance with the Present Disclosure The following paragraphs (M1) through (M12) describe examples of methods that may be implemented in accordance with the present disclosure. (M1) A method may be performed that involves while both a first device of a first type and a second device of the first type are simultaneously connected to a client device, using the first device, rather than the second device, as an active device of the first type for at least one application, the first device and the second device being peripheral devices; while both the first device and the second device remain connected to the client device, determining a switch from the first device to the second device by a user; and based at least in part on the switch from the first device to the second device, using the second device, rather than the first device, as the active device of the first type for the at least one application. (M2) A method may be performed as described in paragraph (M1), and may further involve the at least one application being executed by a remote server and delivered to the client device as a virtual application. 
(M3) A method may be performed as described in paragraph (M1) or paragraph (M2), wherein determining the switch from the first device to the second device may further involve receiving data from at least one sensor; and determining, based at least in part on the data, that the user has stopped use of the first device and begun use of the second device. (M4) A method may be performed as described in any of paragraphs (M1) through (M3), wherein the first device and second device may have different connection types, and determining the switch from the first device to the second device may further involve receiving first data from a first sensor monitoring a first characteristic of the first device; receiving second data from a second sensor monitoring a second characteristic of the second device; and determining, based at least in part on the first data and the second data, that the user has stopped use of the first device and begun use of the second device. (M5) A method may be performed as described in any of paragraphs (M1) through (M4), and may further involve receiving first wireless signal data and second wireless signal data from the first device; and wherein determining the switch from the first device to the second device may be based at least in part on the first wireless signal data and the second wireless signal data. (M6) A method may be performed as described in paragraph (M5), wherein determining the switch from the first device to the second device may further involve determining at least a first distance value based on the first wireless signal data; determining at least a second distance value based on the second wireless signal data; and wherein determining the switch from the first device to the second device may be further based at least in part on the first distance value and the second distance value. (M7) A method may be performed as described in any of paragraphs (M1) through (M6), wherein determining the switch from the first device to the second device may further involve receiving first wireless signal data from the first device; determining a first distance variance for the first device based at least in part on the first wireless signal data; receiving second wireless signal data from the second device; determining a second distance variance for the second device based at least in part on the first wireless signal data; and determining the second distance variance is greater than the first distance variance. (M8) A method may be performed as described in any of paragraphs (M1) through (M7), and may further involve receiving at least a first image captured by the client device; wherein determining the switch from the first device to the second device may be based at least in part on the first image. (M9) A method may be performed as described in paragraph (M8), wherein determining the switch from the first device to the second device may further involve determining that the second device may be represented in the first image. (M10) A method may be performed as described in paragraph (M9), and may further involve receiving a second image captured by the client device prior to capturing the first image; wherein determining the switch from the first device to the second device may be further based at least in part on the second image. (M11) A method may be performed as described in paragraph (M10), wherein determining the switch from the first device to the second device may further involve determining that the first device may be represented in the second image. 
(M12) A method may be performed as described in any of paragraphs (M1) through (M11), and may further involve selecting the first device as the active device of the first type for the at least one application based at least in part on at least one of a preference of a user or a history of usage of devices of the first type. The following paragraphs (S1) through (S12) describe examples of systems and devices that may be implemented in accordance with the present disclosure. (S1) A system may comprise at least one processor, and at least one computer-readable medium encoded with instructions which, when executed by the at least one processor, cause the system, while both a first device of a first type and a second device of the first type are simultaneously connected to a client device, to use the first device, rather than the second device, as an active device of the first type for at least one application, the first device and the second device being peripheral devices; while both the first device and the second device remain connected to the client device, to determine a switch from the first device to the second device by a user; and based at least in part on the switch from the first device to the second device, to use the second device, rather than the first device, as the active device of the first type for the at least one application. (S2) A system may be configured as described in paragraph (S1), wherein the at least one application may be executed by a remote server and delivered to the client device as a virtual application. (S3) A system may be configured as described in paragraph (S1) or paragraph (S2), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by receiving data from at least one sensor; and determining, based at least in part on the data, that the user has stopped use of the first device and begun use of the second device. (S4) A system may be configured as described in any of paragraphs (S1) through (S3), wherein the first device and second device may have different connection types, and the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by receiving first data from a first sensor monitoring a first characteristic of the first device; receiving second data from a second sensor monitoring a second characteristic of the second device; and determining, based at least in part on the first data and the second data, that the user has stopped use of the first device and begun use of the second device. (S5) A system may be configured as described in any of paragraphs (S1) through (S4), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to receive first wireless signal data and second wireless signal data from the first device, and to determine the switch from the first device to the second device based at least in part on the first wireless signal data and the second wireless signal data. 
(S6) A system may be configured as described in paragraph (S5), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by determining at least a first distance value based on the first wireless signal data; determining at least a second distance value based on the second wireless signal data; and determining the switch from the first device to the second device further based at least in part on the first distance value and the second distance value. (S7) A system may be configured as described in any of paragraphs (S1) through (S6), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by receiving first wireless signal data from the first device; determining a first distance variance for the first device based at least in part on the first wireless signal data; receiving second wireless signal data from the second device; determining a second distance variance for the second device based at least in part on the first wireless signal data; and determining the second distance variance is greater than the first distance variance. (S8) A system may be configured as described in any of paragraphs (S1) through (S7), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to receive at least a first image captured by the client device; and to determine the switch from the first device to the second device based at least in part on the first image. (S9) A system may be configured as described in paragraph (S8), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by determining that the second device may be represented in the first image. (S10) A system may be configured as described in paragraph (S9), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to receive a second image captured by the client device prior to capturing the first image; and to determine the switch from the first device to the second device further based at least in part on the second image. (S11) A system may be configured as described in paragraph (S10), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by determining that the first device may be represented in the second image. 
(S12) A system may be configured as described in any of paragraphs (S1) through (S11), wherein the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to select the first device as the active device of the first type for the at least one application based at least in part on at least one of a preference of a user or a history of usage of devices of the first type. The following paragraphs (CRM1) through (CRM12) describe examples of computer-readable media that may be implemented in accordance with the present disclosure. (CRM1) At least one non-transitory computer-readable medium may be encoded with instructions which, when executed by the at least one processor of a system, cause the system, while both a first device of a first type and a second device of the first type are simultaneously connected to a client device, to use the first device, rather than the second device, as an active device of the first type for at least one application, the first device and the second device being peripheral devices; while both the first device and the second device remain connected to the client device, to determine a switch from the first device to the second device by a user; and based at least in part on the switch from the first device to the second device, to use the second device, rather than the first device, as the active device of the first type for the at least one application. (CRM2) At least one non-transitory computer-readable medium may be configured as described in paragraph (CRM1), wherein the at least one application may be executed by a remote server and delivered to the client device as a virtual application. (CRM3) At least one non-transitory computer-readable medium may be configured as described in paragraph (CRM1) or paragraph (CRM2), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by receiving data from at least one sensor; and determining, based at least in part on the data, that the user has stopped use of the first device and begun use of the second device. (CRM4) At least one non-transitory computer-readable medium may be configured as described in any of paragraphs (CRM1) through (CRM3), wherein the first device and second device may have different connection types, and the at least one computer-readable medium may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by receiving first data from a first sensor monitoring a first characteristic of the first device; receiving second data from a second sensor monitoring a second characteristic of the second device; and determining, based at least in part on the first data and the second data, that the user has stopped use of the first device and begun use of the second device. 
(CRM5) At least one non-transitory computer-readable medium may be configured as described in any of paragraphs (CRM1) through (CRM4), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to receive first wireless signal data and second wireless signal data from the first device, and to determine the switch from the first device to the second device based at least in part on the first wireless signal data and the second wireless signal data. (CRM6) At least one non-transitory computer-readable medium may be configured as described in paragraph (CRM5), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by determining at least a first distance value based on the first wireless signal data; determining at least a second distance value based on the second wireless signal data; and determining the switch from the first device to the second device further based at least in part on the first distance value and the second distance value. (CRM7) At least one non-transitory computer-readable medium may be configured as described in any of paragraphs (CRM1) through (CRM6), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by receiving first wireless signal data from the first device; determining a first distance variance for the first device based at least in part on the first wireless signal data; receiving second wireless signal data from the second device; determining a second distance variance for the second device based at least in part on the first wireless signal data; and determining the second distance variance is greater than the first distance variance. (CRM8) At least one non-transitory computer-readable medium may be configured as described in any of paragraphs (CRM1) through (CRM7), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to receive at least a first image captured by the client device; and to determine the switch from the first device to the second device based at least in part on the first image. (CRM9) At least one non-transitory computer-readable medium may be configured as described in paragraph (CRM8), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by determining that the second device may be represented in the first image. (CRM10) At least one non-transitory computer-readable medium may be configured as described in paragraph (CRM9), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to receive a second image captured by the client device prior to capturing the first image; and to determine the switch from the first device to the second device further based at least in part on the second image. 
(CRM11) At least one non-transitory computer-readable medium may be configured as described in paragraph (CRM10), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to determine the switch from the first device to the second device at least in part by determining that the first device may be represented in the second image. (CRM12) At least one non-transitory computer-readable medium may be configured as described in any of paragraphs (CRM1) through (CRM11), and may be further encoded with additional instructions which, when executed by the at least one processor, further cause the system to select the first device as the active device of the first type for the at least one application based at least in part on at least one of a preference of a user or a history of usage of devices of the first type. Having thus described several aspects of at least one embodiment, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description and drawings are by way of example only. Various aspects of the present disclosure may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and is therefore not limited in this application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. Also, the disclosed aspects may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. Use of ordinal terms such as “first,” “second,” “third,” etc. in the claims to modify a claim element does not by itself connote any priority, precedence or order of one claim element over another or the temporal order in which acts of a method are performed, but are used merely as labels to distinguish one claimed element having a certain name from another element having a same name (but for use of the ordinal term) to distinguish the claim elements. Also, the phraseology and terminology used herein is used for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
Throughout the drawings, identical reference numbers may designate similar, but not necessarily identical, elements. An index number "N" appended to some of the reference numerals may be understood to merely denote plurality and may not necessarily represent the same quantity for each reference numeral having such an index number "N". Additionally, use herein of a reference numeral without an index number, where such reference numeral is referred to elsewhere with an index number, may be a general reference to the corresponding plural elements, collectively or individually. In another example, an index number of "I," "M," etc. can be used in place of index number N. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.
DETAILED DESCRIPTION
In the present disclosure, use of the term "a," "an," or "the" is intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, the terms "includes," "including," "comprises," "comprising," "have," or "having," when used in this disclosure, specify the presence of the stated elements but do not preclude the presence or addition of other elements. Various examples described herein provide a process that enables a manufacturer's customer to verify that a platform (e.g., a device) the customer receives at their shipping bay has not been tampered with since it exited the manufacturing chain. The approach extends a secure platform device identity (DevID) by also securing the platform's integrity fingerprint before the platform leaves the manufacturing line. The platform's integrity fingerprint can then be used by the customer to verify that the received platform has not been modified during shipping. Customers can be concerned with detecting supply chain attacks. This disclosure provides examples of provisioning platforms with an integrity manifest, captured during manufacturing of a given platform, that is authenticated by a manufacturer (e.g., via a cryptographic signature). Using that authenticated integrity manifest along with verification software, customers can validate that the platform they received is in the same state (e.g., the same firmware installed, no modification of components such as daughterboards, etc.) as when it left manufacturing. Additionally, this may provide assurance to a customer about which manufacturing facilities were used (e.g., that the platform was manufactured in a given country, by a particular contract manufacturer, etc.). Thus, the supply chain can be certified using this approach. Note that although the customer use case is described as a benefit of the system throughout, it is contemplated that the approaches described herein can be used by others to verify the integrity of the platform in different use cases. Examples described herein include use of a security co-processor such as a Trusted Platform Module (TPM). The security co-processor can be implemented according to a specification that provides the features required to support the examples described herein (e.g., software tamper-proof storage for integrity measurements, protected cryptographic keys and engine, etc.).
A platform's integrity manifest can be provisioned during manufacturing, and customer verification of the received platform can be performed upon receiving the platform. The platform is provisioned when it is still in a trusted environment with the correct manifest. Then, later (e.g., after shipping to the customer), data is retrieved from the platform and processed to appraise the platform in order to detect any tampering that occurred while the platform was shipped to the customer. As used herein, a "platform" can be considered a device that includes a security co-processor, where the device includes hardware components and one or more firmware or software components to execute on one or more of the hardware components.
Technology Explanation and Definitions
Attestation, in general, refers to a process by which one electronic device, called a "verifier", challenges another electronic device, such as a computer platform, to check whether the platform is trustworthy. The attestation relies on measurements of the platform. More specifically, before the platform is challenged by the verifier, the platform performs measurements of itself, relying on a trusted computing base of the platform. These measurements form a log of measurements that are stored by the platform in a platform memory. In this context, a "trusted computing base" refers to a set of hardware, firmware and/or software components of the platform, which form the core of security for the platform. In other words, the trusted computing base may be inherently trusted software, hardware, or some combination thereof. After the platform performs the trusted computing base measurements, the platform may securely store cryptographic hashes of its measurements in a secure memory of the platform, such as platform configuration registers (PCRs) inside a security component (e.g., a security co-processor or a trusted platform module) of the platform. The platform may perform the measurements at a particular power state of the platform such as, for example, when the platform boots up. The verifier initiates the challenge of the platform by providing an attestation request to the platform, and a security component of the platform responds to the attestation request with an authenticated digest of the measurement hashes. In this context, an "authenticated digest" refers to a set of measurements of the platform, which are signed by a security component of the platform. A TPM quote (also called a "PCR quote" herein), containing PCR values, is an example of an authenticated digest, although the authenticated digest may take on other forms, in accordance with further example implementations. In accordance with example implementations, the attestation request can contain a nonce, which is a one-time number that is used for identification purposes so that the corresponding authenticated digest may be associated with the request and not be a replayed version of a previous authenticated digest. In this manner, in accordance with example implementations, the authenticated digest that is provided by the secure component of the platform contains the nonce from the attestation request (to verify that the authenticated digest was generated after the attestation request containing the nonce) and the measurement hashes (e.g., the platform's PCR content). Moreover, the secure component of the platform digitally signs the authenticated digest so that a corresponding digital signature is provided with the authenticated digest.
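By way of a non-limiting illustration, the challenge-and-response exchange described above can be sketched as follows. The sketch assumes SHA-256 measurement hashes, a Python environment with the cryptography package installed, and an ECDSA key standing in for the key held by the security component; names such as build_quote and verify_quote are illustrative only and do not correspond to an actual TPM interface.

import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The ECDSA key below stands in for the key held by the platform's security component.
attestation_key = ec.generate_private_key(ec.SECP256R1())

def build_quote(pcrs: dict, nonce: bytes):
    """Platform side: produce an authenticated digest covering the nonce and the PCR values."""
    digest = hashlib.sha256()
    digest.update(nonce)
    for index in sorted(pcrs):
        digest.update(index.to_bytes(4, "big") + pcrs[index])
    quoted = digest.digest()
    signature = attestation_key.sign(quoted, ec.ECDSA(hashes.SHA256()))
    return quoted, signature

def verify_quote(quoted: bytes, signature: bytes, public_key) -> bool:
    """Verifier side: confirm the digest was signed by the platform's key."""
    try:
        public_key.verify(signature, quoted, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# The verifier issues a one-time nonce; the platform answers with a quote over its PCRs.
# In a full implementation the verifier would also recompute the digest from the reported
# PCR values and its own nonce before trusting the measurements.
nonce = os.urandom(16)
pcrs = {0: hashlib.sha256(b"firmware measurement").digest()}
quoted, signature = build_quote(pcrs, nonce)
assert verify_quote(quoted, signature, attestation_key.public_key())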
The platform may also respond to the attestation request with the log of measurements, which the verifier may validate using the measurement hashes. The above-described attestation process may allow the verifier to determine whether tampering has occurred with a platform. For example, if tampering has occurred, this tampering may be detectable by the verifier due to the PCR content being reset (due to a resetting of the platform) or due to the measurement hash values not corresponding to the log of measurements. A device can be processed during its manufacture to include a manufacturer-signed digital certificate associated with the device, which can serve as a trusted form of identification for the device. The process typically involves generating and installing the manufacturer-signed digital certificate at some point during manufacture of the device (e.g., before the computing device is shipped to a retailer or customer). In some examples, customer information can be included in the manufacturer-signed digital certificate. In some examples, the manufacturer-signed digital certificate may also be based on device identification information about the device, for example, a model number, a serial number, an external color, a hash of a firmware, a date of manufacture, and the like. The manufacturer-signed digital certificate can serve as a device identity for a device or platform. In some examples, a TPM is a security co-processor with which one can only interact through an Input/Output (I/O) buffer that uses a well-defined formatting. The TPM behavior is exclusively based on the (series of) command(s) it receives. Thus any guarantee one can get from the TPM is based on the commands issued. Prototypical examples are the measured boot and remote attestation mechanisms, which are built over the use of the TPM Platform Configuration Registers to remotely verify the state of a platform. PCRs are updated by hashing their previous value with the information to store. In the measured boot case, PCRs are used to store firmware and software integrity value. Using PCRs prevents removal of firmware or software execution events. In the examples described herein, a firmware or software component cannot erase its measurement event from the PCRs. Further, a manufacturer can predict expected PCR value for a device or platform because the manufacturer would know what firmware or software should be executed. As used herein, firmware can include multiple components that can be programmed, for example a field programmable gate array bitstream, a Serial Peripheral Interface (SPI) flash, an electronically erasable programmable read only memory (EEPROM), etc. in the platform. Unused programmable logic can also be measured (and can be a known value, for example 0 or a repeating pattern). As used herein, a “certificate authority” (CA) is a trusted entity that issues digital certificates. Digital certificates are data files that can be used to cryptographically link an entity with a public key. In a public key infrastructure (PKI), CAs can issue certificates that can be used to authenticate via public means. In some examples, a digital certificate can include an owner's identification information (e.g., a name, an address, etc.) and a digital signature of an entity (e.g., a certificate authority, a manufacturer, or the like). A digital certificate may also include the entity's public key, an expiration date for the certificate, etc. 
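Referring back to the register update rule noted above, in which a PCR is updated by hashing its previous value with the information to store, a minimal sketch of that extend operation is shown below. SHA-256 and a zero-initialized starting value are assumptions made for illustration, not requirements taken from this disclosure.

import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Return the new register value: the hash of the previous value and the measurement."""
    return hashlib.sha256(pcr + measurement).digest()

# Registers are assumed to start at a known value, all zeros here.
pcr0 = bytes(32)
for event in (b"boot block", b"platform firmware", b"option ROMs"):
    pcr0 = extend(pcr0, hashlib.sha256(event).digest())

# Because every event is folded into the running hash, a component cannot later remove
# its own measurement; omitting or reordering any event yields a different final value.
print(pcr0.hex())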
A person or entity that needs a digital certificate can request one from a certificate authority. The CA can verify the applicant's identity and generate a digital certificate for the applicant and digitally signs that certificate with the CA's private key. The digital certificate can then be authenticated using the CA's public key. A verifier can verify that a certificate is signed by a particular authority or private key using a public key. In certain examples, a CA activity can begin with a root certificate. The root certificate is used as the basis of trust for certificates issued by the authority. The root certificate along with a private key associated with the certificate can be treated with a high level of security and may be stored in an offline protected facility. The CA can use the root certificate to create one or more intermediate certificates, which can be used to sign digital certificates issued by the authority. In some examples, intermediate certificates can be used to issue digital certificates through registration authorities (RAs). The CA may delegate some or all of the requirements to authenticate to the RA for a particular domain namespace associated with the RA. These concepts are leveraged throughout the document. In particular, a private key can be secure in the security co-processor for a device. The private key can be used to sign information at the security co-processor. A public key can be provided to ensure that the signature from the security co-processor is valid. Similarly, a CA has a private key that can be used to sign certificates that can be used throughout the document. Public keys can be provided to verify that the certificates are signed by the CA. FIG.1is a block diagram of a device that includes a security co-processor capable of facilitating platform verification of a device, according to an example. The device100can include a security co-processor110provisioned with a device identity112, a processing element120, a memory device130, a bus140, and one or more bus device150. Examples of a device may include a switch, a router, a server, a desktop computer, a phone, an appliance, or any other computing device with the components and features described herein. The device can be operated in the systems ofFIGS.4and5for creating an initial integrity manifest certificate and later verifying the device. A processing element120, such as a central processing unit (CPU) or a microprocessor suitable for retrieval and execution of instructions and/or electronic circuits can be configured to perform the functionality of various software components. In certain scenarios, instructions and/or other information, such as firmware, database information, etc., can be included in memory devices130or other memory. Input/output interfaces may additionally be provided by the device100. In one example, input devices, such as a keyboard, a sensor, a touch interface, a mouse, a microphone, etc. can be utilized to receive input from an environment surrounding the computing device100. Further, an output device, such as a display, can be utilized to present information to users. Examples of output devices include speakers, display devices, amplifiers, etc. Moreover, in certain examples, some components can be utilized to implement functionality of other components described herein. Input/output devices such as communication devices like network communication devices or wireless devices can also be considered devices capable of using the input/output interfaces. 
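Referring back to the certificate authority hierarchy described above, the following minimal sketch shows a root key signing an intermediate, the intermediate signing a device certificate, and a verifier walking the chain using public keys only. Plain dictionaries stand in for X.509 certificates, and the issue and check helpers are illustrative names rather than part of any certificate library.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def public_pem(key) -> bytes:
    return key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

def issue(issuer_key, subject: str, subject_key) -> dict:
    """Issuer signs the subject name together with the subject's public key."""
    body = subject.encode() + public_pem(subject_key)
    return {"subject": subject,
            "public_key_pem": public_pem(subject_key),
            "signature": issuer_key.sign(body, ec.ECDSA(hashes.SHA256()))}

def check(cert: dict, issuer_public_key) -> bool:
    """Verifier recomputes the signed body and checks it against the issuer's public key."""
    body = cert["subject"].encode() + cert["public_key_pem"]
    try:
        issuer_public_key.verify(cert["signature"], body, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

root_key = ec.generate_private_key(ec.SECP256R1())           # kept offline by the CA
intermediate_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())          # e.g., a DevID key in a security co-processor

intermediate_cert = issue(root_key, "manufacturer-intermediate", intermediate_key)
device_cert = issue(intermediate_key, "device-serial-1234", device_key)

# A verifier that trusts only the root public key can validate the whole chain.
assert check(intermediate_cert, root_key.public_key())
assert check(device_cert, intermediate_key.public_key())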
In some examples, device100includes one or more firmware engine. A firmware engine can be implemented using instructions executable by a processor and/or logic. In some examples, the firmware engine can be implemented as platform firmware. Platform firmware may include an interface such as a basic input/output system (BIOS) or unified extensible firmware interface (UEFI) to allow it to be interfaced with. The platform firmware can be located at an address space where the processing element120(e.g., CPU) for the device100boots. In some examples, the platform firmware may be responsible for a power on self-test for the device100. In other examples, the platform firmware can be responsible for the boot process and what, if any, operating system to load onto the device100. Further, in the case of an appliance a simple switch, etc. the platform firmware may represent the operating system of the device100. The platform firmware may be capable to initialize various components of the device100such as peripherals, memory devices130, memory controller settings, storage controller settings, bus speeds, video card information, etc. In some examples, platform firmware can also be capable to perform various low level functionality while the device100executes. Moreover, in some examples, platform firmware may be capable to communicate with a higher level operating system executing on a CPU, for example via an advanced configuration and power interface (ACPI). In certain examples, the processing element may execute an Operating System. The Operating System is a system software that manages computer hardware and software resources and provides common services for computer programs. The OS can be executable on processing element120and loaded to memory devices130. The OS is a high level OS such as LINUX, WINDOWS, UNIX, a bare metal hypervisor, or other similar high level software that a boot firmware engine of the computing device100turns control of the device100to. The components of device100can interact with a certification station200to certify the device100(e.g., during a manufacturing state) and a verification station300to verify the device100is in the same condition at a later time. In some examples, the device100can be provisioned, at manufacturing, with a software stack that includes a secure or measured boot upon startup. In one example, a firmware engine can take an inventory and store the inventory as a stored inventory. The stored inventory can include an inventory of multiple components that may be desirous to be protected and tracked. Examples of devices or components to be inventoried include one or multiple processing elements120, memory devices130, a system board and/or multiple components of the system board, bus devices150on one or multiple bus140(e.g., a PCIe bus), a controller hub and/or devices connected to the controller hub, field replaceable unit enclosures, a northbridge device, other ASICs, etc. As used herein, the system board is the main printed circuit board used for the device100and allows communication between many of the components of the device, for example, the processing element120, the memory device130, peripherals, bus devices, etc. In some examples, a controller hub can be an I/O controller hub, for example a southbridge. The controller hub may be used to manage data communications between a CPU and other components of the system board. In some examples, a controller hub may have direct media interface to a northbridge device or the CPU. 
Further the controller hub may provide peripheral support for the device, such as bus connections like Universal Serial Bus (USB), Peripheral Component Interconnect (PCI), PCI express, PCI extended, serial AT attachment, audio circuitry, integrated Ethernet, enhanced host controller interfaces, combinations thereof, etc. Other examples of identifiers that can be used include system board revision identifiers, complex programmable logic device revision identifiers, ASIC stepping identifiers, platform and chassis identifiers, riser identifiers, embedded controller identifiers, battery and power identifiers, storage component identifiers, etc. These are examples of what may be included as components to be inventoried. These can change based on the device. For example, a server may have a northbridge and a southbridge, while a switch or other appliance may have less components. Firmware and/or software associated with one or more of the components may also be measured and stored in registers of the security co-processor. In the TPM context, the measurements can be stored in PCRs. Measurements can include, for example, a hash of a digest including the inventory. The inventory can include information about specific firmware and/or software that can be measured. Examples of this include firmware, software, and/or configurations. The measurements stored in the PCRs as part of the boot process can be considered a platform integrity measurement. As noted, the device100includes at least one processing element120. The processing element120can be configured to execute instructions stored on a memory device130. In some examples, for example, in the case of a server, one of the processing elements can include a BMC. In other examples, a main processing unit can execute the instructions. In various examples inFIGS.4and5, the device100performs various actions. This can be implemented using an interface of the device100coupled to a bus device150or another processing element120that is capable of performing the actions described in a particular device implementation. In some examples, a processing element or bus device performing the actions described can have a bus to communicate with the security co-processor110. In some examples, the device can be a server and the server can include a baseboard management controller (BMC). The BMC can be used to implement services for the device100. BMC can be implemented using a separate processor from the processing element120that is used to execute a high level operating system. BMCs can provide so-called “lights-out” functionality for computing devices. The lights out functionality may allow a user, such as a systems administrator, to perform management operations on the device100even if an operating system is not installed or not functional on the computing device. Moreover, in one example, the BMC can run on auxiliary power, thus the device100need not be powered on to an on state where control of the device100is handed over to an operating system after boot. As examples, the BMC may provide so-called “out-of-band” services, such as remote console access, remote reboot and power management functionality, monitoring health of the system, access to system logs, and the like. As used herein, a BMC has management capabilities for sub-systems of a device100, and is separate from a processor or processing element120that executes a main operating system of a computing device (e.g., a server or set of servers). 
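Referring back to the inventory and measurement process described above, the sketch below folds a hash of a component inventory into a register of the kind maintained by the security co-processor, together with the event log entry a verifier would later replay. The component list, JSON encoding, and register index are illustrative assumptions rather than values taken from this disclosure.

import hashlib
import json

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + measurement).digest()

inventory = {
    "system_board": {"revision": "A1"},
    "processing_element": {"model": "example-cpu", "microcode": "0x2f"},
    "bus_devices": [{"slot": 1, "id": "example-nic", "firmware": "1.4.2"}],
}

# Hash a stable serialization of the inventory and extend it into a register,
# recording the event log entry that a verifier will later replay.
inventory_digest = hashlib.sha256(
    json.dumps(inventory, sort_keys=True).encode()).digest()
pcr1 = extend(bytes(32), inventory_digest)
event_log = [{"pcr": 1, "event": "component-inventory", "digest": inventory_digest.hex()}]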
As noted, in some instances, the BMC may enable lights-out management of the device100, which provides remote management access (e.g., system console access) regardless of whether the device100is powered on, whether a primary network subsystem hardware is functioning, or whether an OS is operating or even installed. The BMC may comprise an interface, such as a network interface, and/or serial interface that an administrator can use to remotely communicate with the BMC. As used herein, an “out-of-band” service is a service provided by the BMC via a dedicated management channel (e.g., the network interface or serial interface) and is available whether the device is in powered on state. In some examples, a BMC may be included as part of an enclosure. In other examples, a BMC may be included in one or more of the servers (e.g., as part of the management subsystem of the server) or connected via an interface (e.g., a peripheral interface). In some examples, sensors associated with the BMC can measure internal physical variables such as humidity, temperature, power supply voltage, communications parameters, fan speeds, operating system functions, or the like. The BMC may also be capable to reboot or power cycle the device. As noted, the BMC allows for remote management of the device, as such, notifications can be made to a centralized station using the BMC and passwords or other user entry can be implemented via the BMC. In some examples, the device can track events in one or more logs. The logs can include information about any time that a PCR register is changed. As noted above, each time a PCR register change, it is updated by hashing the previous value. A log of these events can later be played back to determine a comparison point for a verifier. In some examples, a subset of possible actions that can be logged can be valid operations (e.g., boot operations) for verification and if a non-valid operation is found in a log, the integrity of the device can be considered compromised. FIG.2is a block diagram of certification station that can be used to certify a platform, according to one example. The certification station can be used in a system in conjunction with a device100to certify and store an initial integrity manifest certificate for the platform as further described in relation toFIG.4. The certification station200can be implemented as a computing device that includes, for example, a processing element210, and a machine-readable storage medium220including instructions230,232,234,236for certifying a manifest for a platform (e.g., device100). Certification station200may be, for example, a server, a notebook computer, a slate computing device, a desktop computer, a portable reading device, a wireless email device, a mobile phone, or any other computing device. Processing element210may be, one or multiple central processing unit (CPU), one or multiple semiconductor-based microprocessor, one or multiple graphics processing unit (GPU), other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium220, or combinations thereof. The processing element210can be a physical device. Moreover, in one example, the processing element210may include multiple cores on a chip, include multiple cores across multiple chips, multiple cores across multiple devices (e.g., if the certification station200includes multiple node devices), or combinations thereof. 
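Referring back to the event logs described above, the following minimal sketch records register-change events and screens them against a set of operations considered valid for verification; any other logged operation marks the device as suspect. The event types and the allowed set are illustrative assumptions.

# Only events from an allowed set of operations are considered valid for verification.
ALLOWED_EVENT_TYPES = {"measured-boot", "firmware-measurement", "component-inventory"}

def log_event(event_log: list, pcr_index: int, event_type: str, digest_hex: str) -> None:
    event_log.append({"pcr": pcr_index, "type": event_type, "digest": digest_hex})

def log_is_clean(event_log: list) -> bool:
    """Return False if any logged operation is outside the allowed set."""
    return all(entry["type"] in ALLOWED_EVENT_TYPES for entry in event_log)

events = []
log_event(events, 0, "measured-boot", "aa" * 32)
log_event(events, 0, "firmware-measurement", "bb" * 32)
assert log_is_clean(events)

log_event(events, 0, "runtime-flash-write", "cc" * 32)   # not a valid boot-time operation
assert not log_is_clean(events)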
Processing element210may fetch, decode, and execute instructions230,232,234,236to sign an integrity manifest for the platform. As an alternative or in addition to retrieving and executing instructions, processing element210may include at least one integrated circuit (IC), other control logic, other electronic circuits, or combinations thereof that include a number of electronic components for performing the functionality of instructions230,232,234,236. Machine-readable storage medium220may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, machine-readable storage medium may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a Compact Disc Read Only Memory (CD-ROM), and the like. As such, the machine-readable storage medium can be non-transitory. As described in detail herein, machine-readable storage medium220may be encoded with a series of executable instructions for signing an integrity manifest for a platform. In some examples, the certification station200is connected to a device that is to be provisioned. The device100can expose an interface to the certification station for communication to the security co-processor110. Various interfaces can be used, for example, a serial connection such as a Universal Asynchronous Receiver/Transmitter (UART), an I2C bus, a SPI bus, etc. In some examples, a connection can be direct to the security co-processor (e.g., via a header and direct connection), in other examples, a connection can be via intermediary microcontrollers (e.g., a BMC, a microcontroller connected to an interface, etc.) or the processing element120. In some examples, the processing element120can be connected via an interface to the certification station. The certification station200can be connected via a connection to a certificate authority. In some examples, this can be via a secure network connection. FIG.3is a block diagram of verification station that can be used to verify a platform, according to one example. The verification station can be used in a system in conjunction with a device100to verify that a platform is in a same state or an expected state as a time when an initial integrity manifest certificate was generated for the platform as further described in relation toFIG.4. The verification station300can be implemented as a computing device that includes, for example, a processing element310, and a machine-readable storage medium320including instructions330,332,334,336for certifying a manifest for a platform (e.g., device100). Verification station300may be, for example, a server, a notebook computer, a slate computing device, a desktop computer, a portable reading device, a wireless email device, a mobile phone, or any other computing device. Processing element310may be, one or multiple central processing unit (CPU), one or multiple semiconductor-based microprocessor, one or multiple graphics processing unit (GPU), other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium320, or combinations thereof. The processing element310can be a physical device. Moreover, in one example, the processing element310may include multiple cores on a chip, include multiple cores across multiple chips, multiple cores across multiple devices (e.g., if the verification station300includes multiple node devices), or combinations thereof. 
Processing element310may fetch, decode, and execute instructions330,332,334,336to verify the integrity for a platform. As an alternative or in addition to retrieving and executing instructions, processing element310may include at least one integrated circuit (IC), other control logic, other electronic circuits, or combinations thereof that include a number of electronic components for performing the functionality of instructions330,332,334,336. Machine-readable storage medium320may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, machine-readable storage medium may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a Compact Disc Read Only Memory (CD-ROM), and the like. As such, the machine-readable storage medium can be non-transitory. As described in detail herein, machine-readable storage medium320may be encoded with a series of executable instructions for verifying the integrity of a platform. In some examples, the verification station300is connected to a device that is to be verified. The device100can expose an interface to the certification station for communication to the security co-processor110. Various interfaces can be used, for example, a serial connection such as a Universal Asynchronous Receiver/Transmitter (UART), an I2C bus, a SPI bus, etc. In some examples, a connection can be direct to the security co-processor (e.g., via a header and direct connection), in other examples, a connection can be via intermediary microcontrollers (e.g., a BMC, a microcontroller connected to an interface, etc.) or the processing element120. A communication network can use wired communications, wireless communications, or combinations thereof. Further, the communication network can include multiple sub communication networks such as data networks, wireless networks, telephony networks, etc. Such networks can include, for example, a public data network such as the Internet, local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), cable networks, fiber optic networks, combinations thereof, or the like. In certain examples, wireless networks may include cellular networks, satellite communications, wireless LANs, etc. Further, a communication network can be in the form of a direct network link between devices. Various communications structures and infrastructure can be utilized to implement the communication network(s). By way of example, devices can communicate with each other and other components with access to a communication network via a communication protocol or multiple protocols. A protocol can be a set of rules that defines how nodes of the communication network interact with other nodes. Further, communications between network nodes can be implemented by exchanging discrete packets of data or sending messages. Packets can include header information associated with a protocol (e.g., information on the location of the network node(s) to contact) as well as payload information. FIG.4is a block diagram of a flow of certifying a platform, according to one example. System400includes a device100and a certification station200that are used to generate and store an initial integrity manifest certificate for the platform (e.g., device100). During manufacturing, the device100can be provisioned with a Device Identity (DevID). 
This is secured by being stored and usable only in the device's security co-processor (e.g., a TPM). The DevID includes a private key stored in the security co-processor. This allows the private key to be protected from malicious firmware or software running on the platform. An advantage of this approach is that if an attacker manages to run code on the platform, it won't be able to extract the private key to impersonate the platform later. The DevID can be associated with a public certificate. The public certificate can be, for example, an X.509 certificate or similar data structure. In one example, the data structure for the certificate includes information about the CA that issued the certificate, a serial number, and the DevID public key in a subject public key field. This public information can be retained by a manufacturer, provided by the security co-processor or another device on the platform, and be available to the certification station200. The device100can boot up. During boot, the device100can implement a secure boot or measured boot414. As noted above, during the boot process, an inventory of components and/or firmware and/or software can be taken. During the boot process, the instructions being executed as part of the boot process can cause secure storage of a platform integrity measurement at416. As noted above, in the context of a TPM, the platform integrity measurement can be stored in PCRs. The device100can be connected to the certification station200using an interface. The device100can be powered on and brought up. The certification station200can begin the certification process at410, where the processing element210of the certification station200can execute request instructions230to request a DevID certificate from the device100. The DevID112is stored in the security co-processor110. A public certificate can be provided to the certification station200. The public certificate can be stored at a security co-processor110, a TPM, another location on the device100or platform, or the like. In this case, the device100can send the DevID certificate to the certification station at412. In some examples, the certification station200may have access to the public certificate via another means, for example, a database that identifies the device100using an identifier such as a serial number. The certification station200can verify the DevID certificate using the first CA authority. The first CA is to have signed the DevID certificate. Once the DevID certificate is received by the certification station at418, the verification instructions232are executed to verify, using the first CA, the DevID. As noted, the certification station200can have a secure link to the first CA (e.g., a manufacturer authority) to validate the authenticity of a signature associated with the DevID certificate. In other examples, a certificate from the first CA may be used to validate the authenticity of the signature. After the DevID of the security co-processor110is verified, at420, the request instructions230can be executed to request a platform integrity measurement from the device100. The platform integrity measurement can be an integrity proof from the security co-processor110. As noted, this measurement can be taken during a boot process such as a measured boot. To obtain the integrity measurement, the DevID key can be used to securely retrieve the platform state captured in the security co-processor110(422). 
In the case of a TPM, this can include the information included in one or more platform configuration registers. Further, in the example of a TPM, this can be called a Quote. The integrity measurement proof can be signed using the DevID private key (424). The device 100 can send the secure integrity proof to the certification station (426). In some examples, the platform may also send integrity event logs. At 428, the certification station can execute verification instructions 232 to verify that the platform state captured in the integrity measurement proof (e.g., in TPM PCRs) matches the expected manufactured device values for the components, firmware, and/or software installed in the manufactured system (otherwise this could mean that there is an attack already happening during manufacturing). In some examples, a database can be created that maps expected states of components and firmware to product SKUs, model numbers, device types, or other grouping units. Customizations can also be accounted for and expected states stored. In some examples, for verifying the PCRs, the certification station may have access to the different event logs used to trace the modification of the PCRs. The integrity proof (and the associated log if needed), at 430, is then signed by a second certificate authority (CA) by executing signing instructions 234. In some examples, a quote is received from a TPM. A Quote may need a nonce. In some examples, a well-known value can be used for the nonce. In other examples, the nonce is also included in the data to sign. In some examples, the second CA may be a manufacturer-wide CA, which can allow the second CA to authenticate the platform's state as a genuine manufacturer platform. A more fine-grained CA hierarchy could be used. For example, a manufacturer can set up a dedicated CA for each release of a product or for a specific customer, and restrict usage of the CA to the manufacturing lines of given countries. This allows the manufacturer to guarantee to the customer the manufacturing lines used for their platforms. The signing can be based on the proof and the DevID 112. The processing element 210 can execute store instructions 236 to store the signed integrity manifest certificate (432). In one example, the signed integrity manifest certificate is stored on the platform itself, as it is integrity-protected by the CA signature. It could also be made available to a customer by other channels, such as being hosted on a website, or transmitted out-of-band by secure mail, email, a storage device, etc. At a later time, a verification station 300 can verify that the platform state is the same as, or in an expected state relative to, the state when the initial integrity manifest certificate was created or last verified. In some examples, the device 100 is shipped to another location (e.g., to a customer). FIG. 5 is a block diagram of a flow of verifying a platform that has been certified, according to one example. As noted, this can occur after the certification station 200 creates an initial integrity manifest certificate. In one example, the device 100 can be shipped from a manufacturing site to a customer site, and the customer can use the verification station 300 to perform verification. A verifying entity can be provided a medium 320 including instructions that can be used to implement the verification station 300. Different software to be executed can be provided for different platforms or devices.
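Referring back to the certification flow described above in connection with FIG. 4, the sketch below illustrates the step of comparing the reported platform state against expected manufactured values (here keyed by product SKU) and signing an initial integrity manifest with a key standing in for the second CA. The SKU database, manifest fields, and JSON encoding are illustrative assumptions, not the format used by any particular manufacturer.

import json
from typing import Optional

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Expected register values captured from known-good reference devices, keyed by SKU.
EXPECTED_STATE = {"SKU-1001": {"pcr0": "ab" * 32}}

second_ca_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the second CA's key

def certify(sku: str, reported_pcr0_hex: str, devid_cert_pem: str) -> Optional[dict]:
    """Compare the reported state with the expected state and, if it matches, sign a manifest."""
    expected = EXPECTED_STATE.get(sku)
    if expected is None or expected["pcr0"] != reported_pcr0_hex:
        return None  # state does not match manufacturing expectations; flag the device
    manifest = {"sku": sku, "pcr0": reported_pcr0_hex, "devid_cert": devid_cert_pem}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = second_ca_key.sign(payload, ec.ECDSA(hashes.SHA256())).hex()
    return manifest

signed_manifest = certify("SKU-1001", "ab" * 32, "<DevID certificate PEM>")
assert signed_manifest is not None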
The software is configured with a certificate representing the first CA used for signing the DevID certificate and a certificate representing the second CA used for signing the initial integrity manifest certificate. The certificates may be provided in one of various ways, for example, via a secure connection to the respective CAs, as a certificate that can be stored locally to verify the signatures, or the like. Upon reception of a platform, the customer installs and runs software to validate the DevID of the device. To begin, the device 100 can be booted and execute a measured or secure boot at 514. During this process, registers of the security co-processor 110 are populated with platform integrity measurements as described above (516). Request instructions 330 can be executed by the processing element 310 of the verification station 300 at 510 to request a DevID certificate from the device 100. This can be used to verify that the device 100 is an authentic device that is certified by the first CA. The request can be sent via an interface to the device 100, which can respond. At 512, a processing element or bus device on the device 100 can respond to the request by sending the DevID certificate. In some examples, the processing element or bus device can request the DevID certificate from a secure storage, for example a secure storage in the security co-processor 110 or another storage location on the device 100. An interface can be used to send the DevID to the verification station 300. At 518, the verification station 300 can verify the DevID certificate using a CA certificate associated with the first authority (e.g., a manufacturer authority). As noted above, in some examples, the DevID certificate can be verified using a local CA certificate (e.g., a public certificate). In other examples, the DevID certificate, or information from the DevID certificate, can be sent to the first CA for authentication using an application programming interface or another approach. At 520, the request instructions 330 can be executed by processing element 310 to request a proof from the device of the state of integrity of the platform. In one example, a nonce can be used for the request. The nonce can be a fresh nonce. At 522, the device 100 can run an attestation protocol to retrieve an integrity proof (e.g., a fresh Quote of the platform state using the DevID). This can also be signed using the DevID (524). Freshness of the integrity proof (e.g., using a new random nonce) is used because the platform may have been modified during shipping; if an old nonce were used, the procedure could be subject to replay attacks. The integrity proof can include the same information as before, or information with expected changes. For example, if a boot counter was included in the integrity proof, an expected change can be an increment of the boot counter (or an increment of less than a certain number of the boot counter). For signing, the nonce can be provided to the security co-processor, which has access to the platform integrity measurements (as well as other information such as counters). The security co-processor 110 can use the private key of the DevID to sign this information, or portions of the information (including the nonce), and provide it to a processing element or bus device that is communicating with the verification station 300. The integrity proof signed by the DevID and one or more logs (e.g., information about the state of the platform such as component identifiers, firmware information, etc.) can be provided to the verification station 300 (526).
The information can be sent via an interface connecting the verification station 300 to the device 100. At 528, verification instructions 332 can be executed to verify the fresh integrity proof. First, the public key of the DevID can be used to verify that the signature is correct. The verification can also include verifying that the public key has been certified by a trusted authority (e.g., the first CA). As described above, a local CA certificate can be used or a connection to the CA can be used. After the signature has been validated, the verification station can trust the platform's integrity values within the integrity proof (e.g., a TPM reboot counter, a clock, a digest of the PCRs, etc.). The digest of the PCRs is then used to validate the different event logs (e.g., UEFI, OS) that trace the firmware, software, or configuration integrity values that were extended into the PCRs at 514/516. The verification station 300 can execute request instructions 330 to obtain the initial integrity manifest certificate. In one example, the request can be made to the device 100, which can provide its initial integrity manifest certificate. In another example, the initial integrity manifest certificate can be provided to the verification station 300 in another way. As noted above, this can include a blockchain, a web portal, storage media, or another approach. At 532, the verification station 300 can execute verification instructions 332 to verify that the key used for signing the initial integrity manifest and the fresh integrity proof are the same (e.g., the DevID). At 534, the platform integrity proof can be compared with the initial integrity manifest by executing comparison instructions 334. This can be based on expected changes to the state of the platform. This is done by going through the event log entries, in chronological order, and recreating the intermediary value of each register (e.g., a PCR in a TPM implementation); the event logs can record which register is being used for each entry. After each of the entries has been processed, the digest of the registers should match the integrity proof's register digest. Changes made within the logs can be considered expected changes if the changes are included in a set of operations that are authorized. The register values can be recreated from the event logs using the values of the initial integrity manifest certificate as the starting point. Once the new integrity proof is verified, the comparison instructions can verify the signed integrity proof of the platform (using the certificate of the second CA used for signing the quote during provisioning) and then compare the platform's integrity values within the integrity proof and the initial integrity manifest (e.g., register values, reboot counter, etc.) in order to validate that the platform state has not been modified (or to evaluate what has been modified) since the integrity manifest was created or last verified (e.g., during shipping). This also enables the customer to protect against counterfeiting, as the customer would detect any platform that has not been provisioned completely by the manufacturer. The customer can also verify that the initial integrity manifest and the fresh integrity proof obtained at reception of the platform are signed by the same key in order to make sure the initial integrity manifest is from this particular platform. Security action instructions 336 can be executed to perform an action based on the results. If any of the verification steps 518, 528, 532, 534 fail, remedies can be taken.
For example, a notification can be sent to the manufacturer, the customer, the entity verifying, etc. In some examples, the device100can be blocked from booting up. In other examples, the security action can include a recommendation to not include the device100into the customer's infrastructure. Moreover, in some examples, the logs can be parsed to determine what the differences are between the current values of the integrity proof and the expected values. This information can be analyzed. FIG.6is a flowchart of a method for certifying a platform, according to one example. Although execution of method600is described below with reference to certification station200, other suitable components for execution of method600can be utilized (e.g., system400). Additionally, the components for executing the method600may be spread among multiple devices. Method600may be implemented in the form of executable instructions stored on a machine-readable storage medium, such as storage medium220, and/or in the form of electronic circuitry. A device identity can be provisioned to a device or platform during a manufacturing process. The device identity can include a private key that is provisioned to a security co-processor such as a TPM. The device identity can also include a public key paired to the private key. The public key can be included in a public certificate. A first CA can be used to certify the public certificate for the device identity. In some examples, this is a manufacturer's CA. The certification station can request the public certificate from the device or have another source for the public certificate (e.g., a database to look up a serial number). The certification station can validate that the public certificate is valid and signed by the first CA. If it isn't validated, the certification station can be alerted of a potential threat and can perform a security action (e.g., send a notification, an email, log the event, flag the device from being shipped, etc.). The certification station can then ask the device or platform for its current integrity proof. The device or platform can boot up using a secure or measured boot process. During this process, logs can be kept by the system and records kept at registers in the security co-processor (in the case of a TPM, in a PCR). At602, the certification station can ask the platform to retrieve the integrity proof for the platform at this state and the security co-processor can provide the integrity proof signed by the private key of the DevID. As such, the device identity of the device can be used to retrieve the integrity proof. The integrity proof can include a representation of each of multiple hardware components including a processing element, a memory device, a bus device, a system board. The integrity proof can also include a representation of multiple firmware components included on the device. Further, in some examples, the integrity proof can include a representation of software to execute on the processing element. In some examples, the software can include an image of an operating system or a part of the operating system provisioned for the device. At604, the device100can provide the integrity proof to the certification station. In some examples, this can be sent via an interface. At606, the certification station200can determine that the integrity proof is an expected value based on an expected provisioning state of the device and the device identity. 
In one example, the certification station can verify that the signature of the device identity is valid as part of the determination. In another example, the status of the registers can be compared to an expected value for the device based on the serial number or a model number of the device and the configuration that is expected for the device. Test devices can be used to create a database of the expected values. At608, the certification station can sign an integrity manifest certificate representing the device. In one example, a second CA is used to sign the integrity manifest certificate. In another example, the second CA and the first CA can be the same root CA. The certification station can include the integrity proof, which includes a representation of the components and at least one firmware or software representation of a component. Further, the certification station can include information used to verify the device identity (e.g., include the public certificate or a public key). At610, the certification station can cause storage of the integrity manifest certificate. The integrity manifest certificate can be stored in the device, be stored in a block chain, be send separately via a web portal, an email, a storage device, or the like. The device can be transferred to a customer or moved to another location and, at a later time, the integrity can be verified. FIG.7is a flowchart of a method for verifying a platform, according to one example. Although execution of method700is described below with reference to verification station300, other suitable components for execution of method700can be utilized (e.g., system500). Additionally, the components for executing the method700may be spread among multiple devices. Method700may be implemented in the form of executable instructions stored on a machine-readable storage medium, such as storage medium320, and/or in the form of electronic circuitry. Subsequent to the signing of the integrity proof and a shutdown of the device, the device can be connected to a verification station and turned on. At702, the verification station can verify a device identity certificate of the device being verified. A benefit of this verification is a trust that the security co-processor is the expected security co-processor that was provisioned with the DevID from the first CA. In one example, the verification station can query the device and receive a DevID certificate. The certificate can be validated as described above. In one example, if the certificate is not valid a security action can be performed as noted above because a potential threat may have occurred. In response to a successful verification of the device identity certificate, the verification station can request a fresh integrity measurement from the device (704). This action is trusted because the security co-processor has been validated and the DevID provisioned from the first CA. When the private key from the DevID signs this using the security co-processor, a level of trust is provided. The device can be in a state where a measured or secure boot has occurred and security registers are populated. The device can retrieve and send, using the security co-processor the fresh integrity measurement to the verification station. In some examples, a nonce can be provided by the verification station and used for the fresh integrity measurement. As noted above, this can be signed using a private key from the DevID. 
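Returning briefly to the certification step at608, the sketch below shows one way the certification station might assemble and sign the initial integrity manifest certificate. It is a sketch only: the HMAC stands in for the second CA's signing operation, the field names (devid_public_key, integrity_proof, components) are invented for illustration, and a real implementation would more likely issue an X.509-style certificate.

```python
# Assembling and signing a hypothetical initial integrity manifest certificate.
# The manifest ties the DevID public key to the integrity proof captured at the
# expected provisioning state, and is signed with a stand-in for the second CA key.
import hashlib
import hmac
import json
import secrets

SECOND_CA_KEY = secrets.token_bytes(32)       # stand-in for the second CA's private key

def build_integrity_manifest(devid_public: str, integrity_proof: dict) -> dict:
    manifest = {
        "devid_public_key": devid_public,     # lets a verifier tie the manifest to this platform
        "integrity_proof": integrity_proof,   # register digest, component/firmware representations
        "issued_at": "2024-01-01T00:00:00Z",
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    manifest["ca_signature"] = hmac.new(SECOND_CA_KEY, body, hashlib.sha256).hexdigest()
    return manifest

proof = {
    "pcr_digest": hashlib.sha256(b"expected provisioning state").hexdigest(),
    "components": [{"type": "system_board", "id": "SB-1234"},
                   {"type": "processing_element", "firmware": "1.4.2"}],
    "boot_counter": 1,
}
manifest_cert = build_integrity_manifest("devid-pub-ABC", proof)
print(manifest_cert["ca_signature"][:16])
```

The stored manifest, wherever it is kept (on the device, in a blockchain, behind a web portal), is what the verification station later compares the fresh integrity proof against.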
Because the verification station has a public key paired with the DevID, the verification station can verify that the fresh measurement is authentic. At706, the verification station can process the fresh integrity measurement and the integrity manifest certificate to determine whether an unexpected change occurred. The integrity manifest certificate can be retrieved, either from the device or via another means as explained above. The verification station can have or obtain a public key associated with the second CA. The public key can be used to verify that the integrity manifest certificate is authentic. If the certificate is not authentic, a security action can be taken. The integrity measurement from the fresh proof can be processed using an integrity event log, where the fresh integrity proof is processed using the integrity event log to determine a comparison point to compare the fresh integrity measurement with the integrity manifest certificate to determine whether an unexpected change occurred as explained above. Also as noted above, the determination as to whether the unexpected change has occurred can be based on a number of times the device has been booted between verifications of the device. One attack vector that this protects from is excessive power cycling in attempts to break security by a malicious entity. At708, a security action can be performed in response to a determination that an unexpected change occurred. In some examples, a customer may wish to send the device to a reseller or other preparer to modify the device prior to receiving a final device. This can include, for example, adding a new PCIe card, replacing a device, etc. In this example, the verification process700can be followed to verify that the device was initially in the expected state when the reseller received it. Then a new certification process600can be performed after the modifications are complete. The new integrity manifest certificate can be signed by a third CA. In some examples, the third CA can be associated with the reseller or modifier. In some examples, the new integrity manifest certificate can include a link (e.g., a hash) to the previous, certified, integrity manifest that is being superseded. This can create a chain of integrity manifest certificate that can be extended as modification occurs on the platform. While certain implementations have been shown and described above, various changes in form and details may be made. For example, some features that have been described in relation to one implementation and/or process can be related to other implementations. In other words, processes, features, components, and/or properties described in relation to one implementation can be useful in other implementations. Furthermore, it should be appreciated that the systems and methods described herein can include various combinations and/or sub-combinations of the components and/or features of the different implementations described. Thus, features described with reference to one or more implementations can be combined with other implementations described herein.
55,492
11861373
DETAILED DESCRIPTION In some examples, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In some examples, IaaS is one of the three main categories (or sub-categories) of cloud computing services. Most consider the other main categories to be software as a service (SaaS) and platform as a service (PaaS), and sometimes SaaS may be considered a broader category, encompassing both PaaS and IaaS, with even some considering IaaS to be a sub-category of PaaS as well. In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) in each VM, deploy middleware, such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc. In some examples, IaaS deployment is the process of putting a new application, or a new version, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling (OS), middleware, and/or application deployment (e.g., on self-service virtual machines (e.g., that can be spun up on demand) or the like. In some examples, IaaS may be used to provision an initial set of infrastructure components (e.g., services, etc.). In some embodiments, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. An overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files. As noted above, one way to provision the infrastructure is to describe it declaratively. As such, the configuration file may be a declarative file that merely describes each of the infrastructure components noted above and how they interact. The configuration file can describe the resource and the relevant fields needed to create the element, and then as other elements can be described that reference the previously described elements. 
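As a purely hypothetical illustration of that declarative style (no particular provider's schema is implied), the configuration could be modeled as plain data in which later elements refer to earlier ones by name:

```python
# Illustrative declarative description of infrastructure: each element lists the
# fields needed to create it, and later elements reference earlier ones by name.
infrastructure = {
    "vpc-main": {
        "type": "virtual_private_cloud",
        "cidr_block": "10.0.0.0/16",
    },
    "subnet-app": {
        "type": "subnet",
        "vpc": "vpc-main",              # reference to a previously described element
        "cidr_block": "10.0.1.0/24",
    },
    "vm-web": {
        "type": "virtual_machine",
        "subnet": "subnet-app",         # depends on the subnet, which depends on the VPC
        "image": "base-os-image",
        "shape": "standard.2",
    },
}
```

A provisioning tool consuming a description like this can infer, from the references alone, that the VPC must exist before the subnet and the subnet before the virtual machine.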
In some examples, a provisioning tool can then generate a workflow for creating and managing the elements that are described in the configuration file. In some instances, the workflow of the provisioning tool may be configured to perform various commands. One function that can be performed is view reconciliation, where the provisioning tool can compare the view of the current infrastructure (e.g., the expected state of the infrastructure) with how the infrastructure is actually running. In some instances, performing the view reconciliation function may include querying various resource providers or infrastructure resources to identify what resources are actually running. Another function that the provisioning tool can perform is plan generation, where the provisioning tool can compare the actually running infrastructure components with what the provisioning tool wants the state to look like (e.g., the desired configuration). In other words, the plan generation function can determine what changes need to be made to bring the resources up to the most current expectations. In some instances, a third function is the execution (e.g., apply) function, where the provisioning tool can execute the plan generated by the plan generation function. In general, provisioning tools may be configured take the configuration file, parse the declarative information included therein, and programmatically/automatically determine the order in which the resources need to be provisioned in order to execute the plan. For example, if a virtual private cloud (VPC) needs to be booted before security group rules and VMs are booted, then the provisioning tool will be able to make that determination and implement the booting in that order without user intervention and/or without that information necessarily being included in the configuration file. In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned. As noted above, generally there are two different tools used to handle each of the provisioning of infrastructure resources and the deployments of code to control the infrastructure resources, with orchestration between the two tools being performed manually. However, at scale, manual implementation always leads to deviations. Thus, an automated tool that can both provision and deploy a virtual infrastructure enables more efficient and reliable techniques for implementing a virtual cloud environment. In some examples, when two tools are used, issues can arise when a user manually makes changes to the code between the provisioning phase and the deployment phase. As described herein, a technique that uses a single tool for both provisioning and deploying can alleviate that by automating the process, such that there isn't an opportunity for manual code changes. 
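The reconciliation, planning, and ordering behavior just described can be sketched briefly. The snippet below is an illustration rather than the provisioning tool's actual algorithm: it diffs a desired state against the actually running state to produce a plan, then uses a topological sort over declared dependencies to decide execution order. All resource names and the dependency map are assumptions.

```python
# Plan generation (diff of desired vs. actual state) and ordering (topological
# sort over the dependency graph), sketched with Python's standard library.
from graphlib import TopologicalSorter

def generate_plan(desired: dict, actual: dict) -> list:
    """Compare the expected infrastructure with what is running and list needed changes."""
    plan = []
    for name, spec in desired.items():
        if name not in actual:
            plan.append(("create", name))
        elif actual[name] != spec:
            plan.append(("update", name))
    plan.extend(("delete", name) for name in actual if name not in desired)
    return plan

# Hypothetical desired/actual views used for plan generation.
desired = {"vpc-main": {"cidr": "10.0.0.0/16"},
           "subnet-app": {"cidr": "10.0.1.0/24"},
           "vm-web": {"shape": "standard.2"}}
actual = {"vpc-main": {"cidr": "10.0.0.0/16"},
          "vm-old": {"shape": "standard.1"}}
print(generate_plan(desired, actual))
# [('create', 'subnet-app'), ('create', 'vm-web'), ('delete', 'vm-old')]

# Ordering: dependencies inferred from the configuration (resource -> predecessors).
dependencies = {"vpc-main": set(),
                "security-group-rules": {"vpc-main"},
                "subnet-app": {"vpc-main"},
                "vm-web": {"subnet-app", "security-group-rules"}}
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['vpc-main', 'security-group-rules', 'subnet-app', 'vm-web']
```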
It may be the case, that a slight change to the way in which one user codes something, can create major issues in the deployment phase. In some examples, the first time an operator performs an action in a new region (e.g., a typo in the code), the object that was coded with the typo may be that way forever. If the application is deployed with that typo, and the application is not sensitive to that typo (e.g., it still works), it is possible that some time down the road, an additional code change could become sensitive to that typo, and crash the entire system. Thus, the techniques provided herein can remove the gap between provisioning and deployment that can often lead to problems. In general, modeling deployments is declarative such that a configuration file can be used to declare the infrastructure resources. For example, create, read, update, delete (CRUD) instructions are generally used to generate deployment files using general Representational State Transfer (REST) concepts (e.g., REST Application Programming Interfaces (APIs)). However, deployment itself does not generally follow the same concept. Additionally, while the infrastructure provisioning tools tend to be really powerful and/or expressive, the tools for deployment tend to be much more restrictive regarding the operations they can perform (e.g., they are imperative as opposed to declarative). Thus, there has been a long-felt need for a tool that can handle both functional requirements (e.g., provisioning and deployment of infrastructure elements) within a cloud environment. The embodiments disclosed herein may utilize a cloud infrastructure orchestration service (CIOS). This service can be configured to manage both provisioning and deploying of infrastructure assets within a cloud environment. In some instances, the CIOS can include two classes of service: the Central and Regional components (e.g., CIOS Central and CIOS Regional). CIOS can be described as an orchestration layer that applies configuration to downstream systems (e.g., world-wide). It is designed to allow world-wide infrastructure provisioning and code deployment with no manual effort from service teams (e.g., beyond an initial approval in some instances). The high level responsibilities of CIOS include, but are not limited to:Providing teams with a view in to the current state of resources managed by CIOS, including any in-flight change activity.Helping teams plan and release new changes.Coordinating activity across various downstream systems within a region to execute approved release plans with no human intervention.Coordinating activity across regions/realms to execute approved release plans world-wide. Evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned can be challenging. Conventionally, if a service that has not previously been provisioned is requested (e.g., via API call), the request would be rejected and the user would have no access to the service. Additionally, services that have gone unutilized (or at least underutilized according to some predefined threshold) may continue to run and waste resources even though the service(s) are not needed. To address these inadequacies, IaaS may be configured to identify implicitly when to add a new service and/or remove a service and may perform corresponding operations for deploying or winding down the service automatically. 
Additionally, or alternatively, IaaS may be configured with user interfaces from which a user may explicitly request the addition or removal of a service, view the current status of various services, and the like. These techniques can provide improved control and an overall more seamless and less-frustrating user experience. FIG.1is a block diagram of an architecture for implementing at least some elements of a cloud infrastructure orchestration service, according to at least one embodiment. For example,FIG.1depicts an architecture100for illustrating techniques for implementing CIOS Central102. In some examples, CIOS Central102can be the service that handles operations at the level of a “Flock.” A flock may be a model CIOS may use to encapsulate a control plane and all its components. A flock may be used to model ownership of and point at the infrastructure components. An infrastructure component may be a long-lived piece of infrastructure that supports running code (e.g., a deployment application, a load balancer, a domain name system (DNS) entry, an object storage bucket, etc.). A flock configuration file may be used to describe the set of all infrastructure components, artifacts, and deployment tasks associated with a single service. Each flock may have one flock configuration file (herein referred to as a “flock config”). Flock configs are checked in to source control. Flock configs are declarative and provide realm, region, availability domain (AD), and artifact versions as input. An artifact refers to code being deployed to a deployment application or a Kubernetes engine cluster, or configuration information (hereinafter, “config”) being applied to an infrastructure component. In some embodiments, CIOS Central102has a few responsibilities, including but not limited to:Serving as an authentication gateway for Flock metadata changes and release operations.Storing an authoritative mapping of Flock metadata to the deployment artifacts and CIOS repositories for the flock.Coordinating global Releases across Phases and Targets.Synchronization to enforce policies like “no more than one ongoing release to a Flock at a time.”Detecting changes to Flock configuration (config) and artifacts, and triggering a release generation on such changes. In some examples, a source code version-control management service (SCVMS)104can be configured to store authoritative Flock configuration and an artifact notification service (ANS)106can be subscribed to by CIOS Central102, so that CIOS Central102can be informed of new artifact builds. The CIOS Central102can then map incoming changes against the affected flocks, and initiate release planning where desired. Additionally, in some examples, an artifact push service (APS) can be invoked by CIOS Central102, before a release to a target, to ensure any artifacts required for a successful release are present in the target's region ahead of release. In some examples, customers (e.g., engineers)108can call CIOS Central102to CRUD flocks and/or releases, and to view the status of ongoing CIOS activity. 
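By way of a hypothetical example (the actual flock config syntax is not reproduced here), the kind of information a flock config carries could be modeled as data along these lines, with the realm, region, availability domain, and artifact versions supplied as inputs:

```python
# Illustrative only: a stand-in for a flock config, showing the declarative
# inputs described above together with the infrastructure components,
# artifacts, and deployment tasks associated with a single service.
flock_config = {
    "service": "example-service",
    "inputs": {
        "realm": "realm-1",
        "region": "region-a",
        "availability_domain": "AD-1",
        "artifact_versions": {"example-service": "2.3.1"},
    },
    "infrastructure": {
        "lb-front": {"type": "load_balancer"},
        "dns-entry": {"type": "dns_record", "points_to": "lb-front"},
        "bucket-logs": {"type": "object_storage_bucket"},
    },
    "artifacts": [{"name": "example-service", "kind": "deployment_application"}],
    "deployment_tasks": [{"deploy": "example-service", "to": "lb-front"}],
}
```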
Flock management service110can include one or more API's to manipulate flocks, view/plan/approve service112can include CRUD API's to create and approve plans, and to view a central copy of the state of all CIOS-managed resources, change monitoring service114can watch SCVMS104for changes to flock config, and can receive notifications about changes to other artifacts from ANS106, and state ingester service116can create copies of each regional state (e.g., a point-in-time snapshot of a state of regional resources) in CIOS Central database (DB)118so that view/plan/approve112can expose them. In some examples, the CIOS Central DB118can be a DB of flocks, plans, and state. Flock information can be authoritative; while everything else may be a stale copy of data from CIOS Regional120. CIOS Central102may be configured to provide any suitable portion and/or number of user interfaces for presenting any suitable data related to a flock, a release, an infrastructure component, an artifact, or the like. In some embodiments, CIOS Central102may present via any suitable interface data related to one or more releases. A release may include any suitable combination of tasks related to one or more infrastructure components and/or one or more code changes to one or more applications (e.g., artifacts). In some examples, engineer108can perform an API call for the flock management service110(e.g., through the ingress proxy fleet122) to create a list of flocks. The protocol for making such an API call can be hypertext transport protocol secure (HTTPS) or the like. Relevant access control lists (ACLs) for this operation can include a local area network (LAN)124or other private connection. For example, CIOS may manage/control a network-connectivity alternative to using the public Internet for connecting a customer's on-premises data center or network with CIOS (e.g., a dedicated, leased, and/or private connection). Additionally, authentication and authorization (e.g., of the engineer108) may be performed by a reservation system portal that allows users to manage machine infrastructure (e.g., reservation service). In some instances, CIOS Central102can store flock metadata, plans, and state in the Central DB118, using Java database connectivity (JDBC) or the like. In some examples, ANS106can be configured to notify the change monitoring service114when new artifacts have been published. The ANS106may use HTTPS, and both authentication and authorization may be handled by a mutual transport layer security service. Additionally, in some instances, the change monitoring service114can poll the SCVMS104for flock configuration changes. This polling can be performed using secure shell (SSH) or other protocols. Authentication of the change monitoring service114may be handled by a CIOS system account and authorization may be handled by SCVMS104. In some examples, the engineer108can use the view/plan/approve service112to do one or more of the following operations. The engineer108can plan and/or approve by calling CIOS Central102to generate and approve plans. The engineer108can view by calling CIOS Central102to view the status of ongoing CIOS activity world-wide. Additionally, the engineer108can CIOS Central102to view a replica of the state of CIOS-managed resources world-wide. These API calls (or the like) can be performed via the HTTPS protocol or similar protocols. Additionally, relevant access control lists (ACLs) can be controlled by LAN124, and both authentication and authorization can be handled by the reservation service. 
In some examples, the view/plan/approve service112can request planning and push plan approval to all regions of CIOS Regional120(e.g., using HTTPS or the like). Relevant ACLs can be controlled using a security list managed by the wide area network (WAN) gateway126. Authentication can be handled by mutual transport layer security and authorization can be handled by various identity policies. Further, the state ingester service116can watch CIOS Regional120for job status or state changes, so that CIOS can provide a central view of them upon request (e.g., also using HTTPS or the like). ACLs for this can also be handled by the WAN gateway126, and both authentication and authorization can be handled by mutual transport layer security services. FIG.2depicts an architecture200for illustrating techniques for implementing at least CIOS Regional202. In some examples, CIOS Regional202is where much of the work of declarative provisioning and planning, as well as approved release application, can occur. In some instances, each instance of CIOS Regional202may have a regional frontend that can handle operations at the level of “Execution Targets.” It can be configured to perform the following: handling all CIOS authentication for incoming operations from CIOS Central102; enforcing a rule that only one “execution” (plan/import resources/apply plan) can be ongoing for a given Execution Target at a time; managing binary artifact storage for declarative provisioning artifacts used for input and output during declarative infrastructure provisioning execution (examples of input are declarative infrastructure provisioning configuration files and an input state file, while typical output is a final state file); and requesting work from, and polling for results from, the CIOS Executor for any given execution. In some instances, the CIOS Frontend may be dependent on a CIOS Executor206(also referred to herein as a “scheduler”), which can handle the actual execution. The CIOS Executor, in some examples, operates at the level of “Execution,” and it can: track a pool of available Worker nodes; query incoming job requests and assign them to eligible workers as they become available; track worker status and Execution updates for reporting to clients; detect dead nodes via a leasing protocol, and fail tasks assigned to dead nodes, depending on task status; and provide facilities to cancel/kill/pause/resume Executions, mapping those onto facilities to pass cancellation/kill/resumption information on to Worker nodes. In some instances, the CIOS Executor can depend on CIOS Workers; the Executor can assign tasks for execution to Workers and provide a facility for Workers to update job progress. The worker service operates at the granularity of “Task.” Each worker is an agent executing Tasks assigned to that worker and reporting Task status and output. Each worker can: poll Executor Worker APIs for assigned work items, and take action to make the assigned state match its local state (starting containers for polled task items that do not exist locally, and killing containers for locally running containers that have no corresponding assigned task item); report status for jobs; stage input and output for job container execution; and launch and monitor declarative infrastructure provisioning containers for doing the real work of a Release for an Execution Target. CIOS Workers may depend on the CIOS Executor, polling work from and reporting results to the worker endpoint of the CIOS Executor. The Worker may rely on the Executor for all coordination.
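The leasing protocol mentioned above can be sketched in a few lines. In this non-authoritative illustration, a worker's poll doubles as a lease renewal, and the scheduler fails over tasks whose worker has not polled within the lease window; the class names, lease length, and in-memory bookkeeping are all assumptions.

```python
# Lease-based dead-node detection: each poll renews the worker's lease, and the
# scheduler reclaims tasks held by workers whose lease has expired so they can
# be reassigned to a live worker.
import time

LEASE_SECONDS = 30

class Scheduler:
    def __init__(self):
        self.leases = {}          # worker_id -> last renewal time
        self.assignments = {}     # worker_id -> task_id

    def poll(self, worker_id):
        """Worker poll doubles as a lease renewal; returns the assigned task, if any."""
        self.leases[worker_id] = time.monotonic()
        return self.assignments.get(worker_id)

    def assign(self, worker_id, task_id):
        self.assignments[worker_id] = task_id

    def reap_dead_workers(self):
        """Fail tasks held by workers whose lease expired, so they can be reassigned."""
        now = time.monotonic()
        failed = []
        for worker_id, last_seen in list(self.leases.items()):
            if now - last_seen > LEASE_SECONDS and worker_id in self.assignments:
                failed.append(self.assignments.pop(worker_id))
        return failed

scheduler = Scheduler()
scheduler.assign("worker-1", "release-task-42")
scheduler.poll("worker-1")            # lease renewed; task stays assigned
print(scheduler.reap_dead_workers())  # [] while the lease is fresh
```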
Additionally, the CIOS Workers may also depend on CIOS Regional202, where the Worker services reads input from and writes output to one or more APIs that are associated with the Regional Frontend service. Examples of input are configuration and starting state files and import mappings. Examples of output are declarative provisioning process, output declarative provisioning state files, and import result states. In some examples, CIOS Regional202can be a regional service for managing regional instances/deployments of CIOS. CIOS Regional202covers responsibility for authoritatively storing and managing plans and stat that pertains to a particular region. A Regional DB204may be a CIOS DB for the state and plans in the particular region. This is the authoritative copy of the region's subset of the Central DB118ofFIG.1. Scheduler206can be responsible for managing worker fleet capacity, assigning tasks to workers, and keeping track of task state. In some instances, Task DB208is another CIOS DB for task state. Data in this DB is mostly for operational purposes. Additionally, Worker210can be a fleet of java virtual machines (JVMs) that manage declarative provisioning images. These receive instructions from the Scheduler206and communicate results to both the Scheduler206and CIOS Regional202. A CIOS container212can run declarative provisioning actions in its own private docker214container. This container does not need to contain secrets. Additionally, in some examples, a signing proxy216can be configured to prevent secret exfiltration via a declarative provisioning tool, in order to avoid putting secrets in the declarative provisioning Image. Instead, CIOS can perform request signing or initiate a mutual transport layer security (mTLS) service in a proxy. This also makes it easier to use FIPS-compliant crypto libraries. In some examples, CIOS Central102can call CIOS Regional202to create plans, push approvals, watch job status, and extract declarative provisioner state (service principal). An ingress proxy218can be configured as the ACL and various identity policies may be used for both authentication and authorization. Alternatively, in some examples, the ingress proxy218may be replaced with a load balancer configured to balance the load incoming requests, plans, etc. In some instances, CIOS Regional202may run a declarative provisioner by asking the scheduler206to do so. Worker210can ask Scheduler206what it should be running, and can report status to Scheduler206when done. In some cases, mTLS may handle both authentication and authorization for CIOS Regional202and Worker210. Additionally, when Worker210needs to run a declarative provisioner, it does so in docker containers by interacting with the local docker214. Authentication for this stage may be handled by a local unix socket. A docker protocol may be used for this last step; however, HTTPS may be utilized for the previous ones. In some embodiments, CIOS Regional202may be configured to provide any suitable portion and/or number of user interfaces for presenting any suitable data related to a flock, a release, an infrastructure component, an artifact, or the like. In some embodiments, CIOS Regional202may present via any suitable interface data related to one or more releases. A release may include any suitable combination of tasks related to one or more infrastructure components and/or one or more code changes to one or more applications (e.g., artifacts). 
In some examples, the CIOS container212enables a declarative provisioner to interact (via API) with the signing proxy216, while the declarative provisioner thinks it is calling various CIOS services. The signing proxy216listens on one ephemeral port per calling instance of declarative provisioner, known only to that declarative provisioner. The signing proxy216can initiate requests signatures or mTLS, and can pass the declarative provisioner's calls through to other CIOS services within the service enclave. In some instances, the signing proxy216can also communicate with one or more public CIOS services220. For example, the Signing Proxy216will use the internal endpoint of public services where possible. For services with no internal endpoint, it must use the egress proxy222to reach the external endpoint. This use of the signing proxy216may not be for cross-region communication; for example, an egress proxy whitelist in each region may only be for that region's public IP ranges. In some examples, Worker210may then persist state and logs from a declarative provisioner in CIOS Regional202so that they can be exfiltrated to CIOS Central102. CIOS (or a declarative infrastructure provisioner such as the declarative provisioning tool of CIOS discussed above) may be utilized to parse the configuration file. Through this parse, CIOS (or the declarative provisioning provisioner) may generate a directed acyclic graph (DAG) for each resource, module, and/or capability that compiles and defines an ordered list of dependencies on other resources, modules, and/or capabilities. While attempting to deploy a resource, CIOS may traverse the DAG to identify when a resource is dependent on another resource, module, and/or capability of another resource. The DAG for each resource may specify implicit dependencies, explicit dependencies, or a combination thereof and may be used for booting or otherwise deploying the corresponding resource with CIOS. FIG.3is a flow diagram for illustrating a flow300of operations performed in response to an application programming interface (API) call to a previously deployed service (e.g., service302), in accordance with at least one embodiment. Service302may be one of a set of previously deployed services that have been deployed within a given region. The services discussed in the following figures are examples of a computing component of the cloud-computing environment ofFIGS.1and2. The same examples can be applied to other computing components (e.g., databases, object storage, block storage, virtual machines, etc.). The flow300may begin at304, where an API request may be made by user device306. In some embodiments, the API can be performed via the HTTPS protocol or similar protocols. The request may include any suitable information such as an identifier of the user, an identifier for the service associated with the request, or the like. The API call may be received by gateway computer308, which may be an example of a computer that implements the WAN gateway126ofFIG.1. The gateway computer308may be configured to maintain a routing table for previously deployed regional services (e.g., the cloud services A-N ofFIG.2). The routing table may include Internet Protocol (IP) addresses for each service and/or infrastructure component within the environments provided by the architectures ofFIGS.1and2. The routing table may contain any suitable information needed to forward a data packet along toward its destination. 
For example, the routing table may include a network identifier (e.g., an IP address) for the destination, a subnet mask that is used to match the destination IP address to the network ID, and any suitable information configured to enable the data packet to be forwarded to toward the destination. At310, the gateway computer308(or any suitable component ofFIG.2, such as ingress proxy218, CIOS regional202, or the like) may authenticate the user of the API call. In some embodiments, authentication may include making an API call to one or more services (e.g., an identity service) configured to maintain permissions and/or identity data for one or more users of the system. By way of example, the gateway computer308may call an identity service configured with permissions and/or identity data that may be utilized with the identifier of the user as received in the request to identify the user and/or one or more permissions policies associated with that user. At312, a determination may be made as to whether the API call is able to be routed. This determination may include any suitable combination of identifying 1) if the user is who they purport to be, 2) if the user has authorization to invoke such functionality of the service302, or 3) if an identifier for the service (e.g., an IP address) is included in the current routing table maintained by the gateway computer308. By way of example, if the identity of the user is known and/or the permissions associated with the user indicates the user has the authority to make that API call, and the identifier of the service302is included in the routing table maintained by the gateway computer308, the gateway computer308may forward the request to the service302. This forwarding may include identifying, from the routing table and a service identifier received in the message and associated with the service302, the destination address associated with the service302. At314, the service302may receive the API call and perform any suitable operations for processing the call. Once processing has completed, the service may return a response at316to the gateway computer308. In some embodiments, this response may indicate whether the processing was successful (e.g., completed) or unsuccessful (e.g., incomplete/not allowed). At316, the user device306may receive the response and execute any suitable operations such as, but not limited to, displaying for the user an indication that the API call was successfully processed. Alternatively, if at312, a determination may be made by the gateway computer308(or another suitable component ofFIG.2) that the API is not routable. For example, if the user is not who they purport to be and/or the user does not have the authority to make such API calls and/or if an identifier for the service (e.g., an IP address) is not included in the current routing table maintained by the gateway computer308, the gateway computer308(or other suitable component) may determine the API call is not routable and may forward a return error code (e.g., an alphanumeric value indicating a general or particular error) to the user device306at320. At322, the user device306may be configured to perform one or more operations to handle the error such as displaying at the screen an indication of the error, enabling user options to attempt another request, etc. FIG.4is a flow diagram for illustrating a flow400of operations performed in response to an API call for a service (e.g., service402) that has not yet been deployed, in accordance with at least one embodiment. 
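The routability determination at312can be summarized with a short sketch. This is illustrative only: the in-memory routing_table and permissions mappings and the route_api_call function are stand-ins for the gateway's routing table and the identity service it consults.

```python
# Routability check at the gateway: forward the request only if the caller is
# authenticated, authorized for the target service, and the service already has
# an entry in the routing table.
routing_table = {"service-A": "10.0.2.15"}            # service id -> destination IP
permissions = {"user-1": {"service-A", "service-B"}}  # user id -> services the user may call

def route_api_call(user_id: str, service_id: str):
    if user_id not in permissions:                    # 1) is the user who they claim to be?
        return ("error", "unauthenticated")
    if service_id not in permissions[user_id]:        # 2) may they call this service?
        return ("error", "unauthorized")
    destination = routing_table.get(service_id)       # 3) is the service deployed/routable?
    if destination is None:
        return ("error", "not-routable")
    return ("forward", destination)

print(route_api_call("user-1", "service-A"))   # ('forward', '10.0.2.15')
print(route_api_call("user-1", "service-B"))   # ('error', 'not-routable')
```

When the third check fails because the service has not yet been deployed, the bootstrap path described next for FIG.4can be taken instead of simply rejecting the call.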
In some embodiments, a predefined set of services may be previously deployed in the region and/or accessible to the user device by API call. Service402may not be included in that initial predefined set of deployed services. The flow400may begin at404, where an API request may be made by user device406(e.g., an example of the user device306ofFIG.3). In some embodiments, the API call can be performed via the HTTPS protocol or similar protocols. The API call may include any suitable information such as an identifier of the user, an identifier for the service associated with the request, or the like. The API call may be received by gateway computer408, which may the same or similar to the gateway computer408and which may be an example of a computer that implements the WAN gateway126ofFIG.1. Similar to the gateway computer discussed above, the gateway computer408may be configured to maintain a routing table for previously deployed regional services (e.g., the cloud services A-N ofFIG.2). The routing table may include Internet Protocol (IP) addresses for each service and/or infrastructure component within the environments provided by the architectures ofFIGS.1and2. The routing table may contain any suitable information needed to forward a data packet along toward its destination. For example, the routing table may include a network identifier (e.g., an IP address) for the destination, a subnet mask that is used to match the destination IP address to the network ID, and any suitable information configured to enable the data packet to be forwarded to toward the destination. At410, the gateway computer408(or any suitable component ofFIG.2, such as ingress proxy218, CIOS regional202, or the like) may authenticate the user of the API call. As described above, authentication may include making an API call to one or more services (e.g., an identity service) configured to maintain permissions and/or identity data for one or more users of the system. By way of example, the gateway computer308may call an identity service configured with permissions and/or identity data that may be utilized with the identifier of the user as received in the request to identify the user and/or one or more permissions policies associated with that user. As another example, the gateway computer408may maintain user data associated with any suitable number of users and may authenticate the user against that user data using any suitable information obtained from the API call (e.g., an identifier associated with the user). At412, a determination may be made as to whether the API call is able to be routed. This determination may include any suitable combination of identifying 1) if the user is who they purport to be, 2) if the user has authorization to invoke such functionality of the service302, or 3) if an identifier for the service (e.g., an IP address) is included in the current routing table maintained by the gateway computer408. This determination may be similar to the one made at312ofFIG.3. If the API call is not routable (e.g., the service402has not yet been deployed, is not included in the routing table, etc.), the flow400may proceed to414, where a return error may be forwarded to the user device406and data may be sent to orchestrator416(e.g., CIOS Central102ofFIG.1) indicating that a bootstrap of the requested service (e.g., service402) is to be initiated. 
At418, the user device406may be configured to perform one or more operations to handle the error such as displaying at the screen an indication of the error, enabling user options to attempt another request, etc. Upon making another API call, the flow may proceed back to404and the remaining portion of the flow may be repeated any suitable number of times. The operations of414,418, and420may be executed in any suitable order. At420, the orchestrator416may receive the bootstrap request and perform any suitable operations for bootstrapping (e.g., loading into memory (e.g., memory of a virtual machine) and/or initializing) the absent service (e.g., service402). In some embodiments, the orchestrator416may utilize a predefined set of instructions associated with bootstrapping the service402. By way of example, a directed acyclic graph (DAG) associated with the service402may be utilized to identify one or more instructions for bootstrapping the service402. In some embodiments, this DAG is part of a larger DAG that maintains interdependencies of various services with capabilities of other services (e.g., a specific portion of functionality provided by another service). The DAG associated with service402may be any suitable part of a finite directed graph that includes any suitable number of nodes and edges with each edge being directed from one node to another. The nodes and edges may be arranged to avoid directed cycles. Thus, the finite directed graph (e.g., the DAG) is arranged such that there is no way to start at any node and follow a consistently directed sequence of edges that eventually loop back to that same node. A last node of the finite directed graph may point to a null value or otherwise indicate an end to the finite directed graph. In some embodiments, each node of the DAG corresponds to a set of operations or a set of capabilities on which a next node of operations depends. The directed edges of each DAG define an order by which these operations are to be executed and/or a dependency between a subset of operations associated with a node and a subset of capabilities associated with an immediately preceding node. The operations of each node are to be executed in the order corresponding to the order of nodes and may individually correspond with one or more dependencies. By way of example, a first node in the DAG may correspond to a dependency of operations corresponding to a third node of the DAG (e.g., corresponding to service402) on a capability associated with a different resource (e.g., service A). Similarly, a second node of the DAG may correspond to a dependency of operations corresponding to the third node (e.g., representing service402) on a capability associated with a different resource (e.g., resource B). In some embodiments, different capability nodes (e.g., a node identifying a dependency on a particular resource's capability/capabilities) may be used for different resources, or a single node may be utilized to specify all dependencies regardless of how many resources to which the dependencies refer. Thus, in some embodiments, the dependency of service402on service A and resource B may be combined in a single node. The DAG may be traversed to orchestrate the execution of operations for booting and/or deploying a resource in a cloud-computing environment (e.g., the service402) with respect to one or more dependencies on capabilities of other resources (or other resources themselves). 
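One possible in-memory shape for such a DAG, with capability nodes gating the operations that follow them, is sketched below; the node kinds, names, and ready_to_run helper are illustrative assumptions rather than the format the orchestrator actually uses.

```python
# A hypothetical bootstrap DAG: nodes are either capabilities that other
# resources must provide, or operations for the resource being deployed, and
# edges order them so the operations run only once their capabilities exist.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                  # "capability" or "operations"
    depends_on: list = field(default_factory=list)

# DAG for deploying a hypothetical "service-402": its operations depend on the
# capabilities of service A and resource B being available first.
bootstrap_dag = [
    Node("service-A.api-ready", kind="capability"),
    Node("resource-B.storage-ready", kind="capability"),
    Node("deploy-service-402", kind="operations",
         depends_on=["service-A.api-ready", "resource-B.storage-ready"]),
]

def ready_to_run(node: Node, available_capabilities: set) -> bool:
    """A node may execute only when every capability it depends on is available."""
    return all(dep in available_capabilities for dep in node.depends_on)

print(ready_to_run(bootstrap_dag[2], {"service-A.api-ready"}))                              # False
print(ready_to_run(bootstrap_dag[2], {"service-A.api-ready", "resource-B.storage-ready"}))  # True
```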
At422, once service402has been bootstrapped (e.g., an predefined image for the service402is deployed to a particular computer within the environment/region) and is ready to accept subsequent requests, the orchestrator416may transmit data (e.g., an IP address associated with the service402, an alphanumeric identifier associated with the service402, etc.) to the gateway computer408to update the routing table maintained by the gateway computer408. The gateway computer408may be configured to update the routing table with any suitable portion of the data provided by the orchestrator416and associated with the service402. Although not depicted, in some embodiments, the gateway computer408may transmit any suitable data to the user device406to indicate the service402is ready. Subsequently, a user may initiate a new API call at404and the operations of404-412may be repeated. Now that the routing table includes the data associated with the service402(e.g., the identifier for the service, an IP address for the service402, etc.), the determination made at412may indicate the API call is now routable to the service402and the API call may be forwarded to the service402at424. In some embodiments, the service402may initiate a timer for a predefined period of time. This timer may be configured to maintain knowledge of when the service402was last called. The purpose here is to be able to wind down service402if another API call is not received for the service within a predefined period of time from the last received API call. This process is described in more detail with respect toFIG.7. At426, the service may return a response to the gateway computer408. In some embodiments, this response may indicate whether the processing was successful (e.g., completed) or unsuccessful (e.g., incomplete/not allowed). The gateway computer408may forward the response to the user device406. At428, the user device406may receive the response and execute any suitable operations such as, but not limited to, displaying for the user an indication that the API call was successfully processed. In should be appreciated that, if the service402was initially available after the first API call described above, the flow400may proceed from412to424while forgoing the entire bootstrapping process and routing table update as the service would have already been available at the time of the API request. FIG.5is a flow diagram for illustrating a flow500of operations performed in response to an API call for a service (e.g., service502) that has not yet been deployed, when the service depends on another service that also has not yet been deployed, in accordance with at least one embodiment. The flow500may begin at504, where an API request may be made by user device506(e.g., an example of the user device406ofFIG.4). In some embodiments, the API can be performed via the HTTPS protocol or similar protocols. The operations performed at504,510,512,514, and518may be substantially the same as the operations discussed at404,410,412,414, and418ofFIG.4. The API call may be received by gateway computer508, that may the same or similar to the gateway computer408ofFIG.5, which may be an example of a computer that implements the WAN gateway126ofFIG.1. 
At520, in response to receiving a request to the bootstrap request transmitted at514, the orchestrator516(e.g., the CIOS Central102ofFIG.1), may perform any suitable operations for bootstrapping (e.g., loading into memory (e.g., memory of a virtual machine) and/or initializing) the absent service (e.g., service502, an example of the service402ofFIG.4). In some embodiments, the orchestrator516may utilize a predefined set of instructions associated with bootstrapping the service502. By way of example, a directed acyclic graph (DAG) associated with the service502may be obtained (e.g., from memory) and utilized to identify one or more instructions for bootstrapping the service502. FIG.6is a flow diagram illustrating an example process600for orchestrating the execution of a task (e.g., deploying/bootstrapping a resource such as service502ofFIG.5) that includes a dependency on at least one capability (e.g., a capability of a different resource), according to at least one embodiment. As illustrated inFIG.6, the process flow600includes a scheduler602(e.g. the scheduler206ofFIG.2), a worker604(e.g. the worker210ofFIG.2), and a process606(e.g. an example of the CIOS container212ofFIG.2). At608, the scheduler602may receive a task for deploying one or more infrastructure components/resources (such as service502) in a region, and the scheduler602may transmit data pertaining to the task to the worker604. In some embodiments, the scheduler602may instantiate the worker604to handle deployment of a resource (e.g., the service502). At610, the worker604may instantiate computing process606which may be configured to execute an instance of a declarative infrastructure provisioner (e.g., a declarative provisioning tool such as Terraform). At612, the computing process606may parse a configuration file associated with the deployment to generate a directed acyclic graph (DAG) for a particular resource (e.g., service502). As discussed above, each node of the DAG corresponds to a set of operations or a set of capabilities on which a next node of operations depends. The directed edges of each DAG define an order by which these operations are to be executed and/or a dependency between a subset of operations associated with a node and a subset of capabilities associated with an immediately preceding node. The operations of each node are to be executed in the order corresponding to the order of nodes and may individually correspond with one or more dependencies. Through parsing the configuration, the computing process606may identify any suitable number of implicit and/or explicit dependencies on capabilities of other resources. By way of example, the computing process606may identify that service502depends on another service (referred to as “service B”). Once identified, the computing process606builds a DAG that specifies tasks for booting and/or deploying a resource with potentially one or more nodes that correspond to a capability on which the resource depends (e.g., in accordance with the implicit and/or explicit dependencies identified during the parsing). In some embodiments, the DAG may already be generated and stored in memory. In this situation, the DAG may simply be retrieved rather than regenerated through parsing. At614, the computing process606may begin traversing the DAG, executing at least a portion of the deployment and/or booting of the particular resource as various nodes of the DAG are reached. 
In accordance with at least one node of the DAG, any suitable operations may be executed to cause a portion of functionality corresponding to the resource to become available. It may be that multiple portions of functionality corresponding to the resource become available. In some embodiments, the computing process606may transmit to the scheduler602a notification indicating one or more capabilities of the resource is now available (not depicted). At least one of the nodes of the DAG may correspond to a capability of one or more other resources. When these types of nodes are reached, the computing process606may check to see if the capability is available. If so, the computing process606may proceed with its traversal of the DAG. At616, the computing process606may reach a node of the DAG that corresponds to one or more capabilities of one or more other resources. In some embodiments, the computing process606may determine that at least one capability associated with the node is not yet available. At620, in response to determining at least one capability associated with the node is unavailable, the computing process606may transmit data to the scheduler602indicating the one or more capabilities on which the resource corresponding to the computing process606depends which have been determined to be unavailable. At622, the computing process606may exit after potentially storing state information indicating what operations and/or node of the DAG have already been completed and/or at what particular node of the DAG the computing process606was last accessing. The computing process606exits, is killed, is suspended, or otherwise ceases to execute. At624, the scheduler602may store information indicating that the particular resource was awaiting one or more particular capabilities which are needed for the resource to resume booting and/or for deployment purposes. At626, the scheduler602may receive one or more notifications that the one or more capabilities for which the resource was waiting have become available. In some embodiments, the scheduler602may receive various notification from other computing processes (e.g., threads) indicating various capabilities of corresponding resources as those capabilities become available. The scheduler602may maintain one or more records of the various capabilities that are available and/or of the various capabilities for which resources are currently waiting. The scheduler602may identify from the one or more records that the particular capability/capabilities on which the resource corresponding to computing process606is waiting have become available. Accordingly, the scheduler602may proceed to628. At628, in response to determining that the capabilities on which the resource corresponding to computing process606depends have become available, the scheduler602may return to step608, where it transmits data pertaining to the original task (e.g., deploying the resource) to the worker604. In some embodiments, the scheduler602may instantiate a new worker or utilize the previous worker604(as depicted) to continue handling the task associated with the resource. The worker604may instantiate a process (not depicted) which may be configured to execute parse the configuration file to generate (or otherwise obtain) the DAG for the resource. That process may access the stored state information to identify the node that was last access in the DAG (e.g., the node corresponding to the one or more capabilities for which the resource was waiting). 
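A compact sketch of that suspend-and-resume bookkeeping is shown below. It is an assumption-laden illustration: the CapabilityScheduler class, its method names, and the saved node index stand in for the scheduler's records of waiting resources and for the state information the exiting process persists.

```python
# Suspend/resume on missing capabilities: a deploying process records where it
# stopped and which capability it needs; when that capability is announced, the
# scheduler returns the task (and its saved position) so a worker can resume it.
class CapabilityScheduler:
    def __init__(self):
        self.available = set()       # capabilities reported as available so far
        self.waiting = {}            # capability -> list of (task, saved_node_index)

    def suspend(self, task: str, capability: str, node_index: int):
        """Record that `task` stopped at `node_index` while waiting on `capability`."""
        self.waiting.setdefault(capability, []).append((task, node_index))

    def notify_capability(self, capability: str):
        """Called when a resource publishes a capability; returns tasks ready to resume."""
        self.available.add(capability)
        return self.waiting.pop(capability, [])

scheduler = CapabilityScheduler()
scheduler.suspend("deploy-service-502", "service-B.api-ready", node_index=3)
# ...later, service B finishes bootstrapping and its capability is published:
resumable = scheduler.notify_capability("service-B.api-ready")
print(resumable)   # [('deploy-service-502', 3)] -> a worker restarts traversal at node 3
```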
Since the one or more capabilities are now available, the process may proceed with its traversal of the DAG in a similar manner as discussed above, executing the operations at each node to either execute a portion of the task or check for capabilities on which a next portion of the task depends, until the operations for deploying/bootstrapping the resource (e.g., service502) are complete. A similar process as discussed above may be performed for every resource of the task. By way of example, when deploying a system with multiple resources (e.g., multiple services), the process600may be performed on behalf of each resource in order to deploy each resource in the system. Returning toFIG.5, at520the orchestrator516may identify one or more dependencies associated with service502. As described above, a DAG associated with the service502may be obtained (e.g., generated or retrieved from memory). One or more nodes of the DAG may correspond to one or more dependencies. As a non-limiting example, the DAG for service502(or portion of the DAG associated with service502) may indicate that service502depends on another service (e.g., service B). That is, a node corresponding to service B may be provided before a node corresponding to service502. In accordance with identifying the dependency on service B, the orchestrator516may execute any suitable operations for deploying/bootstrapping service B at522. At524, once service B has been bootstrapped (e.g., a predefined image for the service B is deployed to a particular computer within the environment/region) and is ready to accept subsequent requests, the orchestrator516may transmit data (e.g., an IP address associated with the service B, an alphanumeric identifier associated with the service B, etc.) to the gateway computer508to update the routing table maintained by the gateway computer508. The gateway computer508may be configured to update the routing table with any suitable portion of the data provided by the orchestrator516and associated with the service B. Although not depicted, in some embodiments, the gateway computer508may transmit any suitable data to the user device506to indicate the service B is ready. At526, the orchestrator516may identify that service B has been deployed and may proceed to execute any suitable operations for deploying/bootstrapping the service502. At528, once service502has been bootstrapped (e.g., a predefined image for the service502is deployed to a particular computer within the environment/region) and is ready to accept subsequent requests, the orchestrator516may transmit data (e.g., an IP address associated with the service502, an alphanumeric identifier associated with the service502, etc.) to the gateway computer508to update the routing table maintained by the gateway computer508. The gateway computer508may be configured to update the routing table with any suitable portion of the data provided by the orchestrator516and associated with the service502. Although not depicted, in some embodiments, the gateway computer508may transmit any suitable data to the user device506to indicate the service502is ready. Subsequently, a user may initiate a new API call at504and the operations of504,510, and512may be repeated. Now that the routing table includes the data associated with the service502(e.g., the identifier for the service, an IP address for the service502, etc.), the determination made at512may indicate the API call is now routable to the service502and the API call may be forwarded to the service502at530.
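The routing table updates performed at524and528(and the later removal discussed in connection withFIG.7) might, under the assumption of a simple identifier-to-address mapping, look like the following sketch; the class names and the example address are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    service_id: str
    ip_address: str

class RoutingTable:
    """Gateway-side table mapping service identifiers to routable addresses."""

    def __init__(self):
        self._routes = {}

    def update(self, service_id: str, ip_address: str) -> None:
        # Called when the orchestrator reports that a service finished bootstrapping.
        self._routes[service_id] = RouteEntry(service_id, ip_address)

    def remove(self, service_id: str) -> None:
        # Called when a service is spun down (see FIG. 7).
        self._routes.pop(service_id, None)

    def lookup(self, service_id: str):
        # Returns None when the service is not yet deployed, i.e. the request is not routable.
        return self._routes.get(service_id)

table = RoutingTable()
table.update("service502", "10.0.4.17")        # hypothetical address reported by the orchestrator
assert table.lookup("service502") is not None  # the API call can now be forwarded
```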
Starting at530, operations corresponding to the service502and the operations of424-428ofFIG.4may be performed. For example, the service may return a response to the gateway computer508, which in turn may forward the response to the user device506. This response may indicate whether the processing was successful (e.g., completed) or unsuccessful (e.g., incomplete/not allowed). The user device506may receive the response and execute any suitable operations such as, but not limited to, displaying for the user an indication that the API call was successfully processed. It should be appreciated that, if the service502was initially available after the first API call described above, the flow400may proceed from512to530while forgoing the entire bootstrapping process and routing table update as the service would have already been available at the time of the API request. FIG.7is a flow diagram illustrating a flow700of operations for spinning down an unused service, according to at least one embodiment. Prior to executing the operations of flow700, it may be assumed that the service702(e.g., an example of the service402ofFIG.4) was bootstrapped and that a timer was set upon receipt of the last API call to the service702. The flow700may begin at704, where a timeout of the timer set at424ofFIG.4(and similarly at530ofFIG.5for service502) occurs. In some embodiments, the timeout may trigger an event that is received by service702. In response to identifying the timeout (e.g., receiving the event), the service702may be configured to transmit any suitable data to indicate to the orchestrator706(e.g., an example of the orchestrator416ofFIG.4) that service702is no longer in use (e.g., as indicated by not receiving any requests in a predefined period of time such as the last hour, as indicated by receiving fewer than a threshold number of requests in a previous predefined period of time, etc.). Although service702is depicted as receiving the event, it may be the case that the event is received (or the timeout is otherwise identified) by the gateway computer708(e.g., an example of the gateway computer408ofFIG.4). By way of example, the gateway computer may be configured to determine that a service is to be spun down based on periodically identifying a last time at which a last request for service702was received and determining that a difference between the last time and the current time exceeds a threshold period of time. Although a timeout is used for illustration as the mechanism for triggering the spin down of service702, any suitable trigger may be utilized. By way of example, spinning down service702may be triggered based at least in part on receiving a request (e.g., from a user device) to spin down one or more services (e.g., including service702). In some embodiments, a service may be deemed safe to spin down if it has no active resource instances under its management (or if it is otherwise determined that no other component of the cloud-computing environment depends on the service). If the service is managing one or more resource instances, the status of these resource instances may be monitored to determine when and/or whether the service is no longer managing any resource instances. As part of this monitoring, a check for these resource instances may occur periodically, according to a schedule or predefined frequency (e.g., every minute, every 30 seconds, etc.).
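The timeout-based trigger described at704can be sketched as a small idle monitor. The threshold value, class name, and request_spin_down callable below are illustrative assumptions, not the disclosed mechanism:

```python
import time

IDLE_THRESHOLD_SECONDS = 3600  # illustrative "last hour" window from the description above

class IdleMonitor:
    """Tracks the last request time for a service and requests spin-down when it goes idle."""

    def __init__(self, service_id: str, request_spin_down):
        self.service_id = service_id
        self.request_spin_down = request_spin_down  # e.g. a call toward the orchestrator
        self.last_request_at = time.time()

    def record_request(self) -> None:
        # Reset the idle window each time the gateway forwards a request to the service.
        self.last_request_at = time.time()

    def check(self) -> bool:
        # Invoked periodically (e.g. by the service itself or by the gateway computer).
        idle_for = time.time() - self.last_request_at
        if idle_for > IDLE_THRESHOLD_SECONDS:
            self.request_spin_down(self.service_id)
            return True
        return False
```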
At710, the orchestrator706may receive the data (a message and/or indicator requesting spin down) provided by the service702(or alternatively, by the gateway computer708). In response to receiving this data, the orchestrator706may be configured to execute any suitable operations for spinning down service702. In some embodiments, the operations may be predefined and provided in a DAG that identifies an order by which operations are to be performed to spin down a service. In some embodiments, the orchestrator706may determine whether other resources (e.g., other services on which the service702depended) are still needed. By way of example, the orchestrator706may identify a request rate or a number of previous requests within a previous period of time (e.g., the last ten minutes, hour, day, etc.) with which functionality of a dependent resource (e.g., a service on which service702depends) was utilized. The orchestrator706may utilize a predefined rule set to identify whether the dependent resource is still needed (e.g., by other services as suggested by a rate/number that breaches a predefined threshold). In accordance with determining that the dependent resource is no longer needed, the orchestrator706may execute operations to spin down the dependent resource as well as the service702. In some embodiments, the orchestrator706may be configured to request and receive user input indicating approval to proceed with the spin-down prior to transmitting the spin-down request for any resource (e.g., the dependent resource and/or the service702). At712, the orchestrator706can update the routing table maintained by the gateway computer708. In some embodiments, updating the routing table may include transmitting, by the orchestrator706, data indicating (e.g., by identifier(s)) that the service702(and, if applicable, one or more dependent resources) are no longer utilized. In response to receiving this data, the gateway computer708may remove or otherwise disassociate the service and any suitable number of one or more dependent services if they too were indicated as being unutilized (or at least underutilized) by the data received at712. Subsequent to completion of flow700, the flow400may be performed any suitable number of times. FIG.8is a flow diagram illustrating an example method800for orchestrating the bootstrapping of a service in response to receiving an API request, according to at least one embodiment. This method is illustrated as a logical flow diagram, each operation of which can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process.
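The rule-based decision at710-712(spin down the service and any dependent resources whose observed request rate falls below a predefined threshold, then remove them from the gateway's routing table) might be sketched as follows; the threshold value and the callables are illustrative assumptions, and routing_table is assumed to behave like the RoutingTable sketch above:

```python
REQUEST_RATE_THRESHOLD = 1.0  # illustrative: requests per minute below which a dependency is "unneeded"

def plan_spin_down(service_id, dependencies, recent_request_rate, routing_table):
    """Decide which resources to spin down and remove their routing-table entries.

    dependencies: identifiers of services that service_id depended on.
    recent_request_rate: callable returning the requests/minute observed for a
    service from callers other than service_id.
    Returns the list of identifiers removed from the gateway's routing table.
    """
    to_remove = [service_id]
    for dep in dependencies:
        if recent_request_rate(dep) < REQUEST_RATE_THRESHOLD:
            # The predefined rule set suggests no other service still needs this dependency.
            to_remove.append(dep)
    for sid in to_remove:
        routing_table.remove(sid)  # disassociate the service from the routing table
    return to_remove
```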
Additionally, the method800may be performed under the control of one or more computing devices or computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a computer-readable storage medium, for example, in the form of a computer program including a plurality of instructions executable by one or more processors. In some embodiments, the method800may be performed by a plurality of processors in parallel. The computer-readable storage medium may be non-transitory. In some embodiments, the method800is performed by an orchestrator (e.g., the CIOS Regional202via scheduler206, worker210, and/or CIOS container212ofFIG.2, the orchestrator416,516, and/or706ofFIGS.4,5, and7, respectively). The method800may begin at802, where a request (e.g., an API call provided via an HTTP request) comprising an identifier for a computing component (e.g., service402ofFIG.4) of a cloud-computing environment is received (e.g., by the gateway computer408ofFIG.4). At804, the computing device may determine whether the identifier (or any suitable information corresponding to the service associated with that identifier) exists in a routing table that is accessible to the computing device (e.g., a routing table maintained/managed by the gateway computer408ofFIG.4). At806, in accordance with the identifier existing in the routing table, the computing device (e.g., the gateway computer408) may forward the request to the computing component (e.g., the service402). At808, in accordance with the identifier being missing from the routing table, an error code may be transmitted by the computing device in response to the request. In some embodiments, the error code may indicate that the computing component is unavailable. The error code may, for example, be transmitted to a user device from which the request was initiated. At810, in accordance with the identifier being missing from the routing table, a bootstrap request corresponding to the computing component may be transmitted (e.g., by the computing device to a deployment orchestrator (e.g., orchestrator416) of the cloud-computing environment). In some embodiments, the deployment orchestrator is configured to deploy the computing component to the cloud-computing environment based at least in part on the bootstrap request. The particular operations of such a deployment may be identified from a DAG as described above in connection withFIGS.5and6. At812, a subsequent request comprising the identifier may be received. In some embodiments, the identifier may now be stored in the routing table maintained by the computing device, the identifier having been added after bootstrapping was complete. At814, the subsequent request is transmitted (e.g., by the gateway computer408) to the computing component (e.g., service402) for processing. FIG.9is a flow diagram for illustrating a flow900of operations performed in response to ordering a service that has not yet been deployed, in accordance with at least one embodiment. In some embodiments, a predefined set of services may be previously deployed in the region and/or accessible to the user device by API call. Service902may not be included in that initial predefined set of deployed services. Alternatively, service902may have previously been operational but spun down and is no longer accessible.
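Before turning to the flow ofFIG.9, the gateway-side logic of blocks802-810can be sketched as a single handler. The function and parameter names below are illustrative assumptions; the routing-table entry is assumed to behave like the RoutingTable sketch above:

```python
from http import HTTPStatus

def handle_api_call(service_id, routing_table, forward, send_bootstrap_request):
    """Gateway-side handling sketched from blocks 802-810.

    forward: callable that proxies the request to the deployed service.
    send_bootstrap_request: callable that asks the deployment orchestrator to deploy it.
    """
    entry = routing_table.lookup(service_id)
    if entry is not None:
        # 806: the identifier exists in the routing table; forward the request.
        return forward(entry.ip_address)
    # 808: return an error code indicating the component is unavailable, and
    # 810: ask the orchestrator to bootstrap the component.
    send_bootstrap_request(service_id)
    return HTTPStatus.SERVICE_UNAVAILABLE

# A later request with the same identifier (812-814) succeeds once the orchestrator
# has deployed the service and the routing table has been updated.
```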
In either scenario, at the time the operations of flow900are commenced, service902is assumed to be inaccessible/not operational (e.g., service902is not deployed). The flow900may begin at904, where a service (e.g., service902) may be ordered. In some embodiments, a user may access a user interface via user device906. An example interface will be discussed in further detail below in connection withFIG.11. In accordance with user input provided at the interface to request that the service902be ordered, an API request may be made by user device906(an example of the user device406ofFIG.4). In some embodiments, the API request can be performed via the HTTPS protocol or similar protocols. The request may include any suitable information such as an identifier of the user, user credentials, an identifier for the service associated with the request, or the like. The API call may be received by gateway computer908, which may be the same as or similar to the gateway computer408ofFIG.4and which may be an example of a computer that implements the WAN gateway126ofFIG.1. Similar to the gateway computer discussed above, the gateway computer908may be configured to maintain a routing table for previously deployed regional services (e.g., the cloud services A-N ofFIG.2). The routing table may include Internet Protocol (IP) addresses for each service and/or infrastructure component within the environments provided by the architectures ofFIGS.1and2. The routing table may contain any suitable information needed to forward a data packet along toward its destination. For example, the routing table may include a network identifier (e.g., an IP address) for the destination, a subnet mask that is used to match the destination IP address to the network ID, and any suitable information configured to enable the data packet to be forwarded toward the destination. At910, the gateway computer908(or any suitable component ofFIG.2, such as ingress proxy218, CIOS regional202, or the like) may authenticate the user of the API call and determine whether the user is authorized to order the service. As described in other examples above, authentication may include making an API call to one or more services (e.g., an identity service) configured to maintain permissions and/or identity data for one or more users of the system. By way of example, the gateway computer908may call an identity service configured with permissions and/or identity data that may be utilized with the identifier of the user as received in the request to identify the user and/or one or more permissions policies associated with that user. As another example, the gateway computer908may maintain user data associated with any suitable number of users and may authenticate the user against that user data using any suitable information obtained from the API call (e.g., an identifier associated with the user). At912, a determination may be made as to whether the request is allowed. This determination may include any suitable combination of identifying 1) if the user is who they purport to be, and 2) if the user has authorization to order a resource (e.g., service902). In some embodiments, user data obtained from the request such as user credentials may be utilized to obtain permissions data that indicates the particular services or types of services that are orderable by the user.
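The routing table contents described above (a network identifier, a subnet mask used to match the destination IP address to the network ID, and forwarding information) can be illustrated with a small matching sketch; the addresses and next-hop labels are purely illustrative:

```python
import ipaddress
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Route:
    network: ipaddress.IPv4Network   # network identifier plus subnet mask, e.g. 10.0.4.0/24
    next_hop: str                    # information used to forward the packet toward the destination

def match_route(destination_ip: str, routes: List[Route]) -> Optional[Route]:
    """Return the most specific route whose network contains the destination address."""
    addr = ipaddress.IPv4Address(destination_ip)
    candidates = [r for r in routes if addr in r.network]
    return max(candidates, key=lambda r: r.network.prefixlen, default=None)

routes = [
    Route(ipaddress.IPv4Network("10.0.0.0/16"), "core-gateway"),
    Route(ipaddress.IPv4Network("10.0.4.0/24"), "service902-host"),  # hypothetical entry
]
print(match_route("10.0.4.17", routes).next_hop)  # -> service902-host
```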
If the permissions data indicates that the user is not allowed to order the requested service (or that type of service), the flow900may proceed to914, where an error code may be returned to the user device906by the gateway computer908. The error code may be any suitable value that indicates the user is not allowed to order service902. At916, the user device906, in response to receiving the error code, may be configured to present any suitable data at the user interface to indicate the order was not successful/not allowed. Alternatively, if the order for service902is allowed as determined at912, the flow900may proceed to918, where a return status may be forwarded to the user device906and data may be sent to orchestrator916(e.g., CIOS Central102ofFIG.1) indicating that a bootstrap of the requested service (e.g., service902) is to be initiated. At920, the user device906may be configured to perform one or more operations to display the status received from the gateway computer908. By way of example, the user device906may present a status such as "Requested" adjacent to an identifier for the service902to indicate the service902has been ordered but is not yet operational. At922, the orchestrator916may receive the bootstrap request and perform any suitable operations for bootstrapping (e.g., loading into memory (e.g., memory of a virtual machine) and/or initializing) the absent service (e.g., service902). In some embodiments, the orchestrator916may utilize a predefined set of instructions associated with bootstrapping the service902. The orchestrator916may traverse a DAG associated with the service902to identify and execute the operations for booting and/or deploying service902to the cloud-computing environment in which the order was received. This process may be the same as or similar to that described above in connection with the DAG discussed in the description ofFIG.4. At924, once service902has been bootstrapped (e.g., a predefined image for the service902is deployed to a particular computer within the environment/region) and is ready to accept subsequent requests, the orchestrator916may transmit data (e.g., an IP address associated with the service902, an alphanumeric identifier associated with the service902, etc.) to the gateway computer908to update the routing table maintained by the gateway computer908. The gateway computer908may be configured to update the routing table with any suitable portion of the data provided by the orchestrator916and associated with the service902. In some embodiments, the gateway computer908may transmit any suitable data to the user device906to indicate the service902is ready. The user device906may present the status upon receipt (depicted at920). Subsequently, a user may initiate an API call at926corresponding to the service902. Operations corresponding to blocks404-412may be performed with respect to the service902to identify whether the user is who they purport to be and is authorized to make the API call. If not, an error code may be provided and displayed to the user at the user device906. Alternatively, the API call may be routed to the service902at928by the gateway computer908now that the routing table includes the data associated with the service902(e.g., the identifier for the service, an IP address for the service902, etc.). At930, the service902may process the API call. In some embodiments, the service902may initiate a timer for a predefined period of time. This timer may be configured to maintain knowledge of when the service902was last called.
If the service902is not utilized again for a predefined period of time, the process described in connection withFIG.7may be executed to spin down the service902. The service902may return a response to the gateway computer908as a result of processing the API call. In some embodiments, this response may indicate whether the processing was successful (e.g., completed) or unsuccessful (e.g., incomplete/not allowed). The gateway computer908may forward the response to the user device906. The user device906may receive the response and execute any suitable operations such as, but not limited to, displaying for the user an indication that the API call was successfully processed. FIG.10is a flow diagram for illustrating a flow1000of operations performed in response to ordering a service that has not yet been deployed when the service depends on another service that also has not yet been deployed, in accordance with at least one embodiment. In some embodiments, a predefined set of services may be previously deployed in the region and/or accessible to the user device by API call. Service1002may not be included in that initial predefined set of deployed services. Alternatively, service1002may have previously been operational but spun down and is no longer accessible. In either scenario, at the time the operations of flow1000are commenced, service1002is assumed to be inaccessible/not operational (e.g., service1002is not deployed). The flow1000may begin at1004, where a service (e.g., service1002, an example of the service902ofFIG.9) may be ordered. In some embodiments, a user may access a user interface via user device1006. An example interface will be discussed in further detail below in connection withFIG.11. In accordance with user input provided at the interface, an API call may be made by user device1006(an example of the user device406ofFIG.4). In some embodiments, the API call can be performed via the HTTPS protocol or similar protocols. The request may include any suitable information such as an identifier of the user, user credentials, an identifier for the service associated with the request, or the like. The API call may be received by gateway computer1008, which may the same or similar to the gateway computer408ofFIG.4and which may be an example of a computer that implements the WAN gateway126ofFIG.1. Similar to the gateway computer discussed above, the gateway computer1008may be configured to maintain a routing table for previously deployed regional services (e.g., the cloud services A-N ofFIG.2). The operations performed at1004-1020may generally correspond to the same or similar operations as those discussed at904-922ofFIG.9, and will not be discussed again for brevity. At1022, the orchestrator1023may identify one or more dependencies associated with service1002. A DAG associated with the service1002may be obtained (e.g., generated or retrieved from memory). One or more nodes of the DAG may correspond to one or more dependencies. As a non-limiting example, the DAG for service1002(or portion of the DAG associated with service1002) may indicate that service1002depends on another service (e.g., service B). That is, a node within the DAG corresponding to service B may be provided before a node corresponding to service1002. In accordance with identifying the dependency on service B, the orchestrator1023may execute any suitable operations for deploying/bootstrapping service B at1024. 
At1026, once service B has been bootstrapped (e.g., a predefined image for the service B is deployed to a particular computer within the environment/region) and is ready to accept subsequent requests, the orchestrator1023may transmit data (e.g., an IP address associated with the service B, an alphanumeric identifier associated with the service B, etc.) to the gateway computer1008to update the routing table maintained by the gateway computer1008. The gateway computer1008may be configured to update the routing table with any suitable portion of the data provided by the orchestrator1023and associated with the service B. In some embodiments, the gateway computer1008may transmit any suitable data to the user device1006to indicate the service B is ready. The user device1006may present the status upon receipt (depicted at1020). At1028, the orchestrator1023may identify that service B has been deployed and may proceed to execute any suitable operations for deploying/bootstrapping the service1002. At1026, once service1002has been bootstrapped (e.g., a predefined image for the service1002is deployed to a particular computer within the environment/region) and is ready to accept subsequent requests, the orchestrator1023may transmit data (e.g., an IP address associated with the service1002, an alphanumeric identifier associated with the service1002, etc.) to the gateway computer1008to update the routing table maintained by the gateway computer1008. The gateway computer1008may be configured to update the routing table with any suitable portion of the data provided by the orchestrator1023and associated with the service1002. In some embodiments, the gateway computer1008may transmit any suitable data to the user device1006to indicate the service1002is ready. The user device1006may present the status upon receipt (depicted at1020). The operations described at1024and/or1028may be repeated any suitable number of times depending on how many dependencies service1002has on other capabilities/services in the system. Subsequently, a user may initiate a new API call at1030to the service1002. Now that the routing table includes the data associated with the service1002(e.g., the identifier for the service, an IP address for the service1002, etc.), the gateway computer1008may route the API call to the service1002at1032. At1034, the service1002may receive and process the API call and then return a response to the gateway computer1008, which in turn may forward the response to the user device1006. This response may indicate whether the processing was successful (e.g., completed) or unsuccessful (e.g., incomplete/not allowed). The user device1006may receive the response and execute any suitable operations such as, but not limited to, displaying for the user an indication that the API call was successfully processed. FIG.11is an example user interface1100for, among other things, initiating the operations discussed above, in accordance with at least one embodiment. In some embodiments, the user interface1100may be hosted by the gateway computers discussed in connection with the figures above or any suitable computer of the cloud-computing environment. If hosted by another computer, that computer may be configured to render user interface1100at the user device and transmit API calls to the gateway computers ofFIGS.9and10.
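The dependency handling at1022-1028(deploy service B before service1002, repeating for however many dependencies exist) amounts to ordering resources so that dependencies are bootstrapped first. A minimal, illustrative ordering helper, assuming dependencies are expressed as a simple mapping, might look like this:

```python
def deployment_order(services: dict) -> list:
    """Return an order in which to bootstrap services so that dependencies come first.

    services maps a service identifier to the identifiers it depends on,
    e.g. {"service1002": ["service-B"], "service-B": []}.
    """
    order, visiting, done = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError(f"dependency cycle involving {name}")
        visiting.add(name)
        for dep in services.get(name, []):
            visit(dep)
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in services:
        visit(name)
    return order

# Example mirroring 1022-1028: service B is bootstrapped before service1002.
print(deployment_order({"service1002": ["service-B"], "service-B": []}))
# -> ['service-B', 'service1002']
```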
If the user interface1100is hosted by another computer different from the gateway computers, this computer may be configured to receive any suitable error codes, status values, or the like from the gateway computer and present status information corresponding to that data at the user interface1100. The user interface1100may include a search area1102that includes a search box1104and button1106. The user may utilize the search box1104to enter a search query (e.g., some portion of a service identifier). When the button1106is selected, a search query may be executed using the input provided at the search box1104. The search query may be executed against a database of orderable services to determine one or more services that match the query. In some embodiments, services that have previously been ordered and/or are currently active may be filtered from the list of search results. In some embodiments, the user may be presented (e.g., via user interface1100or another user interface) the list of search results from which the user may select one or more services to order. The search result list may look similar to the list of orderable services1108discussed below and may provide similar functionality, via selections provided within the list, as that discussed below. The user interface1100may include a status area1110. Status area1110may identify any suitable number of services that are currently active, previously ordered, or winding down. In some embodiments, the status area1110may include identifiers for each service within column1112and a corresponding status for each service within column1114. The status values may be numerous and vary based on the granularity of status that is desired. Some example status values could include "Ready," "Ordered," "Bootstrap Initiated," and "Spinning Down." In some embodiments, a menu or option may be provided for requesting that one or more services be spun down. Although not depicted inFIG.11, this menu or option may be presented upon selecting one or more services within status area1110, or the menu/option may be otherwise provided via user interface1100or any suitable interface. The gateway computers of the figures discussed above may be configured to provide status of the service at any suitable time, not necessarily only at the times and triggers discussed above. Thus, the gateway computer may provide status immediately after an order is received to display "Ordered" at the column1114on a line corresponding to the service ordered. When the orchestrator initiates the bootstrap of the service (or when the gateway computer transmits the bootstrap request), the gateway computer may provide status to the user device which may be presented at the user interface1100within column1114as "Bootstrap Initiated." When the gateway computer (or any suitable computing component) determines that a service is to be spun down as discussed in connection withFIG.7, a status value for the service may be provided and presented at the user interface1100as "Spinning Down" or "Deactivating." When a service is no longer active, it may be placed back in the set of orderable services depending on a predefined set of rules for identifying when a service is to be orderable. If orderable, the service may be removed from the status area1110and added to the list of orderable services1108. The list of orderable services1108may include any suitable number of services.
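The search behavior described for search area1102-1106(matching a query against the database of orderable services and filtering out services that are already ordered or active) can be sketched as follows; the status values reuse the examples given above, while the catalog, statuses, and function names are illustrative assumptions:

```python
from typing import Dict, List

ACTIVE_STATUSES = {"Ready", "Ordered", "Bootstrap Initiated", "Spinning Down"}

def search_orderable(query: str, catalog: List[str], statuses: Dict[str, str]) -> List[str]:
    """Return orderable services matching a search query.

    Services that have previously been ordered and/or are currently active are
    filtered from the results, mirroring the behavior described above.
    """
    query = query.lower()
    return [
        name for name in catalog
        if query in name.lower() and statuses.get(name) not in ACTIVE_STATUSES
    ]

statuses = {"Service 1": "Ready", "Service 2": "Spinning Down"}
catalog = ["Service 1", "Service 2", "Service 3", "Service 4"]
print(search_orderable("service", catalog, statuses))  # -> ['Service 3', 'Service 4']
```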
A set of predefined rules may dictate when a service is to be made available for order, and thus, when the service will appear in the list of orderable services1108. In some embodiments, the particular services that are orderable may depend on a number of factors such as the type of service, the service identifier associated with the service, the particular user and/or permissions associated with the user, and the like. The user may select any service within the list of orderable services. In some embodiments, when the user selects area1116(e.g., left clicks within area1116), for example, description area1118may be presented. In some embodiments, description area1118may present a predefined description of the service. This description may describe functionality of the service and/or various dependencies associated with the service such that the user may be informed of other services and/or resources on which the selected service depends. Description area1118may include an order button1120and a cancel button1122. Upon selection of the order button1120, an API call corresponding to ordering a service may be transmitted in a similar manner as described above at904and1004ofFIGS.9and10. If the user decides he does not wish to order the selected service, he may select cancel button1122to cause the description area1118for the selected service to be removed. FIG.12illustrates an example flow diagram showing a method1200for performing operations for booting a resource of a cloud-computing system, according to certain embodiments of the present disclosure. This process is illustrated as a logical flow diagram, each operation of which can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Additionally, the method1200may be performed under the control of one or more computing devices or computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a computer-readable storage medium, for example, in the form of a computer program including a plurality of instructions executable by one or more processors. In some embodiments, the method1200may be performed by a plurality of processors in parallel. The computer-readable storage medium may be non-transitory. In some embodiments, the method1200is performed by a computing device (e.g., the gateway computers ofFIGS.9and10). The method1200may begin at block1202, where a first set of computing components already deployed within the cloud-computing environment may be identified by the computing device of the cloud-computing environment. 
By way of example, a request message may be transmitted to the orchestrator (e.g., orchestrator1023ofFIG.10) requesting a list of all services already deployed in the cloud-computing environment (and associated with the user). Data provided in the request may include an identifier for the user, user credentials, or the like. The orchestrator may be configured to compile this list (potentially based at least in part on the data provided in the request) and transmit a list of ordered or active services back to the computing device, which may be configured to cause the presentation of this information at the status area1110ofFIG.11. For example, the computing device may transmit (directly, or through a computing device configured to host the user interface1100) to a user device one or more status indicators corresponding to the list of ordered or active services. Upon receipt, the user device may be configured to present this list within status area1110ofFIG.11. At1204, a second set of computing components available for deployment within the cloud-computing environment may be identified by the computing device. By way of example, a request message may be transmitted to the orchestrator (e.g., orchestrator1023ofFIG.10) requesting a list of all services that are available for order. Data provided in the request may include an identifier for the user, user credentials, or the like. The orchestrator may be configured to compile this list (potentially based at least in part on the data provided in the request) and transmit a list of orderable services back to the computing device, which may be configured to cause the presentation of this information within area1108ofFIG.11. For example, the computing device may transmit (directly, or through a computing device configured to host the user interface1100) a list of orderable services. Upon receipt, the user device may be configured to present this list within area1108ofFIG.11. At1206, a request for deployment may be received. By way of example, the user may make a selection (e.g., of service 4) from a user interface (e.g., user interface1100ofFIG.11). The selection may result in an API call being transmitted from the user device to the gateway computer (directly, or through a computing device configured to host the user interface) to order the service. The request for deployment (also referred to as an order request) may identify a particular computing component of the second set of computing components available for deployment (e.g., service 4 ofFIG.11, an example of services902and1002ofFIGS.9and10, respectively). At1208, a bootstrap request corresponding to the particular computing component requested may be transmitted by the computing device to a deployment orchestrator of the cloud-computing environment (e.g., the orchestrators916and1023ofFIGS.9and10, respectively). In some embodiments, the deployment orchestrator may be configured to deploy the particular computing component to the cloud-computing environment based at least in part on the bootstrap request. Thus, the deployment orchestrator may deploy the requested computing component in response to the bootstrap request. At1210, a user interface (e.g., user interface1100) that presents a first set of status indicators for the first set of computing components already deployed within the cloud-computing environment and a status indicator corresponding to the particular computing component may be presented (e.g., at the user device).
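Blocks1202-1210can be summarized in a short sketch. The orchestrator and gateway client objects below, and every method called on them, are illustrative assumptions rather than a disclosed API:

```python
def handle_order_flow(orchestrator, gateway, user):
    """Sketch of blocks 1202-1210: list deployed and orderable services, then order one."""
    # 1202: components already deployed (shown in status area 1110).
    deployed = orchestrator.list_deployed(user_id=user)
    # 1204: components available for deployment (shown in list 1108).
    orderable = orchestrator.list_orderable(user_id=user)

    # 1206: the user selects one of the orderable components, e.g. "Service 4".
    selection = orderable[0]

    # 1208: the gateway transmits a bootstrap request to the deployment orchestrator.
    gateway.send_bootstrap_request(selection)

    # 1210: status indicators for deployed components plus the newly ordered one.
    statuses = {name: "Ready" for name in deployed}
    statuses[selection] = "Ordered"
    return statuses
```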
As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc. In most cases, a cloud computing model will require the participation of a cloud provider. The cloud provider may be, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services. In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines that can be spun up on demand), or the like. In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first. In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively.
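A declaratively defined topology of the kind described above can be illustrated as simple structured data from which an ordered workflow is derived; the resource names and the dictionary format below are illustrative assumptions, not a disclosed configuration-file format:

```python
from typing import List

# A declarative description of the desired topology, analogous in spirit to the
# configuration files discussed above (what components exist and what they depend on).
topology = {
    "vcn":           {"depends_on": []},
    "load_balancer": {"depends_on": ["vcn"]},
    "database":      {"depends_on": ["vcn"]},
    "app_vm":        {"depends_on": ["load_balancer", "database"]},
}

def generate_workflow(topology: dict) -> List[str]:
    """Derive an ordered workflow that creates each element after its dependencies."""
    remaining = dict(topology)
    workflow: List[str] = []
    while remaining:
        ready = [name for name, spec in remaining.items()
                 if all(dep in workflow for dep in spec["depends_on"])]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        for name in sorted(ready):
            workflow.append(name)
            del remaining[name]
    return workflow

print(generate_workflow(topology))
# -> ['vcn', 'database', 'load_balancer', 'app_vm']
```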
In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files. In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more inbound/outbound traffic group rules provisioned to define how the inbound and/or outbound traffic of the network will be set up and one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve. In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned. FIG.13is a block diagram1300illustrating an example pattern of an IaaS architecture, according to at least one embodiment. Service operators1302can be communicatively coupled to a secure host tenancy1304that can include a virtual cloud network (VCN)1306and a secure host subnet1308. In some examples, the service operators1302may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled. Alternatively, the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. The client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS. Alternatively, or in addition, client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN1306and/or the Internet. 
The VCN1306can include a local peering gateway (LPG)1310that can be communicatively coupled to a secure shell (SSH) VCN1312via an LPG1310contained in the SSH VCN1312. The SSH VCN1312can include an SSH subnet1314, and the SSH VCN1312can be communicatively coupled to a control plane VCN1316via the LPG1310contained in the control plane VCN1316. Also, the SSH VCN1312can be communicatively coupled to a data plane VCN1318via an LPG1310. The control plane VCN1316and the data plane VCN1318can be contained in a service tenancy1319that can be owned and/or operated by the IaaS provider. The control plane VCN1316can include a control plane demilitarized zone (DMZ) tier1320that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep breaches contained. Additionally, the DMZ tier1320can include one or more load balancer (LB) subnet(s)1322, a control plane app tier1324that can include app subnet(s)1326, a control plane data tier1328that can include database (DB) subnet(s)1330(e.g., frontend DB subnet(s) and/or backend DB subnet(s)). The LB subnet(s)1322contained in the control plane DMZ tier1320can be communicatively coupled to the app subnet(s)1326contained in the control plane app tier1324and an Internet gateway1334that can be contained in the control plane VCN1316, and the app subnet(s)1326can be communicatively coupled to the DB subnet(s)1330contained in the control plane data tier1328and a service gateway1336and a network address translation (NAT) gateway1338. The control plane VCN1316can include the service gateway1336and the NAT gateway1338. The control plane VCN1316can include a data plane mirror app tier1340that can include app subnet(s)1326. The app subnet(s)1326contained in the data plane mirror app tier1340can include a virtual network interface controller (VNIC)1342that can execute a compute instance1344. The compute instance1344can communicatively couple the app subnet(s)1326of the data plane mirror app tier1340to app subnet(s)1326that can be contained in a data plane app tier1346. The data plane VCN1318can include the data plane app tier1346, a data plane DMZ tier1348, and a data plane data tier1350. The data plane DMZ tier1348can include LB subnet(s)1322that can be communicatively coupled to the app subnet(s)1326of the data plane app tier1346and the Internet gateway1334of the data plane VCN1318. The app subnet(s)1326can be communicatively coupled to the service gateway1336of the data plane VCN1318and the NAT gateway1338of the data plane VCN1318. The data plane data tier1350can also include the DB subnet(s)1330that can be communicatively coupled to the app subnet(s)1326of the data plane app tier1346. The Internet gateway1334of the control plane VCN1316and of the data plane VCN1318can be communicatively coupled to a metadata management service1352that can be communicatively coupled to public Internet1354. Public Internet1354can be communicatively coupled to the NAT gateway1338of the control plane VCN1316and of the data plane VCN1318. The service gateway1336of the control plane VCN1316and of the data plane VCN1318can be communicatively couple to cloud services1356. In some examples, the service gateway1336of the control plane VCN1316or of the data plane VCN1318can make application programming interface (API) calls to cloud services1356without going through public Internet1354. 
The API calls to cloud services1356from the service gateway1336can be one-way: the service gateway1336can make API calls to cloud services1356, and cloud services1356can send requested data to the service gateway1336. But, cloud services1356may not initiate API calls to the service gateway1336. In some examples, the secure host tenancy1304can be directly connected to the service tenancy1319, which may be otherwise isolated. The secure host subnet1308can communicate with the SSH subnet1314through an LPG1310that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet1308to the SSH subnet1314may give the secure host subnet1308access to other entities within the service tenancy1319. The control plane VCN1316may allow users of the service tenancy1319to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN1316may be deployed or otherwise used in the data plane VCN1318. In some examples, the control plane VCN1316can be isolated from the data plane VCN1318, and the data plane mirror app tier1340of the control plane VCN1316can communicate with the data plane app tier1346of the data plane VCN1318via VNICs1342that can be contained in the data plane mirror app tier1340and the data plane app tier1346. In some examples, users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet1354that can communicate the requests to the metadata management service1352. The metadata management service1352can communicate the request to the control plane VCN1316through the Internet gateway1334. The request can be received by the LB subnet(s)1322contained in the control plane DMZ tier1320. The LB subnet(s)1322may determine that the request is valid, and in response to this determination, the LB subnet(s)1322can transmit the request to app subnet(s)1326contained in the control plane app tier1324. If the request is validated and requires a call to public Internet1354, the call to public Internet1354may be transmitted to the NAT gateway1338that can make the call to public Internet1354. Memory that may be desired to be stored by the request can be stored in the DB subnet(s)1330. In some examples, the data plane mirror app tier1340can facilitate direct communication between the control plane VCN1316and the data plane VCN1318. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN1318. Via a VNIC1342, the control plane VCN1316can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN1318. In some embodiments, the control plane VCN1316and the data plane VCN1318can be contained in the service tenancy1319. In this case, the user, or the customer, of the system may not own or operate either the control plane VCN1316or the data plane VCN1318. Instead, the IaaS provider may own or operate the control plane VCN1316and the data plane VCN1318, both of which may be contained in the service tenancy1319. This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users', or other customers', resources. 
Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet1354, which may not have a desired level of threat prevention, for storage. In other embodiments, the LB subnet(s)1322contained in the control plane VCN1316can be configured to receive a signal from the service gateway1336. In this embodiment, the control plane VCN1316and the data plane VCN1318may be configured to be called by a customer of the IaaS provider without calling public Internet1354. Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy1319, which may be isolated from public Internet1354. FIG.14is a block diagram1400illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators1402(e.g. service operators1302ofFIG.13) can be communicatively coupled to a secure host tenancy1404(e.g. the secure host tenancy1304ofFIG.13) that can include a virtual cloud network (VCN)1406(e.g. the VCN1306ofFIG.13) and a secure host subnet1408(e.g. the secure host subnet1308ofFIG.13). The VCN1406can include a local peering gateway (LPG)1410(e.g. the LPG1310ofFIG.13) that can be communicatively coupled to a secure shell (SSH) VCN1412(e.g. the SSH VCN1312ofFIG.13) via an LPG1310contained in the SSH VCN1412. The SSH VCN1412can include an SSH subnet1414(e.g. the SSH subnet1314ofFIG.13), and the SSH VCN1412can be communicatively coupled to a control plane VCN1416(e.g. the control plane VCN1316ofFIG.13) via an LPG1410contained in the control plane VCN1416. The control plane VCN1416can be contained in a service tenancy1419(e.g. the service tenancy1319ofFIG.13), and the data plane VCN1418(e.g. the data plane VCN1318ofFIG.13) can be contained in a customer tenancy1421that may be owned or operated by users, or customers, of the system. The control plane VCN1416can include a control plane DMZ tier1420(e.g. the control plane DMZ tier1320ofFIG.13) that can include LB subnet(s)1422(e.g. LB subnet(s)1322ofFIG.13), a control plane app tier1424(e.g. the control plane app tier1324ofFIG.13) that can include app subnet(s)1426(e.g. app subnet(s)1326ofFIG.13), a control plane data tier1428(e.g. the control plane data tier1328ofFIG.13) that can include database (DB) subnet(s)1430(e.g. similar to DB subnet(s)1330ofFIG.13). The LB subnet(s)1422contained in the control plane DMZ tier1420can be communicatively coupled to the app subnet(s)1426contained in the control plane app tier1424and an Internet gateway1434(e.g. the Internet gateway1334ofFIG.13) that can be contained in the control plane VCN1416, and the app subnet(s)1426can be communicatively coupled to the DB subnet(s)1430contained in the control plane data tier1428and a service gateway1436(e.g. the service gateway ofFIG.13) and a network address translation (NAT) gateway1438(e.g. the NAT gateway1338ofFIG.13). The control plane VCN1416can include the service gateway1436and the NAT gateway1438. The control plane VCN1416can include a data plane mirror app tier1440(e.g. the data plane mirror app tier1340ofFIG.13) that can include app subnet(s)1426. The app subnet(s)1426contained in the data plane mirror app tier1440can include a virtual network interface controller (VNIC)1442(e.g. the VNIC of1342) that can execute a compute instance1444(e.g. similar to the compute instance1344ofFIG.13). 
The compute instance1444can facilitate communication between the app subnet(s)1426of the data plane mirror app tier1440and the app subnet(s)1426that can be contained in a data plane app tier1446(e.g. the data plane app tier1346ofFIG.13) via the VNIC1442contained in the data plane mirror app tier1440and the VNIC1442contained in the data plane app tier1446. The Internet gateway1434contained in the control plane VCN1416can be communicatively coupled to a metadata management service1452(e.g. the metadata management service1352ofFIG.13) that can be communicatively coupled to public Internet1454(e.g. public Internet1354ofFIG.13). Public Internet1454can be communicatively coupled to the NAT gateway1438contained in the control plane VCN1416. The service gateway1436contained in the control plane VCN1416can be communicatively couple to cloud services1456(e.g. cloud services1356ofFIG.13). In some examples, the data plane VCN1418can be contained in the customer tenancy1421. In this case, the IaaS provider may provide the control plane VCN1416for each customer, and the IaaS provider may, for each customer, set up a unique compute instance1444that is contained in the service tenancy1419. Each compute instance1444may allow communication between the control plane VCN1416, contained in the service tenancy1419, and the data plane VCN1418that is contained in the customer tenancy1421. The compute instance1444may allow resources, that are provisioned in the control plane VCN1416that is contained in the service tenancy1419, to be deployed or otherwise used in the data plane VCN1418that is contained in the customer tenancy1421. In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy1421. In this example, the control plane VCN1416can include the data plane mirror app tier1440that can include app subnet(s)1426. The data plane mirror app tier1440can reside in the data plane VCN1418, but the data plane mirror app tier1440may not live in the data plane VCN1418. That is, the data plane mirror app tier1440may have access to the customer tenancy1421, but the data plane mirror app tier1440may not exist in the data plane VCN1418or be owned or operated by the customer of the IaaS provider. The data plane mirror app tier1440may be configured to make calls to the data plane VCN1418but may not be configured to make calls to any entity contained in the control plane VCN1416. The customer may desire to deploy or otherwise use resources in the data plane VCN1418that are provisioned in the control plane VCN1416, and the data plane mirror app tier1440can facilitate the desired deployment, or other usage of resources, of the customer. In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN1418. In this embodiment, the customer can determine what the data plane VCN1418can access, and the customer may restrict access to public Internet1454from the data plane VCN1418. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN1418to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN1418, contained in the customer tenancy1421, can help isolate the data plane VCN1418from other customers and from public Internet1454. In some embodiments, cloud services1456can be called by the service gateway1436to access services that may not exist on public Internet1454, on the control plane VCN1416, or on the data plane VCN1418. 
The connection between cloud services1456and the control plane VCN1416or the data plane VCN1418may not be live or continuous. Cloud services1456may exist on a different network owned or operated by the IaaS provider. Cloud services1456may be configured to receive calls from the service gateway1436and may be configured to not receive calls from public Internet1454. Some cloud services1456may be isolated from other cloud services1456, and the control plane VCN1416may be isolated from cloud services1456that may not be in the same region as the control plane VCN1416. For example, the control plane VCN1416may be located in “Region 1,” and cloud service “Deployment 13,” may be located in Region 1 and in “Region 2.” If a call to Deployment 13 is made by the service gateway1436contained in the control plane VCN1416located in Region 1, the call may be transmitted to Deployment 13 in Region 1. In this example, the control plane VCN1416, or Deployment 13 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 13 in Region 2. FIG.15is a block diagram1500illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators1502(e.g. service operators1302ofFIG.13) can be communicatively coupled to a secure host tenancy1504(e.g. the secure host tenancy1304ofFIG.13) that can include a virtual cloud network (VCN)1506(e.g. the VCN1306ofFIG.13) and a secure host subnet1508(e.g. the secure host subnet1308ofFIG.13). The VCN1506can include an LPG1510(e.g. the LPG1310ofFIG.13) that can be communicatively coupled to an SSH VCN1512(e.g. the SSH VCN1312ofFIG.13) via an LPG1510contained in the SSH VCN1512. The SSH VCN1512can include an SSH subnet1514(e.g. the SSH subnet1314ofFIG.13), and the SSH VCN1512can be communicatively coupled to a control plane VCN1516(e.g. the control plane VCN1316ofFIG.13) via an LPG1510contained in the control plane VCN1516and to a data plane VCN1518(e.g. the data plane1318ofFIG.13) via an LPG1510contained in the data plane VCN1518. The control plane VCN1516and the data plane VCN1518can be contained in a service tenancy1519(e.g. the service tenancy1319ofFIG.13). The control plane VCN1516can include a control plane DMZ tier1520(e.g. the control plane DMZ tier1320ofFIG.13) that can include load balancer (LB) subnet(s)1522(e.g. LB subnet(s)1322ofFIG.13), a control plane app tier1524(e.g. the control plane app tier1324ofFIG.13) that can include app subnet(s)1526(e.g. similar to app subnet(s)1326ofFIG.13), a control plane data tier1528(e.g. the control plane data tier1328ofFIG.13) that can include DB subnet(s)1530. The LB subnet(s)1522contained in the control plane DMZ tier1520can be communicatively coupled to the app subnet(s)1526contained in the control plane app tier1524and to an Internet gateway1534(e.g. the Internet gateway1334ofFIG.13) that can be contained in the control plane VCN1516, and the app subnet(s)1526can be communicatively coupled to the DB subnet(s)1530contained in the control plane data tier1528and to a service gateway1536(e.g. the service gateway ofFIG.13) and a network address translation (NAT) gateway1538(e.g. the NAT gateway1338ofFIG.13). The control plane VCN1516can include the service gateway1536and the NAT gateway1538. The data plane VCN1518can include a data plane app tier1546(e.g. the data plane app tier1346ofFIG.13), a data plane DMZ tier1548(e.g. the data plane DMZ tier1348ofFIG.13), and a data plane data tier1550(e.g. the data plane data tier1350ofFIG.13). 
The data plane DMZ tier1548can include LB subnet(s)1522that can be communicatively coupled to trusted app subnet(s)1560and untrusted app subnet(s)1562of the data plane app tier1546and the Internet gateway1534contained in the data plane VCN1518. The trusted app subnet(s)1560can be communicatively coupled to the service gateway1536contained in the data plane VCN1518, the NAT gateway1538contained in the data plane VCN1518, and DB subnet(s)1530contained in the data plane data tier1550. The untrusted app subnet(s)1562can be communicatively coupled to the service gateway1536contained in the data plane VCN1518and DB subnet(s)1530contained in the data plane data tier1550. The data plane data tier1550can include DB subnet(s)1530that can be communicatively coupled to the service gateway1536contained in the data plane VCN1518. The untrusted app subnet(s)1562can include one or more primary VNICs1564(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs)1566(1)-(N). Each tenant VM1566(1)-(N) can be communicatively coupled to a respective app subnet1567(1)-(N) that can be contained in respective container egress VCNs1568(1)-(N) that can be contained in respective customer tenancies1570(1)-(N). Respective secondary VNICs1572(1)-(N) can facilitate communication between the untrusted app subnet(s)1562contained in the data plane VCN1518and the app subnet contained in the container egress VCNs1568(1)-(N). Each container egress VCN1568(1)-(N) can include a NAT gateway1538that can be communicatively coupled to public Internet1554(e.g. public Internet1354ofFIG.13). The Internet gateway1534contained in the control plane VCN1516and contained in the data plane VCN1518can be communicatively coupled to a metadata management service1552(e.g. the metadata management system1352ofFIG.13) that can be communicatively coupled to public Internet1554. Public Internet1554can be communicatively coupled to the NAT gateway1538contained in the control plane VCN1516and contained in the data plane VCN1518. The service gateway1536contained in the control plane VCN1516and contained in the data plane VCN1518can be communicatively coupled to cloud services1556. In some embodiments, the data plane VCN1518can be integrated with customer tenancies1570. This integration can be useful or desirable for customers of the IaaS provider in some cases, such as a case in which the customer desires support when executing code. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run code given to the IaaS provider by the customer. In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier1546. Code to run the function may be executed in the VMs1566(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN1518. Each VM1566(1)-(N) may be connected to one customer tenancy1570. Respective containers1571(1)-(N) contained in the VMs1566(1)-(N) may be configured to run the code. 
In this case, there can be a dual isolation (e.g., the containers1571(1)-(N) running code, where the containers1571(1)-(N) may be contained in at least the VM1566(1)-(N) that are contained in the untrusted app subnet(s)1562), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer. The containers1571(1)-(N) may be communicatively coupled to the customer tenancy1570and may be configured to transmit or receive data from the customer tenancy1570. The containers1571(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN1518. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers1571(1)-(N). In some embodiments, the trusted app subnet(s)1560may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted app subnet(s)1560may be communicatively coupled to the DB subnet(s)1530and be configured to execute CRUD operations in the DB subnet(s)1530. The untrusted app subnet(s)1562may be communicatively coupled to the DB subnet(s)1530, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s)1530. The containers1571(1)-(N) that can be contained in the VM1566(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s)1530. In other embodiments, the control plane VCN1516and the data plane VCN1518may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN1516and the data plane VCN1518. However, communication can occur indirectly through at least one method. An LPG1510may be established by the IaaS provider that can facilitate communication between the control plane VCN1516and the data plane VCN1518. In another example, the control plane VCN1516or the data plane VCN1518can make a call to cloud services1556via the service gateway1536. For example, a call to cloud services1556from the control plane VCN1516can include a request for a service that can communicate with the data plane VCN1518. FIG.16is a block diagram1600illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators1602(e.g. service operators1302ofFIG.13) can be communicatively coupled to a secure host tenancy1604(e.g. the secure host tenancy1304ofFIG.13) that can include a virtual cloud network (VCN)1606(e.g. the VCN1306ofFIG.13) and a secure host subnet1608(e.g. the secure host subnet1308ofFIG.13). The VCN1606can include an LPG1610(e.g. the LPG1310ofFIG.13) that can be communicatively coupled to an SSH VCN1612(e.g. the SSH VCN1312ofFIG.13) via an LPG1610contained in the SSH VCN1612. The SSH VCN1612can include an SSH subnet1614(e.g. the SSH subnet1314ofFIG.13), and the SSH VCN1612can be communicatively coupled to a control plane VCN1616(e.g. the control plane VCN1316ofFIG.13) via an LPG1610contained in the control plane VCN1616and to a data plane VCN1618(e.g. the data plane1318ofFIG.13) via an LPG1610contained in the data plane VCN1618. The control plane VCN1616and the data plane VCN1618can be contained in a service tenancy1619(e.g. the service tenancy1319ofFIG.13). The control plane VCN1616can include a control plane DMZ tier1620(e.g. the control plane DMZ tier1320ofFIG.13) that can include LB subnet(s)1622(e.g. LB subnet(s)1322ofFIG.13), a control plane app tier1624(e.g. 
the control plane app tier1324ofFIG.13) that can include app subnet(s)1626(e.g. app subnet(s)1326ofFIG.13), a control plane data tier1628(e.g. the control plane data tier1328ofFIG.13) that can include DB subnet(s)1630(e.g. DB subnet(s)1530ofFIG.15). The LB subnet(s)1622contained in the control plane DMZ tier1620can be communicatively coupled to the app subnet(s)1626contained in the control plane app tier1624and to an Internet gateway1634(e.g. the Internet gateway1334ofFIG.13) that can be contained in the control plane VCN1616, and the app subnet(s)1626can be communicatively coupled to the DB subnet(s)1630contained in the control plane data tier1628and to a service gateway1636(e.g. the service gateway ofFIG.13) and a network address translation (NAT) gateway1638(e.g. the NAT gateway1338ofFIG.13). The control plane VCN1616can include the service gateway1636and the NAT gateway1638. The data plane VCN1618can include a data plane app tier1646(e.g. the data plane app tier1346ofFIG.13), a data plane DMZ tier1648(e.g. the data plane DMZ tier1348ofFIG.13), and a data plane data tier1650(e.g. the data plane data tier1350ofFIG.13). The data plane DMZ tier1648can include LB subnet(s)1622that can be communicatively coupled to trusted app subnet(s)1660(e.g. trusted app subnet(s)1560ofFIG.15) and untrusted app subnet(s)1662(e.g. untrusted app subnet(s)1562ofFIG.15) of the data plane app tier1646and the Internet gateway1634contained in the data plane VCN1618. The trusted app subnet(s)1660can be communicatively coupled to the service gateway1636contained in the data plane VCN1618, the NAT gateway1638contained in the data plane VCN1618, and DB subnet(s)1630contained in the data plane data tier1650. The untrusted app subnet(s)1662can be communicatively coupled to the service gateway1636contained in the data plane VCN1618and DB subnet(s)1630contained in the data plane data tier1650. The data plane data tier1650can include DB subnet(s)1630that can be communicatively coupled to the service gateway1636contained in the data plane VCN1618. The untrusted app subnet(s)1662can include primary VNICs1664(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs)1666(1)-(N) residing within the untrusted app subnet(s)1662. Each tenant VM1666(1)-(N) can run code in a respective container1667(1)-(N), and be communicatively coupled to an app subnet1626that can be contained in a data plane app tier1646that can be contained in a container egress VCN1668. Respective secondary VNICs1672(1)-(N) can facilitate communication between the untrusted app subnet(s)1662contained in the data plane VCN1618and the app subnet contained in the container egress VCN1668. The container egress VCN1668can include a NAT gateway1638that can be communicatively coupled to public Internet1654(e.g. public Internet1354ofFIG.13). The Internet gateway1634contained in the control plane VCN1616and contained in the data plane VCN1618can be communicatively coupled to a metadata management service1652(e.g. the metadata management system1352ofFIG.13) that can be communicatively coupled to public Internet1654. Public Internet1654can be communicatively coupled to the NAT gateway1638contained in the control plane VCN1616and contained in the data plane VCN1618. The service gateway1636contained in the control plane VCN1616and contained in the data plane VCN1618can be communicatively coupled to cloud services1656. 
In some examples, the pattern illustrated by the architecture of block diagram1600ofFIG.16may be considered an exception to the pattern illustrated by the architecture of block diagram1500ofFIG.15and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The respective containers1667(1)-(N) that are contained in the VMs1666(1)-(N) for each customer can be accessed in real-time by the customer. The containers1667(1)-(N) may be configured to make calls to respective secondary VNICs1672(1)-(N) contained in app subnet(s)1626of the data plane app tier1646that can be contained in the container egress VCN1668. The secondary VNICs1672(1)-(N) can transmit the calls to the NAT gateway1638that may transmit the calls to public Internet1654. In this example, the containers1667(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN1616and can be isolated from other entities contained in the data plane VCN1618. The containers1667(1)-(N) may also be isolated from resources from other customers. In other examples, the customer can use the containers1667(1)-(N) to call cloud services1656. In this example, the customer may run code in the containers1667(1)-(N) that requests a service from cloud services1656. The containers1667(1)-(N) can transmit this request to the secondary VNICs1672(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet1654. Public Internet1654can transmit the request to LB subnet(s)1622contained in the control plane VCN1616via the Internet gateway1634. In response to determining the request is valid, the LB subnet(s) can transmit the request to app subnet(s)1626that can transmit the request to cloud services1656via the service gateway1636. It should be appreciated that IaaS architectures1300,1400,1500,1600depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components. In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee. FIG.17illustrates an example computer system1700, in which various embodiments may be implemented. The system1700may be used to implement any of the computer systems described above. As shown in the figure, computer system1700includes a processing unit1704that communicates with a number of peripheral subsystems via a bus subsystem1702. These peripheral subsystems may include a processing acceleration unit1706, an I/O subsystem1708, a storage subsystem1718and a communications subsystem1724. Storage subsystem1718includes tangible computer-readable storage media1722and a system memory1710. Bus subsystem1702provides a mechanism for letting the various components and subsystems of computer system1700communicate with each other as intended. 
Although bus subsystem1702is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem1702may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard. Processing unit1704, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system1700. One or more processors may be included in processing unit1704. These processors may include single core or multicore processors. In certain embodiments, processing unit1704may be implemented as one or more independent processing units1732and/or1734with single or multicore processors included in each processing unit. In other embodiments, processing unit1704may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip. In various embodiments, processing unit1704can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s)1704and/or in storage subsystem1718. Through suitable programming, processor(s)1704can provide various functionalities described above. Computer system1700may additionally include a processing acceleration unit1706, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like. I/O subsystem1708may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands. 
User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like. User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system1700to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems. Computer system1700may comprise a storage subsystem1718that comprises software elements, shown as being currently located within a system memory1710. System memory1710may store program instructions that are loadable and executable on processing unit1704, as well as data generated during the execution of these programs. Depending on the configuration and type of computer system1700, system memory1710may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated and executed by processing unit1704. In some implementations, system memory1710may include multiple different types of memory, such as static random access memory (SRAM) or dynamic random access memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system1700, such as during start-up, may typically be stored in the ROM. By way of example, and not limitation, system memory1710also illustrates application programs1712, which may include client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), etc., program data1714, and an operating system1716. By way of example, operating system1716may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® 10 OS, and Palm® OS operating systems. 
Storage subsystem1718may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem1718. These software modules or instructions may be executed by processing unit1704. Storage subsystem1718may also provide a repository for storing data used in accordance with the present disclosure. Storage subsystem1718may also include a computer-readable storage media reader1720that can further be connected to computer-readable storage media1722. Together and, optionally, in combination with system memory1710, computer-readable storage media1722may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. Computer-readable storage media1722containing code, or portions of code, can also include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media. This can also include nontangible computer-readable media, such as data signals, data transmissions, or any other medium which can be used to transmit the desired information and which can be accessed by computing system1700. By way of example, computer-readable storage media1722may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media1722may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media1722may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system1700. Communications subsystem1724provides an interface to other computer systems and networks. Communications subsystem1724serves as an interface for receiving data from and transmitting data to other systems from computer system1700. 
For example, communications subsystem1724may enable computer system1700to connect to one or more devices via the Internet. In some embodiments, communications subsystem1724can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communications subsystem1724can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface. In some embodiments, communications subsystem1724may also receive input communication in the form of structured and/or unstructured data feeds1726, event streams1728, event updates1730, and the like on behalf of one or more users who may use computer system1700. By way of example, communications subsystem1724may be configured to receive data feeds1726in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources. Additionally, communications subsystem1724may also be configured to receive data in the form of continuous data streams, which may include event streams1728of real-time events and/or event updates1730, that may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g. network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. Communications subsystem1724may also be configured to output the structured and/or unstructured data feeds1726, event streams1728, event updates1730, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system1700. Computer system1700can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system. Due to the ever-changing nature of computers and networks, the description of computer system1700depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure. 
Embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly. Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or modules are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. 
No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
143,436
11861374
DETAILED DESCRIPTION A computing system includes a host device and a root of trust (RoT) device for performing batch encryption and decryption operations facilitated by a direct memory access (DMA) engine. The host device generates a command table for batch processing of a set of address tables that each describe a set of data blocks of a file to be encrypted or decrypted. The DMA engine facilitates a DMA transfer of the command table from the host memory to an RoT memory of the RoT device. The RoT device performs batch processing of the address tables referenced in the command table by copying the set of address tables to a DMA memory of the DMA engine. To process each address table, the DMA engine copies the set of data blocks from the host memory to the RoT memory, a cryptography engine encrypts or decrypts the data blocks, and the DMA engine copies the transformed data blocks back to the host memory. The DMA engine may further copy the address tables including authentication tags back to the host memory after encryption operations. FIG.1illustrates an example embodiment of a computing system100comprising a host device120and a root of trust (RoT) device110coupled by an external bus130. The external bus130may comprise, for example, a peripheral component interconnect express (PCIe) bus or other interconnect bus for transferring data and commands between the host device120and the RoT device110as described in further detail below. The host device120may comprise, for example, a workstation, a server, a single-board computer, or other computing device. The host device120comprises at least a hard disk drive (HDD)124, a host processor (CPUH)122, and a host memory (MEMH)126coupled by a host bus128. The host memory126comprises one or more dynamic random-access memory (DRAM) devices or another type of memory. The host processor122may comprise a general-purpose processor or a special-purpose processor (e.g., a graphics processor) for performing operations associated with data stored to the hard disk drive124and/or the host memory126. Data stored by the hard disk drive124and the host memory126may be in either encrypted form (ciphertext) or decrypted form (plaintext). The host bus128comprises a communications pathway between the hard disk drive124, the host processor122, the host memory126and the external bus130. The RoT device110performs encryption or decryption functions associated with data stored and processed by the host device120. For example, the RoT device110may receive plaintext data from the host device120(via the external bus130), encrypt the plaintext data to generate ciphertext data, and provide the ciphertext data to the host device120via the external bus130. Furthermore, the RoT device110may receive ciphertext data from the host device120(via the external bus130), decrypt the ciphertext data to generate plaintext data, and provide the plaintext data back to the host device120via the external bus130. In other embodiments, the RoT device110may perform other transformations on data from the host device120that are not necessarily encryption or decryption of the data. Furthermore, in some embodiments, the RoT device110may facilitate unidirectional transfers from the host device120to the RoT device110or vice versa without necessarily performing transformations of the data. The RoT device110comprises an RoT memory (MEMR)116and an RoT system-on-chip (SoC)150. The RoT memory116may comprise one or more DRAM devices or other types of memory. 
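Before turning to the internals of the RoT SoC150, the overall division of labor summarized above can be illustrated with a short host-side sketch. Every name in the sketch (build_command_table, rot_doorbell_and_wait, the cmd_table_t type) is a hypothetical stand-in introduced for illustration only; the embodiments define the behavior, not this API. The example sizes anticipate the sizing example discussed below with respect to FIG.3.

```c
/* Minimal, hypothetical host-side sketch of the batch flow: describe the
 * file's blocks with address tables, group them into a command table, signal
 * the RoT device, and wait for completion. All names are illustrative. */
#include <stddef.h>
#include <stdio.h>

typedef struct { size_t num_address_tables; } cmd_table_t;

/* Stub: a real driver would walk the file's blocks in host memory and emit
 * one or more address tables plus a command table referencing them. */
static cmd_table_t build_command_table(size_t num_blocks, size_t rows_per_table)
{
    cmd_table_t ct = { (num_blocks + rows_per_table - 1) / rows_per_table };
    return ct;
}

/* Stub: a real driver would assert an interrupt on the external bus and later
 * block on the RoT device's completion interrupt. */
static void rot_doorbell_and_wait(const cmd_table_t *ct)
{
    printf("signaled RoT device: %zu address table command(s)\n",
           ct->num_address_tables);
}

int main(void)
{
    /* e.g., a 1 GB file of 4 KB blocks, 78 rows per address table. */
    cmd_table_t ct = build_command_table(262144, 78);
    rot_doorbell_and_wait(&ct);
    return 0;
}
```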
The RoT SoC150performs encryption and decryption functions on data in the RoT memory116. The RoT SoC150comprises a direct memory access (DMA) engine114including a DMA memory (MEMD)118, a cryptographic engine112, and an RoT core140that includes an RoT processor (CPUR)102, a key module104, an aperture106, and a communications module108. The DMA engine114manages DMA operations of the RoT device110based on commands received from the RoT core140via an RoT system bus142. The DMA engine114includes special-purpose logic for performing memory operations including direct data transfers between the host memory126and the DMA memory118or the RoT memory116. Once initiated (e.g., via a command from the RoT processor102), the operations of the DMA engine114can occur substantially independently from operations of the RoT processor102. Thus, the RoT processor102can perform other processing operations in parallel with an ongoing DMA data transfer managed by the DMA engine114. For example, to transfer data from the host memory126to the RoT memory116, the DMA engine114reads from host memory126via the RoT interface bus144(coupled to the external bus130) and writes to the RoT memory116via the RoT system bus142. To transfer data from the RoT memory116to the host memory126, the DMA engine114reads from the RoT memory116via the RoT system bus142and writes to the host memory126via the RoT interface bus144(coupled to the external bus130). The DMA engine114may furthermore operate to transfer commands from the host device120to the RoT memory116that can be executed by the RoT processor102or to transfer command status information from the RoT memory116to the host memory126. Upon completing a set of memory operations, the DMA engine114may send a signal (e.g., an interrupt) to the RoT processor102to indicate completion of the operations. The cryptographic engine112performs encryption and decryption of data in the RoT memory116based on one or more cryptographic keys obtained from the key module104via the key bus146. For example, to perform encryption, the cryptographic engine112obtains plaintext data from the RoT memory116, encrypts the plaintext data to generate ciphertext data based on the one or more cryptographic keys, and writes the ciphertext back to the RoT memory116. To perform decryption, the cryptographic engine112obtains ciphertext data from the RoT memory116, decrypts the ciphertext data to generate plaintext data based on one or more cryptographic keys, and writes the plaintext data back to the RoT memory116. The cryptographic engine112may operate based on instructions received from the RoT core140via the RoT system bus142. The communications module108facilitates communication of commands between the host device120(via the RoT interface bus144and the external bus130) and the RoT processor102via the RoT internal bus148. The interface bus144may include one or more interrupt lines that can be asserted from the host device120to cause the RoT device110to perform a specified action. Similarly, there may be one or more interrupt lines that can be asserted by the RoT device110to cause the host device120to perform a specified action. The RoT processor102comprises a general-purpose processor or a special-purpose processor for controlling the cryptographic engine112and the DMA engine114. The RoT processor102may furthermore interface with the key module104(via the RoT key bus152) to control generation of the one or more cryptographic keys for delivery to the cryptographic engine112. 
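The embodiments do not mandate a particular cipher for the cryptographic engine112. Purely as one plausible software analogue of the per-block encrypt step (key and initialization vector in, plaintext in, ciphertext and per-block authentication tag out, of the kind carried in the block tag fields discussed below with respect to FIG.2and FIG.3), the following self-contained sketch uses AES-256-GCM via the OpenSSL EVP API. The cipher choice and use of OpenSSL are assumptions for illustration; the sketch builds with -lcrypto.

```c
/* One plausible software analogue of the per-block encrypt step: AES-256-GCM
 * via OpenSSL. The RoT cryptographic engine performs the equivalent in
 * hardware; this cipher choice and API are assumptions, not the embodiments. */
#include <openssl/evp.h>
#include <stdio.h>

static int encrypt_block(const unsigned char key[32],
                         const unsigned char iv[12],
                         const unsigned char *pt, int pt_len,
                         unsigned char *ct, unsigned char tag[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, ct_len = 0, ok = 0;

    if (!ctx) return -1;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv) == 1 &&
        EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len) == 1) {
        ct_len = len;
        if (EVP_EncryptFinal_ex(ctx, ct + ct_len, &len) == 1 &&
            EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1) {
            ct_len += len;
            ok = 1;   /* ciphertext written, authentication tag produced */
        }
    }
    EVP_CIPHER_CTX_free(ctx);
    return ok ? ct_len : -1;
}

int main(void)
{
    unsigned char key[32] = {0}, iv[12] = {0}, tag[16];
    unsigned char pt[16] = "plaintext block", ct[16];

    int n = encrypt_block(key, iv, pt, sizeof(pt), ct, tag);
    printf("ciphertext bytes: %d, tag[0]=%02x\n", n, tag[0]);
    return 0;
}
```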
Commands may be transferred from the host device120to the RoT processor102via the communications module108or commands may be read by the RoT processor102from the RoT memory116via the aperture106. The aperture106comprises an isolated control plane between the RoT system bus142and the RoT internal bus148of the RoT core140. Control commands and data communicated between the cryptographic engine112and the DMA engine114pass through the aperture106. In an embodiment, the aperture106provides an interface between separate address spaces of the RoT core140and external components such as the cryptography engine112and the DMA engine114. In alternative embodiments, the cryptography engine112and the DMA engine114may be in the same address space as the RoT core140and the aperture106may be omitted. The key module104generates and delivers cryptographic keys applied by the cryptographic engine112in encryption and decryption operations. For example, in one implementation, a cryptographic key may be generated by the key module104, sent to the RoT processor102for further processing, and then delivered to the cryptographic engine112via the key module104. Alternatively, the key module104may operate only to deliver keys without necessarily generating them. In an example embodiment, the RoT device110may comprise a printed circuit board that supports the RoT memory116and the RoT SoC150. The RoT SoC150may be implemented using a field programmable gate array (FPGA) or may comprise an application-specific integrated circuit (ASIC) device. In another embodiment, the DMA engine114may be implemented as a standalone integrated circuit separate from the RoT core140and the cryptography engine112. In other embodiments, one or more components of the RoT SoC150may be implemented in software or firmware. For example, functions of the RoT SoC150described herein may be implemented based on the RoT processor102executing instructions stored to a non-transitory computer-readable storage medium. FIG.2illustrates an example embodiment of a structure for an encrypted file210. The encrypted file210includes encrypted file contents212comprising a set of ciphertext blocks216(e.g., N blocks) and a plaintext file footer214. The ciphertext blocks216may be of varying size and different encrypted files210may have different numbers N of ciphertext blocks216. The plaintext file footer214may include various footer data that provides information relevant to encryption and decryption operations such as, for example, a metadata field218, block size fields220, block tag fields222, a total size field224, and a footer version field226. The block size fields220specify block sizes for the respective ciphertext blocks216. In an embodiment, each ciphertext block216has a size that is a multiple of a typical page size (e.g., 4 KB). The block tag fields222comprise authenticated encryption tags for each respective ciphertext block216that may be used to verify the integrity of the corresponding ciphertext block216. The metadata field218may include various information utilized during decryption of the ciphertext blocks216to derive the cryptographic key. The footer version field226provides a file format version associated with the plaintext file footer214.FIG.3illustrates an example of an address table330associated with the encrypted file210or a portion thereof. The address table330comprises a set of pointers to logical blocks and metadata associated with each block. 
The metadata can apply to one or more entries in the address table330or each entry could have different metadata, thus allowing the address table to support multiple contexts for the blocks associated with the address table330. The address table330enables the DMA engine114to perform scatter-gather functions associated with data blocks transferred into and out of the host memory126. For example, the ciphertext blocks216of an encrypted file210may be stored in the host memory126in a scattered manner such that the physical addresses of each ciphertext block216are non-contiguous. Similarly, upon decrypting the encrypted file210, the resulting plaintext blocks may be scattered in the host memory126across non-contiguous addresses. In the illustrated example, the address table330provides information used during decryption to locate the ciphertext blocks216in the host memory126, derive the relevant cryptographic information for performing decryption, and control where the resulting plaintext blocks are written to in the host memory126. In an embodiment, the address table330comprises a set of rows that each correspond to a different ciphertext block216of the encrypted file210. Each row specifies at least a source address332for the corresponding ciphertext block216that indicates where the ciphertext block216is stored in the host memory126, a destination address334for a corresponding plaintext block that indicates where to store the plaintext block in the host memory126after decryption, and a size field336indicating a size of the ciphertext block216(which may be copied directly from the block size fields220of the plaintext file footer214). Optionally, each row of the address table330may further include additional fields that may be utilized during decryption such as, for example, an initialization vector340, block tag342(which may be copied directly from the block tag field222of the plaintext file footer214), and additional authentication data344. The AD field338comprises a flag indicating whether or not the optional fields340,342,344are valid. For example, the AD field338is set to valid when the address table330is used for encryption and decryption operations and is set to invalid when used for unidirectional transfer operations where these additional fields340,342,344are not utilized. An address table330does not necessarily describe an entire encrypted file210and may instead describe only a subset of the ciphertext blocks216of an encrypted file210. Thus, multiple address tables330may be employed to collectively describe a single encrypted file210. For example, in the illustrated embodiment, the address table330comprises j rows beginning with the ith data block of the file. The size of each address table330employed to describe an encrypted file210may be limited by the size of the DMA memory118of the DMA engine114. For example, if the DMA memory is limited to 4096 bytes (4 KB) and each row of the address table330comprises 52 bytes, then each address table330can have up to 78 rows. An encrypted file210that is 1 GB in size and has 4 KB ciphertext blocks would therefore take up 262,144 address table rows, which may be split across 3,361 different address tables330. While the above description describes the address table330ofFIG.3as corresponding to an encrypted file210or a portion thereof, a similar address table330may be constructed for a plaintext file or a portion thereof. 
In this case, each row of the address table330corresponds to a plaintext block, the source address332indicates a location of the corresponding plaintext block in the host memory126, the destination address334indicates where to store the ciphertext block in the host memory126after encryption, and the size field336indicates a size of the plaintext block. The additional optional fields of the address table330for a plaintext block (e.g., fields338,340,342,344) may provide various information associated with the block pointed to by the source address332. For example, the AD flag338operates as described above to indicate whether remaining fields are valid, the tag field342is an authentication tag generated during encryption, and the initialization vector field340and additional authentication data field344are utilized during encryption. While the address table330provides one example format, other formats may be utilized in different embodiments that may include different, additional, or fewer fields. FIG.4illustrates an example embodiment of a process performed by the host device120in association with initiating a transformation (e.g., encryption or decryption) of a file or portion thereof. The host processor122copies402file data for a file (e.g., a set of data blocks which may be ciphertext blocks or plaintext blocks) from the hard disk drive124to the host memory126. The host processor122generates404a set of one or more address tables associated with the file. As described above, each of the address tables includes a set of rows corresponding to different blocks of the file. Each row includes a source address referencing an address of the block in the host memory126, a destination address referencing an address in the host memory126for receiving the corresponding transformed block after encryption or decryption, and various data fields for facilitating the transformation of the block. The host processor122then generates406a command table in the host memory126that comprises a set of commands for performing a batch processing of the address tables. The host processor122sends408a command table transfer signal to the RoT device110to initiate transfer of the command table in the host memory126to the RoT device110. In an embodiment, the command table transfer signal comprises an interrupt signal asserted on the external bus130that is detectable by the communications module108of the RoT device110. Following the encryption or decryption operations performed by the RoT device110, the host device120receives410the transformed blocks from the RoT device110into the host memory126via a DMA transfer. The transformed blocks are stored to the respective destination addresses specified in the address tables. After completion of the command table, the host device120may also receive412command status information for each of the commands in the command table that indicates whether each command succeeded or failed. In an embodiment, the host device120stores each address table330in the host memory126starting at a memory page boundary such that some number of the least-significant bits of the address of the address table330are zero. In this case, one or more of the least-significant bits of the address of the address table may instead be used to encode the number of rows in the address table. 
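The row layout and the page-boundary encoding described above can be sketched concretely. The embodiments name the row fields but not their widths, so the packing below is an assumption chosen so that a row totals 52 bytes, matching the sizing example given with respect to FIG.3(a 4 KB DMA memory then holds up to 78 rows); the additional authentication data field344is omitted from this illustrative packing to keep that total. The PAGE_MASK constant and the encode_table_ref helper are likewise hypothetical.

```c
/* Hypothetical packing of one address-table row (FIG. 3) and the
 * page-boundary trick of carrying the row count in the table address's
 * low bits. Field widths and helper names are assumptions. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
typedef struct {
    uint64_t src_addr;   /* 332: block address in host memory           */
    uint64_t dst_addr;   /* 334: where the transformed block is written */
    uint32_t size;       /* 336: block size in bytes                    */
    uint32_t ad_valid;   /* 338: flag - are the fields below valid?     */
    uint8_t  iv[12];     /* 340: initialization vector                  */
    uint8_t  tag[16];    /* 342: authentication tag                     */
} at_row_t;              /* 52 bytes total under this assumed packing   */
#pragma pack(pop)

/* The host stores each address table at a page boundary, so the low bits of
 * its address are zero and can instead carry the number of rows. */
#define PAGE_MASK 0xFFFull   /* 4 KB pages assumed */

static uint64_t encode_table_ref(uint64_t table_addr, uint64_t num_rows)
{
    assert((table_addr & PAGE_MASK) == 0 && num_rows <= PAGE_MASK);
    return table_addr | num_rows;
}

int main(void)
{
    printf("row size: %zu bytes, rows per 4 KB table: %zu\n",
           sizeof(at_row_t), (size_t)4096 / sizeof(at_row_t));

    uint64_t ref = encode_table_ref(0x80000000ull, 78);
    printf("table at 0x%llx with %llu rows\n",
           (unsigned long long)(ref & ~PAGE_MASK),
           (unsigned long long)(ref & PAGE_MASK));
    return 0;
}
```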
FIG.5illustrates an example embodiment of a process performed by the RoT device110in association with a transformation (e.g., encryption or decryption) of a file or portion thereof. The RoT device110receives502the command table transfer signal from the host device120(e.g., via an interrupt signal). In response to the command table transfer signal, the RoT processor102causes the DMA engine114to copy504the command table from the host memory126to the RoT memory116via a DMA transfer. The RoT processor102then executes506the commands in the command table. When executing the commands, the DMA engine114obtains the data blocks from the host memory126, the cryptography engine112transforms the data blocks (e.g., performing encryption or decryption), and the DMA engine114writes the resulting transformed data blocks back to the host memory126, as described in further detail below with respect toFIG.6. The RoT processor102may furthermore write a status result indicating a success or failure of each command to respective status fields for the commands in the command table. Upon reaching the end of the command table, the RoT processor102causes the DMA engine114to copy508the status information associated with the command table to the host device120via a DMA transfer. The RoT processor102may then generate a completion signal as an interrupt signal on the external bus130detectable by the host device120to indicate completion of the operation. FIG.6illustrates an example embodiment of a process performed by the RoT device110for executing the command table to facilitate encryption or decryption operations. In this embodiment, the sequence of commands in the command table includes an “open channel” command, a set of batch transformation processing commands, and a “close channel” command. The RoT processor102executes the open channel command to open602a channel supported by the cryptography engine112. This step may include generating a cryptographic key and delivering the cryptographic key to the cryptographic engine112. The RoT processor102causes the DMA engine114to copy604one or more address tables referenced in the command table from the host memory126to the DMA memory118. The RoT processor102then causes the DMA engine114to execute a sequence of batch address table commands that each reference the starting address of the address table and the number of rows in the address table. Here, when executing an address table command, the DMA engine114copies606the data blocks referenced in the source address fields of the address table from the host memory126to the RoT memory116. The cryptographic engine112transforms608(e.g., encrypts or decrypts) the data blocks in the RoT memory116. For example, in a decryption operation, the cryptographic engine112obtains a ciphertext block from the RoT memory116, decrypts the block, and writes corresponding plaintext back to the RoT memory116. In an encryption operation, the cryptographic engine112obtains a plaintext block from the RoT memory116, encrypts the block, and writes corresponding ciphertext back to the RoT memory116. The encryption or decryption operations may be based in part on the cryptographic information stored to the address table. The DMA engine114copies610the transformed blocks to the corresponding destination addresses referenced in the address table. The RoT processor102may furthermore update status information associated with each command in the command table as it is processed to indicate success or failure of the command. 
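The control flow of FIG.6can be outlined in a short sketch. Only the sequencing (open a channel602, copy each address table604, copy blocks in606, transform608, copy blocks out610, record per-command status, close the channel612) comes from the embodiments; the helper functions and the at_cmd_t type are hypothetical stand-ins for the DMA engine114and cryptographic engine112.

```c
/* Hypothetical outline of the RoT-side batch execution of FIG. 6. The
 * helpers are stand-ins for the DMA and cryptographic engines; only the
 * control flow and step numbering come from the embodiments. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t table_addr; uint32_t num_rows; uint32_t status; } at_cmd_t;

static void open_channel(void)               { puts("open channel: key delivered"); }
static void close_channel(void)              { puts("close channel"); }
static void dma_copy_table_to_dma_mem(uint64_t a, uint32_t n)
{ printf("copy table 0x%llx (%u rows) to DMA memory\n", (unsigned long long)a, n); }
static void dma_copy_block_in(uint32_t row)  { printf("  row %u: host -> RoT memory\n", row); }
static void crypto_transform(uint32_t row)   { printf("  row %u: encrypt/decrypt\n", row); }
static void dma_copy_block_out(uint32_t row) { printf("  row %u: RoT -> host memory\n", row); }

static void execute_command_table(at_cmd_t *cmds, int ncmds)
{
    open_channel();                                    /* 602 */
    for (int c = 0; c < ncmds; c++) {
        dma_copy_table_to_dma_mem(cmds[c].table_addr,  /* 604 */
                                  cmds[c].num_rows);
        for (uint32_t r = 0; r < cmds[c].num_rows; r++) {
            dma_copy_block_in(r);                      /* 606 */
            crypto_transform(r);                       /* 608 */
            dma_copy_block_out(r);                     /* 610 */
        }
        cmds[c].status = 1;   /* success/failure recorded per command */
    }
    close_channel();                                   /* 612 */
}

int main(void)
{
    at_cmd_t cmds[2] = { { 0x80000000ull, 2, 0 }, { 0x80001000ull, 1, 0 } };
    execute_command_table(cmds, 2);
    return 0;
}
```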
Furthermore, once processing of an address table is initiated, the DMA engine114and cryptography engine112may operate substantially independently of the RoT processor102such that the RoT processor102may concurrently perform other operations. After processing an individual address table, the DMA engine114may signal to the RoT processor102that it has completed processing of the address table, after which steps604-610may repeat for the next address table. For example, the RoT processor102may provide information about the next address table to the DMA engine114or indicate that all address tables have been processed. Following processing of all address tables, the RoT processor102executes a "close channel" command to close612the channel. The RoT processor102also copies the status information associated with the command table back to the host memory126via a DMA transfer and may then assert the completion signal as an interrupt to the host device120via the communication module108. In an embodiment, the RoT processor102may perform authentication of the data blocks referenced in the address tables prior to processing them. For example, the RoT processor102may verify the integrity of the data blocks based on the block tags342and/or additional authentication data344. In an embodiment, transferring of the command table from the host memory126to the RoT memory116may be implemented using an address table that references the command table. Here, the host device120generates the address table in the host memory126and sends the address of the address table and its size to the RoT device110via the communication module108during an initialization process. Then, when the host device120sends the command table transfer signal as an interrupt, the DMA engine114is configured to transfer the address table to the DMA memory118, and process the address table to copy the command table to the RoT memory116. FIG.7illustrates an example of a set of data structures for a command transfer signal710, a command table720, and a set of address tables730. The command transfer signal710may comprise a single command that references the location of an address table that references the command table720in the host memory126. As described above, the command table720begins with a command for opening a channel, includes a set of commands for batch processing of respective address tables, and ends with a command for closing the channel. The address table commands each reference the location of the corresponding address table730in the host memory126as described above. In this example, the address tables730relate to an encrypted file for decryption by the RoT device110. As described above, a similar structure may be used to facilitate encryption except that the source address fields in the address tables730reference plaintext blocks and the destination address fields in the address tables730specify where the ciphertext blocks will be written. Furthermore, similar data structures may be used for implementing a unidirectional data transfer (without necessarily performing encryption or decryption). In this case, the source addresses in the address tables730may reference an address in the host memory126and the destination addresses may reference an address in the RoT memory116or vice versa. FIG.8illustrates another example of a command table820. In this embodiment, the command table820facilitates transfers of two different DMA streams concurrently (e.g., corresponding to different files or different portions of a file).
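Before turning to the details of the two-channel command table820, the following is a minimal sketch of how the host side might assemble theFIG.7structures: a command table that opens a channel, batch-processes each address table, and closes the channel, together with a single-entry bootstrap address table that lets the RoT device110fetch the command table by DMA. The command names and the Python representation are illustrative assumptions only.

def build_command_table(address_table_refs):
    # address_table_refs: list of (table_address, num_rows) pairs in host memory.
    commands = [{"kind": "open_channel"}]
    for table_address, num_rows in address_table_refs:
        commands.append({
            "kind": "process_address_table",
            "table_address": table_address,
            "num_rows": num_rows,
            "status": None,  # filled in by the RoT device after execution
        })
    commands.append({"kind": "close_channel"})
    return commands

def build_command_transfer_signal(command_table_address, command_table_size):
    # The transfer signal references an address table that, in turn, references the
    # command table itself, so the command table can be fetched via DMA.
    bootstrap_table = [{"source_address": command_table_address, "size": command_table_size}]
    return {"kind": "transfer_command_table", "address_table": bootstrap_table}

A two-channel table in the spirit ofFIG.8could be built the same way by emitting two open-channel commands, interleaving the per-channel address table commands, and closing both channels at the end.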
Here, the command table820includes commands822for opening both a first and second channel, commands824for performing batch processing of the address tables associated with the respective channels, and commands826for closing the first and second channels. In other embodiments, the command table820may facilitate concurrent transfer of data streams over more than two channels. FIG.9illustrates examples of various data structures associated with the above-described operations. Here, the logical view910represents the logical organization of a file as a collection of data blocks and associated metadata. The logical view910may be consistent with the format of the file at rest in long-term storage (e.g., in the HDD124). The physical view920indicates an example structure for how the file may be organized when it is loaded into the host memory126. Here, the data blocks are assigned to memory locations that are not necessarily contiguous or consecutive. The address tables930comprise a set of pointers to the data blocks in the memory126. In the illustrated example, the file is described by two address tables that each reference a subset of the data blocks. Here, the address tables and their respective sets of pointers are also not necessarily stored in the same order as the data blocks that they reference but may be scattered in memory. The command table940groups the set of address tables into a single structure comprising pointers to the respective address tables. FIG.10illustrates an example embodiment of a process for facilitating a unidirectional transfer of data (e.g., from the host memory126to the RoT memory116or vice versa) without necessarily performing any encryption, decryption, or other transformation. In this process, the host processor122generates1002a set of address tables in the host memory126identifying the source or destination addresses for the data blocks being transferred depending on the direction of transfer. Furthermore, in this embodiment, the address tables may lack the metadata fields providing encryption/decryption parameters. The host processor122generates1004a command table that references the address tables and sends1006a command table transfer signal referencing the command table to the RoT device110as described above. The RoT device110receives1008the command table transfer signal and copies1010the command table from the host memory126to the RoT memory116via a DMA transfer as described above. The RoT device110then executes1012the commands to cause the DMA engine114to perform the data transfers specified in the address tables. The RoT device110may furthermore update status information of the commands in the command table and cause the DMA engine114to copy1014the command table including the status information associated with the commands from the RoT memory116to the host memory126as described above. FIG.11illustrates another example embodiment of a computing device1100that incorporates a DMA engine1114. The computing device1100comprises the DMA engine1114, a memory1116, and a processor1102, all coupled by a bus1142. The processor1102may comprise a general-purpose processor or a special-purpose processor specifically configured for graphics processing, security function processing, cryptographic processing, or other special-purpose computer functions. The memory1116may comprise one or more DRAM devices or other types of general or special-purpose memory. 
The DMA engine1114manages DMA operations of the computing device1100based on commands received from the processor1102to transfer data to the memory1116(or an internal memory of the DMA engine1114) from an external system1120and to transfer data from the memory1116(or an internal memory of the DMA engine1114) to the external system1120. The processor1102and DMA engine1114may operate to facilitate DMA transfers according to any of the embodiments described above. For example, the external system1120may operate according to the flowchart ofFIG.4to generate address tables associated with data for transferring, generate a command table referencing the address tables, and send the command table to the computing device1100. The processor1102and DMA engine1114of the computing system1100then operate according to the embodiments ofFIG.5(and optionallyFIG.6) to process the command table. For example, the processor1102executes a sequence of commands in the command table, where each command references an address table and causes the DMA engine1114to transfer the data referenced in the address table. Alternatively, the processor1102and DMA engine1114may operate in a similar manner to transfer data from the memory1116to the external system1120or between different memory locations within the memory1116. The external system1120and computing device1100may also operate according to the process ofFIG.10to perform unidirectional transfers without necessarily performing any transformation of the data. In the example computing device1100, the DMA engine1114includes logic performing memory operations independently of the processor1102once initiated. For example, the processor1102may send a command to the DMA engine1114to initiate a DMA transfer associated with an address table, after which the DMA engine1114independently executes the transfer while the processor1102may perform other operations in parallel. Upon completing the transfer, the DMA engine1114may assert a signal to indicate to the processor1102that the operations are completed, which causes the processor1102to proceed to the next command in the command table. The DMA engine1114may be embodied in one or more standalone integrated circuits or chips such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA). Furthermore, the DMA engine1114may be incorporated into one or more integrated circuits or chips that include other components (such as those illustrated inFIG.11) for performing general purpose computing functions, or special purpose functions such as graphics processing, security (e.g., encryption, decryption, or other cryptographic functions), or other specialized computing functions. Upon reading this disclosure, those of ordinary skill in the art will appreciate still alternative structural and functional designs and processes for the described embodiments, through the disclosed principles of the present disclosure. Thus, while embodiments and applications of the present disclosure have been illustrated and described, it is to be understood that the disclosure is not limited to the precise construction and components disclosed herein. Various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus of the present disclosure herein without departing from the scope of the disclosure as defined in the appended claims.
31,372
11861375
DETAILED DESCRIPTION The description that follows includes illustrative systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques have not been shown in detail. Various examples described herein are directed to systems and methods for configuring applications in a microservice architecture including at least one microservice-built application. A microservice is a self-contained unit of software that persists working data at an independent storage location specific to the microservice. For example, a microservice persists its working data at a storage location that may not be accessible to other microservices. This can be accomplished in different ways. In some examples, a microservice is executed by a dedicated virtual machine executing at a server or other computing device. Working data for the microservice may be stored at a virtual disk of the virtual machine. In some examples, a microservice executes at a server or other computing device. The microservice may be assigned a dedicated drive, partition, or other suitable storage area at the computing device. A microservice-built application includes a set of two or more microservices. Different functional units of the application are implemented with different microservices. This arrangement provides certain advantages, such as simpler upgrades and increased robustness. For example, an application built with microservices may be updated on a microservice-by-microservice basis. Changes are made to one or more targeted microservices, often without the need to modify other microservices. Also, because microservices are self-contained, failure of one or even a set of microservices may not cause the application to crash. Instead, failed microservices are sometimes restarted while the application executes. Although microservice-based applications provide advantages, they also present challenges. For example, configuring a microservice-based application includes configuring multiple, independent microservices. Often, there are dependencies between the configurations of the different services. For example, setting or changing a configuration parameter for one microservice may require setting or changing one or more other configuration parameter values in one or more other microservices. Also, for example, selecting a particular configuration parameter for one microservice may limit the range of acceptable configuration parameter values for one or more other microservices. Additional challenges relate to the configuration process. For example, if one configuration parameter change is made to a first microservice, but a corresponding configuration parameter change to a second microservice fails, the application may become inconsistent. These and other issues may be addressed with a configuration system, as described herein. The configuration system includes a distributed configuration deployment service (DCDS) and one or more configuration application program interfaces. The DCDS accesses one or more service command models.
A service command model describes a microservice, including one or more configuration parameter values for the microservice. The DCDS synthesizes the service command models to generate application configuration parameter values and/or application configuration constraints. The application configuration constraints reflect the allowable configuration parameter values for the various microservices in view of the constraints of other microservices. The DCDS receives configuration parameter values for the application. Using the application configuration constraints, the DCDS derives sets of microservice parameters for the different microservices. For example, a first set of microservice parameters is determined for a first microservice and a second set of microservice parameters is determined for a second set of microservices. The sets of microservice parameters are deployed to the microservices to implement application configuration. FIG.1is a diagram showing one example of an environment100for configuring an application that uses microservices. The environment100includes a configuration system102and an application108built with microservices110A,110B,110N. Although three microservices110A,110B,110N are shown inFIG.1, any suitable number of microservices may be used. The configuration system102may be any suitable computing device or set of computing devices including a processor or processor unit, as described herein. The application108and/or the microservices110A,110B,110N implementing the application108may also be executed at a computing device including a processor or processor unit, as described herein. The application108and/or microservices110A,110B,110N may be executed at the same computing device or may be distributed to execute at a set of different computing devices. In some examples, the application108and/or some or all of the microservices110A,110B,110N execute at the configuration system102. Microservices110A,110B,110N are in communication with respective storage locations112A,112B,112N. The microservices110A,110B,110N persist working data at the respective storage locations112A,112B,112N. The storage locations112A,112B,112N may be, for example, virtual disks of respective virtual machines executing the respective microservices110A,110B,110N. For example, storage location112A may be a virtual disk of a virtual machine that executes the microservice110A. In some examples, storage locations112A,112B,112N may be or include respective dedicated drives, partitions, or other subdivisions of data storage. For example, storage location112A may be or include a dedicated drive, partition, or other subdivision of data storage dedicated to the microservice110A. The configuration system102executes a DCDS104and may also execute a configuration API106. The DCDS104generates sets of configuration parameter values for the respective microservices110A,110B,110N. For example, the DCDS104generates a first set of configuration parameter values for the microservice110A, a second set of configuration parameter values for the microservice110B, and so on. The DCDS104generates sets of microservice configuration parameter values, in some examples, considering an application configuration model124and/or one or more service configuration models126A,126B,126N. The application configuration model124may describe the application108. The service configuration models126A,126B,126N may describe the microservices110A,110B,110N. 
For example, service configuration model126A may describe microservice110A; service configuration model126B may describe microservice110B; and so on. Service configuration models126A,126B,126N describe configuration parameter values for the respective microservices110A,110B,110N. For example, a service configuration model126A may describe configuration parameter values, such as switches and other parameter values that the microservice110A receives as input. The service configuration model126A may also describe arrays, tables, or other data objects that may be used to provide configuration parameter values to the microservice110A. The service configuration model126A may also describe constraints on microservice configuration parameter values for the microservice110A (microservice configuration constraints). Microservice configuration constraints may include, for example, variable types, a set or range of values that are permissible for configuration parameters to the microservice110A, etc. Microservice configuration constraints may also describe a set of configuration parameter values. For example, a constraint on configuration parameter values to the microservice110A may indicate that a particular value or range of values for a first microservice configuration parameter limits the allowable values for a second microservice configuration parameter. For example, a configuration parameter describing a shipping carrier for a particular transaction may be limited by another configuration parameter describing a country for the transaction. That is, a transaction in a particular country may be limited to only use shipping carriers that operate in the particular country. The service configuration models126B,126N may be arranged in a similar manner relative to respective microservices110B,110N, etc. The application configuration model124describes the application108. In some examples, the application configuration model124is an extension of a service configuration model126A,126B,126N. For example, the application configuration model124may have a structure and features similar to that of the service configuration models126A,126B,126N with additional features and/or attributes. The application configuration model124may describe a set of microservices110A,110B,110N used by the application108. For example, the application configuration model124may include an indication of the microservices110A,110B,110N used by the application108. The application configuration model124, in some examples, includes an indication of the service configuration models126A,126B,126N associated with the microservices110A,110B,110N. The application configuration model124also describes application configuration parameter values for the application. For example, the application configuration model may map application configuration parameter values to corresponding microservice configuration parameters for the various microservices110A,110B,110N. The application configuration model124, in some examples, also indicates one or more microservice configuration parameter values from the various microservices110A,110B,110N that may be directly exposed as application configuration parameter values. A microservice configuration parameter that is directly exposed as an application configuration parameter is a microservice configuration parameter that has a one-to-one correlation to the corresponding application configuration parameter. 
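To illustrate, a service configuration model of the kind described above might be represented as sketched below; the schema, parameter names, and service names are hypothetical assumptions rather than the patent's model format, and an application configuration model could extend the same structure with cross-microservice constraints.

# Hypothetical service configuration model: parameter types, allowed values, and a
# constraint tying a switch parameter to a companion parameter.
tax_service_model = {
    "microservice": "tax",
    "parameters": {
        "country_a_switch": {"type": "bool"},
        "country_a_tax_services": {"type": "list", "allowed": ["TaxServiceX", "TaxServiceY"]},
    },
    "constraints": [
        # If the switch is enabled, at least one tax service must be configured.
        {"if": ("country_a_switch", True), "require_nonempty": "country_a_tax_services"},
    ],
}

def check_constraints(model, values):
    # Return the constraints violated by a candidate set of parameter values.
    violations = []
    for constraint in model["constraints"]:
        parameter, trigger = constraint["if"]
        if values.get(parameter) == trigger and not values.get(constraint["require_nonempty"]):
            violations.append(constraint)
    return violations

For example, check_constraints(tax_service_model, {"country_a_switch": True}) would report a violation because no tax service is configured for the enabled country.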
This application configuration model124, in some examples, also describes one or more application configuration constraints. Application configuration constraints are constraints on the value or values that may be taken by particular application configuration parameter and/or corresponding microservice configuration parameter. For example, a constraint on application configuration parameter values may indicate that a particular value or range of values for a first application configuration parameter limits the allowable value or values for a second application configuration parameter. In some examples, the application configuration model124includes content, such as predetermined values for one or more application configuration parameter values and/or microservice configuration parameter values. In some examples, the application configuration model124and service configuration models126A,126B,126N are stored at a configuration model data store114including one or more data storage devices that are accessible to the DCDS104. The DCDS104may load one or more of the application configuration model124and the service configuration models126A,126B,126N. In some examples, the DCDS104derives some or all of the application configuration model124based on the service configuration models126A,126B,126N. Also, in some examples, the application configuration model124may be omitted and the DCDS104may derive sets of configuration parameter values for the microservices110A,110B,110N directly from the service configuration models126A,126B,126N. The DCDS104, in some examples, also generates and provides a configuration UI122to a user computing device116utilized by a user118. The configuration UI122includes fields for receiving application configuration parameter values. For example, application configuration parameter values may be derived from the application configuration model124and/or service configuration models126A,126B,126N as described herein. The DCDS104may also enforce application configuration constraints at the configuration UI122. For example, the user118may be prevented from entering values for application configuration parameter values that are impermissible in general or in view of other values for application configuration parameter values entered at the configuration UI122. In some examples, the DCDS104utilizes a check configuration request132to one or more of the microservices110A,110B,110N to verify application configuration parameter values entered at the configuration UI against working data in use by the various microservices110A,110B,110N, as described herein. To further illustrate, consider an example in which the application108processes order fulfillment for electronic commerce. In this example, a first microservice110A manages tax consequences of the various orders, and a second microservice110B manages shipping services for providing ordered goods to a customer. 
The first microservice110A receives microservice configuration parameter values as described by TABLE 1 and the second microservice110B receives microservice configuration parameter values described by TABLE 2:

TABLE 1 - Tax Microservice
1  Country A Switch
2  Country A Tax Service(s)
3  Country B Switch
4  Country B Tax Service(s)
5  Country C Switch
6  Country C Tax Service(s)

TABLE 2 - Shipping Microservice
7  Country A Switch
8  Country A Shipping Service(s)
9  Country B Switch
10 Country B Shipping Service(s)
11 Country C Switch
12 Country C Shipping Service(s)

To clarify the example, the microservice configuration parameter values for the tax microservice110A are labeled 1-6. The microservice configuration parameters for the shipping microservice110B are labeled 7-12. In this example, microservice configuration parameter values 1, 3, and 5 from the tax microservice and microservice configuration parameter values 7, 9, and 11 from the shipping microservice are switch parameters that indicate whether the respective microservices are configured for the indicated country. For example, a service configuration model126A for the tax microservice110A may include a constraint indicating that if parameter 1 is true, indicating that the microservice110A is configured for Country A, then a set of microservice configuration parameter values provided to the microservice110A should also include at least one value for parameter 2 indicating a tax service for Country A. The considered example and the TABLES 1 and 2 also illustrate a potential application configuration constraint. For example, the application108may be inconsistently configured if the user118provides inconsistent values for the country switches (for example, if parameter 1 is true and parameter 7 is false). Therefore, the DCDS104may implement an application configuration constraint that if a particular country switch is enabled in one microservice, then it should also be enabled in the other microservice. FIG.2shows an example screen200that may be a part of the configuration UI122in the considered example. The screen200may be displayed at the user computing device116to receive data from the user118. The screen200includes fields for Country A (field202), Country B (field204), and Country C (field206). In this example, the microservice configuration parameter "country" is a shared parameter. Instead of providing separate country switch parameter values for each microservice, the user118may provide a single country switch value for the respective countries. For example, the DCDS may map the value for Country A provided at field202to both parameter 1 and parameter 7 from TABLES 1 and 2 above. The screen200also shows tax service fields210,212,218,220,226,228. The DCDS104may map values received at these fields210,212,218,220,226,228to respective tax service parameters 2, 4, and 6 from TABLE 1 and microservice110A. Similarly, the DCDS104may map shipping service fields214,216,222,224,230,232to shipping service parameters 8, 10, and 12 from TABLE 2 and microservice110B. The DCDS may also enforce application and/or microservice configuration constraints with the screen200. For example, if the user118sets a particular country field to true, the DCDS104may decline to deploy the resulting sets of microservice configuration parameter values until or unless values for tax and shipping services are also received or determined.
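A minimal sketch of the shared-switch mapping just described, using hypothetical parameter identifiers, is shown below; it derives one consistent parameter set per microservice from a single set of application-level country values, so the tax and shipping microservices can never disagree about whether a country is enabled.

# Hypothetical mapping of shared application-level country fields to the numbered
# microservice parameters of TABLES 1 and 2.
SHARED_SWITCH_MAP = {
    "country_a": [("tax", "parameter_1"), ("shipping", "parameter_7")],
    "country_b": [("tax", "parameter_3"), ("shipping", "parameter_9")],
    "country_c": [("tax", "parameter_5"), ("shipping", "parameter_11")],
}

def map_country_switches(ui_values):
    # ui_values: e.g. {"country_a": True, "country_b": False, "country_c": False}.
    # Returns per-microservice parameter sets with consistent switch values.
    parameter_sets = {"tax": {}, "shipping": {}}
    for field, targets in SHARED_SWITCH_MAP.items():
        for microservice, parameter in targets:
            parameter_sets[microservice][parameter] = ui_values.get(field, False)
    return parameter_sets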
This example illustrates a potential conflict that could be addressed by the DCDS104. For example, if the DCDS104receives a microservice configuration parameter indicating one or more shipping services for country A, but fails to receive a corresponding microservice configuration parameter indicating a tax service or services for that country, the DCDS104may decline to deploy the sets of microservice configuration parameter values. In some examples, the screen200also demonstrates content for the application108that may be indicated by the application configuration model124. For example, the application configuration model124may describe default and/or predetermined values for one or more configuration parameter values. Default or predetermined values for one or more configuration parameter values may be reflected at the appropriate field or fields of the screen200. For example, when the screen200is displayed, one or more fields may be pre-populated with the default or predetermined values. The user118may be permitted to modify default values or may not be permitted to modify predetermined values. Although in the above example the microservices110A,110B were described as handling tax and shipping tasks, respectively, this example is for purposes of illustration. Microservices110A,110B,110N may be configured to perform other and different functions in different implementations. Referring back toFIG.1, the DCDS104may be programmed to test and/or deploy sets of microservice configuration parameter values using one or more configuration APIs106. The configuration API106is configured to provide the sets of microservice configuration parameter values to the microservices110A,110B,110N and may be configured to receive and respond to configuration check requests132and deploy requests130. A check request132from the DCDS104to the configuration API106may include sets of microservice configuration parameter values for one or more of the microservices110A,110B,110N. In response to a check request132, the configuration API106compares the received sets of microservice configuration parameter values to live working data of the various microservices110A,110B,110N (e.g., at microservice storage112A,112B,112N). For example, the DCDS104may not change the active microservice configuration parameter values at a microservice110A,110B,110N. That is, the microservices110A,110B,110N may not operate on live working data under the sets of microservice configuration parameter values provided with a check request132. The configuration API106may return to the DCDS104an indication of whether any of the sets of microservice configuration data conflicted with live working data at any of the respective microservices110A,110B,110N. A set of microservice configuration data conflicts with live working data if the set of microservice configuration data is inconsistent with the live working data. Consider an example in which an initial set of microservice configuration data includes a data unit of a particular type, and the microservice has already generated live data including objects that reference the data unit. If a new set of microservice configuration data replaces the previous data unit, then the new set of microservice configuration data may be inconsistent with the live data referencing the previous data unit. Consider another example in which the live data is part of an open workflow. A new set of microservice configuration data may be inconsistent with live data (e.g., until the open workflow is closed).
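The data-unit conflict examples above can be illustrated with a small sketch; the representation of a "data unit" and of live objects is a simplified assumption used only to show the shape of the comparison.

def conflicts_with_live_data(new_config, live_objects):
    # new_config: candidate microservice configuration data, including the data units
    # it defines. live_objects: working data already generated by the microservice,
    # each possibly referencing a data unit. Returns True if any live object references
    # a data unit that the new configuration removes or replaces.
    new_units = set(new_config.get("data_units", []))
    for obj in live_objects:
        referenced = obj.get("references")
        if referenced is not None and referenced not in new_units:
            return True  # live data points at a data unit missing from the new set
    return False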
Also, consider an example in which a data unit is removed from a set of microservice configuration data to form a revised set of microservice configuration data. The live data may be inconsistent with the revised set of microservice configuration data if the removed data unit has already been used to generate and/or use the live data. A deploy request130from the DCDS104to the configuration API106may also include a set of microservice configuration parameter values for one or more of the microservices110A,110B,110N. In response to the deploy request130, the configuration API106compares the received sets of microservice configuration parameter values to live working data of the various microservices110A,110B,110N. If no conflicts are detected at a microservice110A, then the microservice110A is locked to prevent changes to microservice110A. This is repeated at the other microservices110B,110N. If a conflict is detected at any of the microservices110A,110B,110N, then the deploy request130fails and all microservices110A,110B,110N are unlocked. (A failure message may be sent to the DCDS104.) If no conflicts are detected at any of the microservices110A,110B,110N, then the sets of microservice configuration parameter values are deployed and the microservices110A,110B,110N are unlocked. Upon unlocking, the microservices110A,110B,110N operate according to the sets of microservice configuration parameter values included with the deploy request130. In some examples, the DCDS104and/or the configuration model data store114are in communication with an application development tool120. The application development tool120may generate an application configuration model124that references various service configuration models126A,126B,126N as described. In some examples, the application development tool120may be utilized to determine data for the application configuration model124including, for example, content such as application configuration parameter values, etc. In some examples, the application development tool120may be or use the SAP Web IDE product from SAP SE of Walldorf, Germany. FIG.3is a flowchart showing one example of a process flow300that may be executed by a DCDS, such as the DCDS104, to develop and deploy sets of microservice configuration parameter values to a set of microservices of a microservice-built application. At operation302, the DCDS extracts a plurality of configuration models. The extracted configuration models may include a plurality of microservice configuration models corresponding to a plurality of microservices making up all or part of an application. In some examples, the DCDS also extracts an application configuration model. At operation304, the DCDS relates the configuration models. For example, the DCDS may synthesize the service configuration models to generate application configuration parameter values and/or application configuration constraints. In some examples, when an application configuration model is present, the application configuration model defines some or all of the application configuration parameter values, some or all of the application configuration constraints, and/or application content including values for one or more application configuration parameter values. At operation306, the DCDS generates a configuration UI. The configuration UI prompts a user to provide values for the application configuration parameter values.
In some examples, the DCDS also enforces application configuration constraints on the application configuration parameter values received through the configuration UI. For example, if the user provides a value that is inconsistent with an application configuration constraint, then the DCDS may decline to proceed with the inconsistent value and/or prevent the user from entering the value into the configuration UI. At operation308, the DCDS generates individual sets of microservice configuration parameter values. In some examples, a set of microservice configuration parameter values is generated for each microservice used by the application. The sets of microservice configuration parameter values may be derived, for example, from application configuration parameter values received through the UI, from content provided with an application configuration model, etc. At operation310, the sets of microservice configuration parameter values generated at operation308are checked against live working data at the various microservices. If the sets of microservice parameter values check at operation312, the sets of microservice configuration parameters are deployed at operation316. If not, remedial actions may be taken at operation314. Remedial actions may include prompting the user to provide new values of the application configuration parameters. In some examples, remedial actions include generating an additional temporary or permanent application configuration constraint to prevent a next set of application configuration values from failing the check. (In some examples, remediation is omitted.) If the sets of microservice configuration parameter values do check, they may be deployed at operation316. In some examples, the DCDS makes a deploy request at operation310and the configuration API or APIs may perform operations312,314, and/or316. In other examples, the DCDS performs operations310,312,314, and/or316. For example, the DCDS may make a check request to a configuration API associated with the DCDS and/or the various microservices and decide whether to deploy the parameters or remediate. In some examples, the DCDS makes a deploy request at316, which may result in the configuration API making an additional check of the sets of microservice configuration parameter values, as described herein. FIG.4is a flowchart showing one example of a process flow400that may be executed by the DCDS and a configuration API to check one or more sets of microservice configuration parameter values using a check request. The process flow400includes two columns401,403. Column401includes operations that are executed by the DCDS. Column403includes operations that are executed by the configuration API. At operation402, the DCDS sends a check request405to a configuration API. The check request405may include one or more sets of microservice configuration values. The DCDS may send the check request405on any suitable occasion in its processing. For example, the DCDS may send the check request405while in the process of determining one or more sets of microservice configuration parameter values. If the check request405fails, the DCDS may determine or obtain alternate configuration parameter values before determining complete sets of microservice configuration values for deployment. In another example, the DCDS may make the check request405after determining sets of microservice configuration parameter values and before attempting to deploy the sets. 
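Taken together, the DCDS-side sequence of process flow300might be outlined as in the sketch below. Every helper used here (the model store, UI, configuration API, and remediation step) is a hypothetical placeholder included only to show the ordering of operations302-316, not an actual interface of the DCDS104.

def run_configuration_cycle(model_store, ui, config_api):
    service_models = model_store.load_service_models()             # operation 302
    app_model = model_store.load_application_model()
    constraints = relate_models(app_model, service_models)         # operation 304
    app_values = ui.collect_application_values(constraints)        # operation 306
    parameter_sets = derive_parameter_sets(app_values, app_model)  # operation 308
    result = config_api.check(parameter_sets)                      # operations 310-312
    if result["all_ok"]:
        config_api.deploy(parameter_sets)                          # operation 316
    else:
        remediate(ui, result)                                      # operation 314, e.g. prompt for new values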
Also, althoughFIG.4shows a single configuration API handling multiple microservices, in some examples, the DCDS may correspond with multiple configuration APIs that each manage one or more microservices. At operation404, the configuration API receives the check request. At operation406, the configuration API checks the sets of microservice configuration parameter values against live working data at the microservices, for example, as described herein. At operation408, the configuration API sends a response message407. The response message407indicates the results of the check at operation406. For example, if the received set or sets of microservice configuration parameter values all check, the response message407may indicate so. If one or more sets of microservice configuration parameter values do not check, the response message407may indicate this and may also indicate a description of the reason for one or more check failures. The DCDS receives the response message407at operation410. At operation412, the DCDS reads the response message407to determine whether the set or sets of microservice configuration parameters check. If not, the DCDS optionally revises the set or sets of microservice configuration parameter values at operation416. If the set or sets of microservice configuration parameters do check, the DCDS may proceed at operation414. FIG.5is a flowchart showing one example of a process flow500that may be executed by one or more configuration APIs to execute a deploy request. At operation502, the configuration API receives the deploy request. At operations504A,504B,504N, the configuration API checks sets of microservice configuration parameters against live working data at the respective microservices. For example, at operation504A, the configuration API checks a first set of microservice configuration parameter values at a first microservice. At operation504B, the configuration API checks a second set of microservice configuration parameter values at a second microservice, and so on. At operations506A,506B,506N, the configuration API determines if the sets of microservice configuration parameters checked at operations504A,504B,504N conflict with live data. For example, at operation506A, the configuration API determines if the first set of microservice configuration parameter values at the first microservice conflicts with live data at the first microservice storage. At operation506B, the configuration API determines if the second set of microservice configuration parameter values at the second microservice conflicts with live data at the second microservice storage, and so on. If the set of microservice configuration parameters for a microservice checks at operations506A,506B,506N, the configuration API locks the corresponding microservice at operations508A,508B,508N. For example, if the first set of microservice configuration parameters checks against live data at the first microservice storage, the configuration API locks the first microservice at operation508A. If the second set of microservice configuration parameters checks against live data at the second microservice storage, the configuration API locks the second microservice, and so on. Locking a microservice includes preventing the microservice from receiving changes to its configuration parameters. In some examples, locking a microservice also includes preventing the microservice from taking further action with respect to the application108.
At operations510A,510B,510N, the configuration API registers a vote for the respective microservices. For example, if a set of configuration parameter values did not match live working data at a microservice, then the API registers a no vote. If the set of configuration parameter values does match the live working data at the microservice (and the microservice was locked at one of operations508A,508B,508N), the configuration API may register a yes vote. For example, the configuration API may register a vote for the first microservice at operation510A, register a vote for the second microservice at operation510B, and so on. At operation512, the configuration API may determine whether all of the votes from the respective microservices are yes. If not, the deploy request may fail at operation514. For example, the configuration API may optionally release any locks at the microservices that may have been implemented at operations508A,508B,508N. If all of the votes at operation512are yes, then at operation516, the configuration API may modify the configuration parameter values of the microservices, for example, by writing the respective sets of microservice configuration parameter values to the respective microservices. For example, the first set of microservice configuration parameter values is written to the first microservice; the second set of microservice configuration parameter values is written to the second microservice, and so on. At operation518, the configuration API releases the locks that were implemented at operations508A,508B,508N. Although the process flow500is described as being performed by a single configuration API, in some examples, some or all of the functions of the process flow500are performed locally at the respective microservices. In some examples, each microservice includes a configuration API that is in communication with the DCDS. Also, in some examples, a configuration API portion at the configuration system102is in communication with a plurality of configuration APIs at the respective microservices.
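The check-lock-vote-write sequence of process flow500can be summarized in the short sketch below. The microservice objects and their lock, conflict-check, and write methods are hypothetical, and a production implementation would additionally need error handling and timeouts.

def handle_deploy_request(deploy_request, microservices):
    # Sketch of process flow500: check each microservice's candidate parameters against
    # its live working data, lock it on success, and write the new parameter sets only
    # if every microservice voted yes; otherwise fail and leave configurations unchanged.
    locked = []
    votes = {}
    try:
        for name, candidate in deploy_request["sets"].items():
            microservice = microservices[name]
            if microservice.conflicts_with_live_data(candidate):
                votes[name] = "no"
            else:
                microservice.lock()          # prevent further configuration changes
                locked.append(microservice)
                votes[name] = "yes"
        if all(vote == "yes" for vote in votes.values()):
            for name, candidate in deploy_request["sets"].items():
                microservices[name].write_configuration(candidate)
            return {"deployed": True, "votes": votes}
        return {"deployed": False, "votes": votes}
    finally:
        for microservice in locked:
            microservice.unlock()            # locks are always released at the end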
FIG.6is a flowchart showing one example of a process flow600that may be executed by an application development tool, such as the application development tool120ofFIG.1. For example, a user, such as the user118, may use the application development tool to develop a microservice-built application. At operation602, the application development tool generates an application configuration model. At operation604, the application development tool receives, from the user, an indication of the microservices that will be used by the application. At operation606, the application development tool adds to the application configuration model an indication of the microservices that will be used by the application. At operation608, the application development tool may receive content for the application. Content may include, for example, predetermined values for one or more application configuration parameter values and/or microservice configuration parameter values. At operation610, the application development tool adds the content to the application configuration model. The application configuration model may be used to modify configuration parameters for one or more of the microservices, for example, as described herein. FIG.7is a diagram showing another example environment700for configuring an application that uses microservices. The environment700includes a DCDS702, which may be similar to the DCDS104and/or other DCDS configurations described herein. An application service and deploy package (ASDP)716includes a microservice-built application718and an application configuration model720. Upon deployment of the ASDP716, the ASDP716may read the application configuration model720and deploy the DCDS702. The DCDS702includes a model store726for storing application configuration models, such as720, as well as service configuration models, such as those referenced by the application configuration model720. The DCDS702also includes a content store728that may store and/or reference content included in or indicated by the application configuration model720. The DCDS702may provide a configuration UI706, as described herein, to receive application configuration parameters. The application configuration parameters may be utilized, in conjunction with the application configuration model, service configuration models, and/or content, to generate sets of microservice configuration parameter values. The sets of configuration parameter values may be provided to a deploy service708via APIs704A,704B. The deploy service708may provide sets of microservice configuration parameter values to the application718via a deploy API710. The deploy service708and/or deploy API710may perform functions similar to those of the configuration API106described herein. The deploy service708and DCDS702may be executed on a cloud platform, for example, including a platform-as-a-service (PAAS) runtime712. The PAAS runtime712may connect the DCDS702and deploy service708to underlying hardware in a cloud environment. In some examples, the PAAS runtime712is or includes a Cloud Foundry runtime, for example, available from the Cloud Foundry Foundation. In some examples, instead of or in addition to being stored at the application configuration model720, application model content724may be stored at a solution package722. The solution package722may include, for example, an application identification that references the application718. The DCDS may import content from the application model content724and use the content to configure the application718, as described herein. In some examples, the application model content724of the solution package722may be set by an administrator, or other key user, to define some or all of the application configuration parameter values for a particular implementation. For example, multiple solution packages, such as722, may be available for loading to the DCDS702at runtime.
A constraint checker810checks the application configuration parameter values, for example, against microservice configuration constraints and/or application configuration constraints. In the example ofFIG.8, instead of having a single configuration API, such as the configuration API106ofFIG.1, the DCDS802includes a configuration deployer808that operates in conjunction with a configuration API830at the application824and/or with configuration APIs, such as configuration API834A, at the various microservices832A,832N. For example, the functionality of the process flows400and500may be performed by the configuration deployer808in conjunction with the configuration APIs830,834A. In the example ofFIG.8, the application824is executed at an application computing device822, which may be any suitable computing device or combination of networked computing devices. The application824is in communication with an application UI826for interfacing with a user. In addition to the configuration API830, the application824may also comprise a service consumption module828that coordinates communication with one or more microservices832A,832N. The microservices832A,832N are executed at one or more computing devices825A,825N. Although only one configuration API834A is shown, other microservices, such as microservice832N may also include configuration APIs. FIG.9is a diagram showing another configuration of the example environment800for configuring an application824that uses microservices. InFIG.9, configuration models for the application (application configuration model831) and for the microservices (service configuration model835A) are stored at the application824and microservices, such as832A, respectively. A model handler805at the DCDS802accesses configuration models831,835A, as described herein. FIG.10is a block diagram1000showing one example of a software architecture1002for a computing device. The architecture1002may be used in conjunction with various hardware architectures, for example, as described herein.FIG.10is merely a non-limiting example of a software architecture and many other architectures may be implemented to facilitate the functionality described herein. A representative hardware layer1004is illustrated and can represent, for example, any of the above referenced computing devices. In some examples, the hardware layer1004may be implemented according to the architecture of the computer system1100ofFIG.11. The representative hardware layer1004comprises one or more processing units1006having associated executable instructions1008. Executable instructions1008represent the executable instructions of the software architecture1002, including implementation of the methods, modules, subsystems, and components, and so forth described herein and may also include memory and/or storage modules1010, which also have executable instructions1008. Hardware layer1004may also comprise other hardware as indicated by other hardware1012which represents any other hardware of the hardware layer1004, such as the other hardware illustrated as part of computer system1100. In the example architecture ofFIG.10, the software architecture1002may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1002may include layers such as an operating system1014, libraries1016, frameworks/middleware1018, applications1020, and presentation layer1044. 
Operationally, the applications1020and/or other components within the layers may invoke API calls1024through the software stack and access a response, returned values, and so forth illustrated as messages1026in response to the API calls1024. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide a frameworks/middleware layer1018, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system1014may manage hardware resources and provide common services. The operating system1014may include, for example, a kernel1028, services1030, and drivers1032. The kernel1028may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1028may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1030may provide other common services for the other software layers. In some examples, the services1030include an interrupt service. The interrupt service may detect the receipt of an interrupt and, in response, cause the architecture1002to pause its current processing and execute an interrupt service routine (ISR) when an interrupt is accessed. The drivers1032may be responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1032may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, NFC drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries1016may provide a common infrastructure that may be utilized by the applications1020and/or other components and/or layers. The libraries1016typically provide functionality that allows other software modules to perform tasks in an easier fashion than to interface directly with the underlying operating system1014functionality (e.g., kernel1028, services1030and/or drivers1032). The libraries1016may include system1034libraries (e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1016may include API libraries1036such as media libraries (e.g., libraries to support presentation and manipulation of various media format such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D in a graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries1016may also include a wide variety of other libraries1038to provide many other APIs to the applications1020and other software components/modules. In some examples, libraries1038may provide one or more APIs serviced by a message oriented middleware. The frameworks1018(also sometimes referred to as middleware) may provide a higher-level common infrastructure that may be utilized by the applications1020and/or other software components/modules. For example, the frameworks1018may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. 
The frameworks1018may provide a broad spectrum of other APIs that may be utilized by the applications1020and/or other software components/modules, some of which may be specific to a particular operating system or platform. The applications1020includes built-in applications1040and/or third party applications1042. Examples of representative built-in applications1040may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third party applications1042may include any of the built in applications as well as a broad assortment of other applications. In a specific example, the third party application1042(e.g., an application developed using the Android™ or iOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as iOS™, Android™, Windows® Phone, or other mobile computing device operating systems. In this example, the third party application1042may invoke the API calls1024provided by the mobile operating system such as operating system1014to facilitate functionality described herein. The applications1020may utilize built in operating system functions (e.g., kernel1028, services1030and/or drivers1032), libraries (e.g., system1034, APIs1036, and other libraries1038), frameworks/middleware1018to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as presentation layer1044. In these systems, the application/module “logic” can be separated from the aspects of the application/module that interact with a user. Some software architectures utilize virtual machines. In the example ofFIG.10, this is illustrated by virtual machine1048. A virtual machine creates a software environment where applications/modules can execute as if they were executing on a hardware computing device. A virtual machine is hosted by a host operating system (operating system1014) and typically, although not always, has a virtual machine monitor1046, which manages the operation of the virtual machine as well as the interface with the host operating system (i.e., operating system1014). A software architecture executes within the virtual machine such as an operating system1050, libraries1052, frameworks/middleware1054, applications1056and/or presentation layer1058. These layers of software architecture executing within the virtual machine1048can be the same as corresponding layers previously described or may be different. Modules, Components and Logic Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied (1) on a non-transitory machine-readable medium or (2) in a transmission signal) or hardware-implemented modules. A hardware-implemented module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client, or server computer system) or one or more processors may be configured by software (e.g., an application or application portion) as a hardware-implemented module that operates to perform certain operations as described herein. 
In various embodiments, a hardware-implemented module may be implemented mechanically or electronically. For example, a hardware-implemented module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware-implemented module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or another programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware-implemented module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware-implemented module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily or transitorily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware-implemented modules are temporarily configured (e.g., programmed), each of the hardware-implemented modules need not be configured or instantiated at any one instance in time. For example, where the hardware-implemented modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware-implemented modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware-implemented module at one instance of time and to constitute a different hardware-implemented module at a different instance of time. Hardware-implemented modules can provide information to, and receive information from, other hardware-implemented modules. Accordingly, the described hardware-implemented modules may be regarded as being communicatively coupled. Where multiple of such hardware-implemented modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses that connect the hardware-implemented modules). In embodiments in which multiple hardware-implemented modules are configured or instantiated at different times, communications between such hardware-implemented modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware-implemented modules have access. For example, one hardware-implemented module may perform an operation, and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware-implemented module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware-implemented modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. 
Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across a number of locations. The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., APIs). Electronic Apparatus and System Example embodiments may be implemented in digital electronic circuitry, or in computer hardware, firmware, or software, or in combinations of them. Example embodiments may be implemented using a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable medium for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. In example embodiments, operations may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method operations can also be performed by, and apparatus of example embodiments may be implemented as, special purpose logic circuitry, e.g., an FPGA or an ASIC. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In embodiments deploying a programmable computing system, it will be appreciated that both hardware and software architectures merit consideration. Specifically, it will be appreciated that the choice of whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or in a combination of permanently and temporarily configured hardware may be a design choice. 
Below are set out hardware (e.g., machine) and software architectures that may be deployed, in various example embodiments. EXAMPLES Example 1 is a system for configuring an application that uses a plurality of microservices, the system comprising: at least one processor unit; and a machine-readable medium in communication with the at least one processor, the machine-readable medium comprising instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising: generating, by a distributed configuration deploy service executed at the at least one processor unit, a first set of microservice configuration parameter values for a first microservice of the plurality of microservices based at least in part on a first microservice configuration model for the first microservice and at least in part on a second configuration model for a second microservice of the plurality of microservices; determining, by a configuration application programming interface (API) executed at the at least one processor unit, that the first set of microservice configuration parameter values do not conflict with first live data at the first microservice; locking, by the configuration API, the first microservice; applying, by the configuration API, the first set of microservice configuration parameter values, to the first microservice; and releasing, by the configuration API, the locking of the first microservice. In Example 2, the subject matter of Example 1 optionally includes wherein the machine-readable medium further comprises instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising: sending, by the distributed configuration deploy service and to a configuration API, a check request comprising a second set of microservice configuration parameter values for the second microservice; and receiving, by the distributed configuration deploy service and from the configuration API, an indication that the second set of microservice configuration parameter values do not conflict with second live data at the second microservice. In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes wherein the machine-readable medium further comprises instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising: generating, by the distributed configuration deploy service, a second set of microservice configuration parameter values for the second microservice based at least in part on the first microservice configuration model and the second microservice configuration model; before applying the first set of microservice configuration parameter values to the first microservice, determining, by the configuration API, that the second set of microservice configuration parameter values do not conflict with second live data at the second microservice; locking, by the configuration API, the second microservice; applying, by the configuration API, the second set of microservice configuration parameter values at the second microservice; and releasing, by the configuration API, the locking of the second microservice. 
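By way of illustration only, the check, lock, apply, and release flow recited in Example 1 may be sketched in Python roughly as follows. The class and function names (ConfigurationAPI, check_conflict, lock, apply, release, deploy) and the dictionary-based representation of live data are hypothetical stand-ins introduced for this sketch and are not part of the examples themselves.

    # Minimal sketch of the Example 1 flow; all names are illustrative only.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Microservice:
        name: str
        live_data: Dict[str, str] = field(default_factory=dict)
        config: Dict[str, str] = field(default_factory=dict)
        locked: bool = False

    class ConfigurationAPI:
        """Hypothetical configuration API that checks, locks, applies, and releases."""
        def check_conflict(self, svc: Microservice, values: Dict[str, str]) -> bool:
            # A value conflicts if live data already pins the same key to another value.
            return any(svc.live_data.get(k) not in (None, v) for k, v in values.items())

        def lock(self, svc: Microservice) -> None:
            svc.locked = True

        def apply(self, svc: Microservice, values: Dict[str, str]) -> None:
            assert svc.locked, "microservice must be locked before applying values"
            svc.config.update(values)

        def release(self, svc: Microservice) -> None:
            svc.locked = False

    def deploy(api: ConfigurationAPI, svc: Microservice, values: Dict[str, str]) -> bool:
        """Check, then lock, apply, and release, as recited in Example 1."""
        if api.check_conflict(svc, values):
            return False          # do not apply conflicting values
        api.lock(svc)
        try:
            api.apply(svc, values)
        finally:
            api.release(svc)      # always release the lock
        return True

In this sketch the lock is released in a finally block so that a failed apply does not leave the microservice locked; that choice is illustrative rather than required by the examples.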
In Example 4, the subject matter of any one or more of Examples 1-3 optionally includes wherein the machine-readable medium further comprises instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising: accessing, by the distributed configuration deploy service, a first microservice configuration model for the first microservice; accessing, by the distributed configuration deploy service, a second microservice configuration model for the second microservice; generating, by the distributed configuration deploy service, a configuration user interface (UI) based at least in part on the first microservice configuration model and the second microservice configuration model; and receiving, by the distributed configuration deploy service, through the configuration UI, a plurality of configuration parameter values, wherein the generating of the first set of configuration parameter values is also based at least in part on at least a portion of the plurality of configuration parameter values. In Example 5, the subject matter of Example 4 optionally include wherein the machine-readable medium further comprises instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising accessing, by the distributed configuration deploy service, an application configuration model comprising an indication of the first microservice configuration model and an indication of the second microservice configuration model. In Example 6, the subject matter of any one or more of Examples 1-5 optionally includes wherein the generating of the first set of microservice configuration parameter values is also based at least in part on an application configuration model, wherein the application configuration model comprises an indication of the plurality of microservices. In Example 7, the subject matter of any one or more of Examples 1-6 optionally includes wherein the machine-readable medium further comprises instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising: generating, by the distributed configuration deploy service, a configuration user interface (UI), the configuration UI comprising: a first field to receive a first configuration parameter value for the first microservice; a second field to receive a second configuration parameter value for the second microservice; and a third field to receive a shared configuration parameter value common to the first microservice and the second microservice, wherein the first set of microservice configuration parameter values is based at least in part on the first configuration parameter value and the shared configuration parameter value. In Example 8, the subject matter of Example 7 optionally include wherein the machine-readable medium further comprises instructions stored thereon that are executable by the at least one processor unit to cause the system to perform operations comprising determining, by the distributed configuration deploy service, that the first configuration parameter value does not conflict with the second configuration parameter value. 
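By way of further illustration, the configuration user interface and shared-parameter handling of Examples 4 through 8 may be sketched as follows. The model layout, the field naming scheme, and the port-based conflict rule are assumptions made only for this sketch.

    # Illustrative sketch only: a configuration UI derived from two microservice
    # configuration models with one shared parameter, plus a simple conflict check.
    first_model = {"service": "orders", "params": ["port"], "shared": ["region"]}
    second_model = {"service": "billing", "params": ["port"], "shared": ["region"]}

    def ui_fields(*models):
        # One field per service-specific parameter plus one field per shared parameter.
        fields = [f"{m['service']}.{p}" for m in models for p in m["params"]]
        shared = sorted({s for m in models for s in m["shared"]})
        return fields + shared

    def build_value_sets(values):
        # The shared value is copied into each microservice's parameter value set.
        first = {"port": values["orders.port"], "region": values["region"]}
        second = {"port": values["billing.port"], "region": values["region"]}
        return first, second

    def conflicts(first, second):
        return first["port"] == second["port"]   # hypothetical rule: ports must differ

    values = {"orders.port": "8080", "billing.port": "9090", "region": "eu-west-1"}
    first_set, second_set = build_value_sets(values)
    assert ui_fields(first_model, second_model) == ["orders.port", "billing.port", "region"]
    assert not conflicts(first_set, second_set)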
Example 9 is a method for configuring an application that uses a plurality of microservices, the method comprising: generating, by a distributed configuration deploy service executed at a computing system, a first set of microservice configuration parameter values for a first microservice of the plurality of microservices based at least in part on a first microservice configuration model for the first microservice and at least in part on a second configuration model for a second microservice of the plurality of microservices; determining, by a configuration application programming interface (API) executed at the computing system, that the first set of microservice configuration parameter values do not conflict with first live data at the first microservice; locking, by the configuration API, the first microservice; applying, by the configuration API, the first set of microservice configuration parameter values, to the first microservice; and releasing, by the configuration API, the locking of the first microservice. In Example 10, the subject matter of Example 9 optionally include sending, by the distributed configuration deploy service and to a configuration API, a check request comprising a second set of microservice configuration parameter values for the second microservice; and receiving, by the distributed configuration deploy service and from the configuration API, an indication that the second set of microservice configuration parameter values do not conflict with second live data at the second microservice. In Example 11, the subject matter of any one or more of Examples 9-10 optionally includes generating, by the distributed configuration deploy service, a second set of microservice configuration parameter values for the second microservice based at least in part on the first microservice configuration model and the second microservice configuration model; before applying the first set of microservice configuration parameter values to the first microservice, determining, by the configuration API, that the second set of microservice configuration parameter values do not conflict with second live data at the second microservice; locking, by the configuration API, the second microservice; applying, by the configuration API, the second set of microservice configuration parameter values at the second microservice; and releasing, by the configuration API, the locking of the second microservice. In Example 12, the subject matter of any one or more of Examples 9-11 optionally includes accessing, by the distributed configuration deploy service, a first microservice configuration model for the first microservice; accessing, by the distributed configuration deploy service, a second microservice configuration model for the second microservice; generating, by the distributed configuration deploy service, a configuration user interface (UI) based at least in part on the first microservice configuration model and the second microservice configuration model; and receiving, by the distributed configuration deploy service, through the configuration UI, a plurality of configuration parameter values, wherein the generating of the first set of configuration parameter values is also based at least in part on at least a portion of the plurality of configuration parameter values. 
In Example 13, the subject matter of Example 12 optionally include accessing, by the distributed configuration deploy service, an application configuration model comprising an indication of the first microservice configuration model and an indication of the second microservice configuration model. In Example 14, the subject matter of any one or more of Examples 9-13 optionally includes wherein the generating of the first set of microservice configuration parameter values is also based at least in part on an application configuration model, wherein the application configuration model comprises an indication of the plurality of microservices. In Example 15, the subject matter of any one or more of Examples 9-14 optionally includes generating, by the distributed configuration deploy service, a configuration user interface (UI), the configuration UI comprising: a first field to receive a first configuration parameter value for the first microservice; a second field to receive a second configuration parameter value for the second microservice; and a third field to receive a shared configuration parameter value common to the first microservice and the second microservice, wherein the first set of microservice configuration parameter values is based at least in part on the first configuration parameter value and the shared configuration parameter value. In Example 16, the subject matter of Example 15 optionally includes determining, by the distributed configuration deploy service, that the first configuration parameter value does not conflict with the second configuration parameter value. Example 17 is a machine-readable medium comprising instructions thereon that, when executed by at least one processor unit, causes the at least one processor unit to execute operations comprising: generating, by a distributed configuration deploy service, a first set of microservice configuration parameter values for a first microservice of a plurality of microservices for an application based at least in part on a first microservice configuration model for the first microservice and at least in part on a second configuration model for a second microservice of the plurality of microservices; determining, by a configuration application programming interface (API), that the first set of microservice configuration parameter values do not conflict with first live data at the first microservice; locking, by the configuration API, the first microservice; applying, by the configuration API, the first set of microservice configuration parameter values, to the first microservice; and releasing, by the configuration API, the locking of the first microservice. In Example 18, the subject matter of Example 17 optionally includes instructions thereon that, when executed by the at least one processor unit, causes the at least one processor unit to perform operations comprising: sending, by the distributed configuration deploy service and to a configuration API, a check request comprising a second set of microservice configuration parameter values for the second microservice; and receiving, by the distributed configuration deploy service and from the configuration API, an indication that the second set of microservice configuration parameter values do not conflict with second live data at the second microservice. 
In Example 19, the subject matter of any one or more of Examples 17-18 optionally includes instructions thereon that, when executed by the at least one processor unit, causes the at least one processor unit to perform operations comprising: generating, by the distributed configuration deploy service, a second set of microservice configuration parameter values for the second microservice based at least in part on the first microservice configuration model and the second microservice configuration model; before applying the first set of microservice configuration parameter values to the first microservice, determining, by the configuration API, that the second set of microservice configuration parameter values do not conflict with second live data at the second microservice; locking, by the configuration API, the second microservice; applying, by the configuration API, the second set of microservice configuration parameter values at the second microservice; and releasing, by the configuration API, the locking of the second microservice. In Example 20, the subject matter of any one or more of Examples 17-19 optionally includes instructions thereon that, when executed by the at least one processor unit, causes the at least one processor unit to perform operations comprising: accessing, by the distributed configuration deploy service, a first microservice configuration model for the first microservice; accessing, by the distributed configuration deploy service, a second microservice configuration model for the second microservice; generating, by the distributed configuration deploy service, a configuration user interface (UI) based at least in part on the first microservice configuration model and the second microservice configuration model; and receiving, by the distributed configuration deploy service, through the configuration UI a plurality of configuration parameter values, wherein the generating of the first set of configuration parameter values is also based at least in part on at least a portion of the plurality of configuration parameter values. Example Machine Architecture and Machine-Readable Medium FIG.11is a block diagram of a machine in the example form of a computer system1100within which instructions1124may be executed for causing the machine to perform any one or more of the methodologies discussed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a network router, switch, or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system1100includes a processor1102(e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory1104, and a static memory1106, which communicate with each other via a bus1108. 
The computer system1100may further include a video display unit1110(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system1100also includes an alphanumeric input device1112(e.g., a keyboard or a touch-sensitive display screen), a UI navigation (or cursor control) device1114(e.g., a mouse), a disk drive unit1116, a signal generation device1118(e.g., a speaker), and a network interface device1120. Machine-Readable Medium The disk drive unit1116includes a machine-readable medium1122on which is stored one or more sets of data structures and instructions1124(e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions1124may also reside, completely or at least partially, within the main memory1104and/or within the processor1102during execution thereof by the computer system1100, with the main memory1104and the processor1102also constituting machine-readable media1122. While the machine-readable medium1122is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions1124or data structures. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions1124for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions1124. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media1122include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. Transmission Medium The instructions1124may further be transmitted or received over a communications network1126using a transmission medium. The instructions1124may be transmitted using the network interface device1120and any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., WiFi and WiMax networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions1124for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. 
The accompanying drawings that form a part hereof show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
DETAILED DESCRIPTION Described herein are methods and systems for editing a configuration file during execution of a configuration process that is controlled by the instructions in the configuration file. A configuration process can set or modify parameters that control various operational aspects of the respective hardware, software, applications, operating systems, and/or virtual machines. Configuration processes may be employed to establish, manage, and modify the configurations and their respective parameters of a computer system. A configuration manager may be employed to establish and modify configurations and configuration parameters of one or more computer systems. A computer system herein shall refer to a system comprising one or more processors, one or more memory devices, and one or more input/output (I/O) interfaces. In one illustrative example, a computer system may be provided by a server running an operating system and multiple applications. Each of those applications, their underlying operating systems, virtual machines, and hardware, may have various configurations and configuration parameters designed to control the operational aspects of the respective application, virtual machines, operating systems, and hardware. Examples of configuration parameters may include hardware configuration parameters, operating system configuration parameters, virtual machine configuration parameters, and application configuration parameters. A configuration manager may use a configuration file to establish, manage, and change configurations and configuration parameters on one or more computer systems. Such a configuration file may contain instructions for performing a configuration process that sets or modifies configuration parameters on one or more computer systems. The configuration file can control various types of configuration processes that can implement the setting or modification of configurations and configuration parameters on multiple computer systems. Setting or modifying a configuration or a configuration parameter for an element (e.g., application, virtual machine, operating systems, hardware, etc.) of a computer system may be referred to herein as a “configuration action”. For example, configuration actions may be performed when one or more servers in a data center need to be synchronized with the configuration state of another server for a data center-wide synchronization. In another example, configuration actions may be performed during an upgrade of computer systems or one or more particular applications where several configuration changes are made to a subset of computer systems selected from a group of computer systems within an enterprise setting. In another example, configuration actions may include various configuration modifications being performed, while some configurations are reverted back, on one or more computer systems to troubleshoot a particular issue, or to find the best solution to a problem. Thus, executing a configuration process referenced in or controlled by the configuration file may involve performing multiple tasks that constitute the configuration process. In some situations, the configuration process may be time-consuming and may require large amounts of computing resources as the configuration process is executed and its constituent tasks are performed on multiple computer systems. In many cases it may be determined that a change should be made to a configuration process before the execution of the configuration process is complete. 
For example, during the execution of a configuration process it may be determined that an additional task should be performed, that one of the tasks should not be performed, or that certain configuration parameters in a task within the configuration file were input incorrectly. In this situation, if the configuration process is permitted to be fully executed, the result would include undesirable configurations having been made on the target computer systems which were being configured by the configuration process. Some approaches to making changes to such a configuration process entail completing the configuration process, editing the configuration file, and then executing the configuration process again based on the edited configuration file. Other approaches involve completely stopping the configuration process, at whatever point that it currently is, and editing the configuration file so that the configuration process can be re-run based on the edits. However, in both of these situations, time, energy, and computing resources are wasted because configurations and tasks that were performed as they should have been with proper configuration parameters are performed again when the edited configuration process is executed anew. Accordingly, such approaches not only take more time and delay the setting or modification of configurations but also increase load and strain on computing resources that are made unavailable for other processes. The detrimental consequences of such approaches are especially evident in cases when most of the configuration process has been completed at the time that it is determined that a change should be made to the configuration process. Such cases entail that a large portion (e.g., large number of tasks) of the configuration process needs to be re-executed after the modifications are made. Aspects of the disclosure address the above and other deficiencies by providing technology that enables a configuration process to be edited in the middle of its execution (i.e., enables a live edit of a configuration process execution). In accordance with one or more implementations of the present disclosure, a configuration process can be paused so that the configuration file that contains the instructions for performing the configuration process can be edited without stopping or restarting the configuration process. After the edits are made, the configuration process can be resumed from the point at which it was paused using the edited configuration file. Accordingly, the configuration process can be completed according to the modifications made to the configuration file without needing to be re-executed from the beginning. In some implementations, the configuration manager (CM) can be a configuration management application that uses configuration files to execute configuration processes on one or more target computer systems. The CM can be executed on a computer system that manages configurations and may be referred to herein as a control machine. The control machine may use the CM to perform configuration actions on one or more physical or virtual computer systems in accordance with the instructions of the configuration file. The configuration file can include instructions for performing a variety of configuration actions which can be performed in sequence or in parallel (i.e., simultaneously) on one or more computer systems. The instructions of the configuration file may be divided into tasks that are grouped into task groups. 
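By way of illustration only, a configuration file of this kind may be represented as task groups that each target a set of computer systems and contain an ordered list of tasks. The Python representation below, including the Task and TaskGroup names and their fields, is a hypothetical sketch rather than a required format.

    # Illustrative representation of a configuration file as task groups of tasks.
    # The field names (hosts, action, params) are assumptions made for this sketch.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Task:
        name: str
        action: str                      # the configuration action to perform
        params: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class TaskGroup:                     # e.g., a "play" targeting a set of systems
        hosts: List[str]
        tasks: List[Task]

    config_file = [
        TaskGroup(hosts=["web01", "web02"],
                  tasks=[Task("set timezone", "os.timezone", {"tz": "UTC"}),
                         Task("install app", "pkg.install", {"name": "httpd"})]),
        TaskGroup(hosts=["db01"],
                  tasks=[Task("tune memory", "db.param", {"buffer_pool": "8G"})]),
    ]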
Accordingly, the tasks contain the instructions for configuration actions that comprise the configuration process. In some implementations, the configuration file may be a .yaml playbook file with task groups represented by plays containing one or more tasks. Each play (i.e., task group) may contain instructions for configuration actions that are to be performed on a particular set of target computer systems. Accordingly, the CM may execute different plays (i.e., perform the configuration actions corresponding to the respective tasks within each play) on different sets of target computer systems. Thus, to initiate the execution of a configuration process, a configuration file (e.g., a playbook) can be loaded into the CM so that it can begin performing the configuration actions of the tasks. The CM can execute the tasks sequentially or in parallel on one or more computer systems in accordance with the task groups (e.g., plays), each of which can correspond to a particular set of computer systems. In some implementations, the CM can be operated by a graphical user interface (GUI) while in others it can be operated by a command line interface (CLI) on the control machine. Each interface enables the user to send commands that are received by the CM and to receive messages and other data from the CM to be viewed. Thus, in some implementations, if it is determined that the configuration process should be modified (i.e., that a change is to be made to the configuration file), an edit initiation command (e.g., a live-edit command) can be entered using the GUI or CLI of the CM to edit the configuration file. Upon receipt of the command, the CM may pause the execution of the configuration process at the point that the command was received. However, if the CM was in the middle of executing a task when the command was received, it may complete the execution of the task and pause the execution of the configuration process immediately thereafter without beginning to execute the next task in the process. In some implementations, the CM can display a message on the GUI or CLI where the configuration file was being executed and indicate that the execution of the configuration process is paused and that the configuration file is open for editing. The CM can continuously record the state of the configuration process (e.g., progress state of the execution of the configuration process). Accordingly, the CM can record the progress state at the time that the execution was paused as data or metadata associated with the configuration file in a data store. In some implementations, the record of the progress state can be made within the configuration file. The record of the progress state can include an indication of all of the tasks that have been completed, an indication of the last task that has been completed, and/or an indication of the next task that was to be executed at the time that the execution of the configuration process was paused. In some implementations, the configuration process can be changed by modifying the configuration file. Thus, if the configuration process is running and its progress is being monitored in one user interface of the CM, the configuration file can be opened to be edited in another interface of the CM. Editing the configuration file can include adding tasks, removing tasks, changing the order of the tasks, modifying one or more tasks, as well as changing the task groupings. 
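The pause-on-edit behavior described above may be illustrated, under assumptions, by the following Python sketch: an in-flight task is always allowed to finish, the progress state is updated after every task, and execution pauses before the next task once a live-edit command has been received. The ConfigurationManager class, its field names, and the use of a threading.Event as the edit-initiation flag are illustrative choices, not requirements of the disclosure.

    # Sketch of pausing between tasks when a live-edit command arrives.
    # ConfigurationManager, edit_requested, and run_task are illustrative names.
    import threading

    class ConfigurationManager:
        def __init__(self, tasks):
            self.tasks = list(tasks)
            self.edit_requested = threading.Event()   # set by the GUI/CLI live-edit command
            self.progress = {"completed": [], "next_index": 0}

        def run_task(self, task):
            print(f"executing: {task}")               # stand-in for a configuration action

        def execute(self):
            for i, task in enumerate(self.tasks[self.progress["next_index"]:],
                                     start=self.progress["next_index"]):
                self.run_task(task)                   # an in-flight task is always finished
                self.progress["completed"].append(task)
                self.progress["next_index"] = i + 1
                if self.edit_requested.is_set():      # pause before starting the next task
                    print("paused; configuration file open for editing")
                    return "paused"
            return "complete"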
For example, in some implementations, editing the configuration file can include making changes to the tasks in one or more plays of a playbook. Upon completing the modifications, an edit completion command (e.g., :wq, shift+zz) can be entered using the GUI or CLI of the CM to end the editing of the configuration file. In some implementations, upon receipt of the edit completion command, the CM can check the configuration file to determine whether any modifications were made and determine whether any changes need to be implemented in the configuration process. For example, the content of the configuration file after editing can be compared with the content of the file immediately prior to the initiation of editing. The CM can also check whether as a result of the modifications to the configuration file any rule violations were introduced (e.g., spelling or syntax errors). If violations (i.e., errors) are found, the CM can display a message via the GUI or CLI concerning the violation (e.g., a message indicating that an error exists or prompting a user to make corrections). In some implementations, the CM will not permit the editing of the configuration file to be completed until the violations are remedied. When the editing is completed, the modified content of the configuration file can be saved in a data store and/or loaded into the CM. The modified content of the configuration file can replace original content of the configuration file that the CM was using to execute the configuration process. Accordingly, the modified content of the configuration file can be used by the CM to resume the execution of the configuration process. The CM can reference the record of the state (e.g., progress state of the execution of the configuration process) to resume the execution of the configuration process from the point at which it was paused. For example, the CM can begin executing the task that follows the last task that was completed according to the record of the state. The CM can then complete the execution of the remaining tasks in the configuration process in accordance with the modified content of the configuration file. In this manner, a configuration process can be modified while it is being executed without needing to restart the execution of the entire configuration process. The various implementations of the present disclosure enable modifications to be made to constituent tasks of a configuration file that controls the execution of the configuration process while removing the need to re-execute the tasks that have already been completed. These implementations can reduce the time needed to implement various configuration processes and reduce the total amount of computing resources used to execute configuration processes that need to be edited after their execution has already been started. Accordingly, this enables more flexibility in designing the configuration processes and respective configuration files since it becomes possible to edit them without needing to re-run the entire process. Overall, the implementations of the present disclosure can reduce computing resources consumed by resource-intensive processes, improving performance and accuracy of computing systems. These and other features of the implementations can be better understood with reference toFIGS.1-6described below. To provide context for the various implementations and the determination of a distributed system topology for application component deployment, an example network architecture is initially described below. 
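The edit-completion handling described above may be sketched, continuing the illustrative ConfigurationManager above, as follows. The comparison of pre-edit and post-edit content, the use of PyYAML's safe_load as an example syntax check for a .yaml playbook, and the parse_tasks helper are assumptions made for this sketch only.

    # Sketch of handling an edit completion command: compare, validate, then resume.
    import yaml

    def complete_edit(cm, original_text, edited_text):
        if edited_text == original_text:
            return "no changes"                        # nothing needs to be implemented
        try:
            yaml.safe_load(edited_text)                # example rule/syntax violation check
        except yaml.YAMLError as err:
            return f"error, keep editing: {err}"       # editing is not permitted to complete
        cm.tasks = parse_tasks(edited_text)            # replace the original content
        cm.edit_requested.clear()
        return cm.execute()                            # resume from the recorded progress state

    def parse_tasks(text):
        # Hypothetical parser: flatten plays into a flat task list for the sketch above.
        return [t["name"] for play in yaml.safe_load(text) for t in play.get("tasks", [])]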
FIG.1illustrates a computer system100in which embodiments may operate. It should be noted that other architectures for computer system100(also referred to herein as system100) are possible, and that the implementation of a computer system utilizing embodiments of the disclosure is not necessarily limited to the specific architecture depicted byFIG.1. Terms such as “machine,” “device,” “computer,” “computer system,” and “computing system” may be used interchangeably and synonymously throughout this document. Computer system100can include a single computer system or multiple computer systems arranged in a heterogeneous or homogeneous group (e.g., cluster). Computer system100can include one or more computer systems, such as computer systems100A,100B,100C, through100N in accordance with one or more implementations of the present disclosure. Computer system100may include computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to enable editing a configuration process during its execution through the embodiments discussed herein. Computer system100may include one or more virtual machines and one or more hypervisors. In some implementations, the computer systems100A-N can be located in a single physical location (e.g., data center) or can be distributed across multiple different locations (e.g., in different offices). Each of the computer systems100A-N may include communicably interconnected hardware components140such as a central processing unit (CPU)141, memory143, and input/output interfaces (I/O)145. These hardware components140may provide hardware functionality for performing computing tasks. Computer system100may include additional or different components, devices, and/or connectors in some implementations of the disclosure. One or more processors may be embodied as central processing unit (CPU)141, which can be a micro-processor, digital signal processor (DSP), or other processing component. CPU141may process various received data and may carry out the code or instructions of one or more computer programs, for example, to provide input/output operations specified by the instructions. CPU141may include one or more processors that are capable of executing the computing tasks described in the present disclosure. CPU141may be a single core processor that is capable of executing one instruction at a time (e.g., single pipeline of instructions) or may be a multi-core processor that simultaneously executes multiple instructions. The instructions may encode arithmetic, logical, or I/O operations (e.g., edit initiation and edit completion commands received via I/O145). In one example, CPU141may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). 
Memory143may include volatile memory devices (e.g., random access memory (RAM)), non-volatile memory devices (e.g., flash memory, ROM (read-only memory), EEPROM (electrically erasable programmable read-only memory), and/or other types of memory devices), and a storage device (e.g., a magnetic hard disk, a Universal Serial Bus [USB] solid state drive, a Redundant Array of Independent Disks [RAID] system, a network attached storage [NAS] array, etc.). I/O145may be and/or include a device capable of providing an interface between a processor and an external device capable of inputting and/or outputting binary data. In some implementations, I/O145also includes the GUI and/or the CLI of the CM. Computer system100may include one or more repositories160. Repository160may be a persistent storage that is capable of storing configuration files164, tasks166A-166N, etc. Repository160may be hosted by one or more storage devices, such as main memory, magnetic or optical storage-based disks, tapes or hard drives, NAS, SAN, and so forth. Although depicted as separate from the computer systems100A-100N, in an implementation, the one or more repositories160may be part of one or more of the computer systems100A-100N. For example, repository160may be a data store of memory143. In some implementations, repository160may be a network-attached file server, while in other embodiments repository160may be some other type of persistent storage such as an object-oriented database, a relational database, a non-relational database, and so forth, that may be hosted by a server machine or one or more different machines coupled to the repository160via the network150. In some implementations, computer system100may be connected to a network150, which may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof. Computer systems100A through100N may be interconnected via network150. CPU141may execute an operating system (OS)120, as well as sub-programs and/or applications of OS120to provide various features and embodiments described herein. OS120may support one or more applications110residing on the computer system100. Applications110may include, for example, user application processes, virtual machines, containers, and the like. Applications110may receive data through network150. OS120may provide an abstraction layer for the resources (especially CPU141, memory143and I/O devices145) that applications110may control to perform their function. OS120can make these resources available to applications110through inter-process communication mechanisms and system calls. There may be various configurations and respective configuration parameters associated with applications110, OS120, as well as with computer systems100A-100N and their respective hardware and software components (e.g., applications, operating systems, virtual machines, hardware devices). Implementations of the disclosure provide for editing of a configuration process that is being executed. In one example, OS120may include a configuration manager130(e.g., Ansible®, Puppet®, Chef®, SaltStack®, etc.). Configuration manager130may be included in one or more of the computer systems100A-100N. 
For example, computer system100A may include configuration manager130and may be designated as the control machine. In one example, configuration manager130may only be included in the control machine and not on the other computer systems (e.g., target computer systems). In some implementations, configuration manager130(e.g., Ansible®) may have permission to connect (e.g., SSH access), read, and/or execute tasks on any machine or computer system. In some implementations, CM130can have such permissions only for other machines in the system100(e.g.,100B-100N) that are not designated as control machines. In other implementations, configuration manager130(e.g., Puppet®) may be included in the control machine100A (e.g., “master”) and also in the target machines100B-100N (e.g., “agent”) for a master-agent communication. In some implementations, CM130can load configuration file134that can contain instructions for performing a configuration process on one or more computer systems100A-100N. The configuration file134can include one or more tasks136A-136N that are to be performed as part of the configuration process and may each correspond to one or more configuration actions. Various versions and copies of configuration files163may be saved and stored in repository160as well as loaded for use by CM130. In some implementations, CM130can receive a command and initiate a configuration process using configuration file134. The configuration file134can include task groups, each of which includes a set (e.g., a sequence) of tasks that make up the configuration process. For example, the configuration file134can include instructions for the performance of a sequence of configuration process tasks136on one or more computer systems (e.g., each remote computer system100B-100N). During the execution of the configuration process, CM130can receive (e.g., via I/O145) a command (e.g., an edit initiation command) instructing the CM130running on CPU141to edit the configuration file134. In response to receiving the command (e.g., edit initiation command), the CM130can pause the execution of the configuration process. In some implementations, the CM can pause the configuration process at the point in the process at which the command was received. In other implementations, upon receipt of the command, before pausing the configuration process, the CM130can complete executing the task (e.g., a task of a play in a playbook) that was being executed when the command was received. In these implementations, the CM130can pause the execution of the configuration process after completing whichever task was being executed when it received the command. In some implementations, CM130can continuously monitor and record the state of the configuration process (e.g., progress state of the execution of the configuration process). In other implementations, CM130can record a state of the configuration process (e.g., progress state of the execution of the configuration process) when the execution of the configuration process is paused. The state can be recorded in the configuration file, in another file, or as metadata associated with the configuration file, each of which can be recorded in memory143or repository160. In some implementations, the state can indicate all of the tasks of the sequence of tasks of the configuration process that have been completed; the last completed task of the sequence of tasks; and/or the next task that is to be executed in the sequence of tasks of the configuration process. 
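The progress-state record described above may be illustrated by the following sketch, in which the record captures the three indications discussed: the completed tasks, the last completed task, and the next task to execute. The JSON file format, the file name, and the field names are assumptions made for illustration only.

    # Sketch of a progress-state record stored alongside the configuration file.
    import json

    def record_state(completed, remaining, path="configfile.state.json"):
        state = {
            "completed_tasks": completed,                            # (i) all completed tasks
            "last_completed": completed[-1] if completed else None,  # (ii) last completed task
            "next_task": remaining[0] if remaining else None,        # (iii) next task to execute
        }
        with open(path, "w") as f:
            json.dump(state, f)
        return state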
Via one or more user interactions through I/O145, CM130can open for editing the configuration file containing the instructions of the configuration process that is being executed. For example, while the state of the configuration process (e.g., progress state of the execution of the configuration process) is being monitored in one GUI or CLI of CM130, the configuration file can be opened for editing in another GUI or CLI of CM130. Via one or more user interactions through I/O145, CM130can make changes to the configuration file (e.g., modify one or more tasks in the sequence of tasks referenced in the configuration file) by modifying the content of the configuration file and/or generating a modified configuration file. For example, CM130can modify the configuration file by adding, removing, or editing at least one of the one or more tasks as well as by changing their order or grouping within task groups. In some implementations, the CM130can check whether the edits resulted in content that is different from the original content of the configuration file and check the modified content of the configuration file for errors (e.g., spelling and/or syntax errors). In some implementations, the CM130can save the configuration file164with its changes (e.g., its modified contents) and store it in repository160. The CM130can load configuration file164with modified content and replace the original content of the configuration file134with the modified content of the configuration file164. By referencing the record of the state (e.g., progress state of the execution of the configuration process), the CM130can resume the execution of the configuration process using the configuration file with the modified content from the point at which the execution was paused. For example, the CM130can determine the last task that was executed (e.g., as indicated in the state record) and initiate the execution of the next task in the configuration process. In this manner, the CM130can complete the execution of the configuration process that was edited while it was being executed by executing the remaining tasks (i.e., performing the configuration actions associated with the remaining tasks of the configuration process). In some embodiments, the CM130can be a combination of software and hardware components. For example, CM130can include one or more components described in connection withFIG.6and can implement one or more methods described in connection withFIGS.2-3. While various implementations are described in terms of the environment described above, the functionality may be implemented in a variety of other environments including a single, monolithic computer system, as well as various other combinations of virtual machines, computer systems, or similar devices connected in various ways. For example, the CM130may be running on a virtual machine of computer system100A, and may execute a configuration process on a group of virtual and/or physical machines on computer systems100B-100N. In some implementations, the CM130may include more components than those that are shown and may operate in conjunction with other CMs130of computing system100. 
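The resumption step described above may be sketched as follows: the last completed task is read from the state record and execution restarts at the task that follows it, using the modified task list. The function and field names are illustrative and tie back to the record_state sketch above.

    # Sketch of resuming from the recorded state with the modified content.
    def resume(modified_tasks, state):
        # Start at the task after the last completed one (fall back to the beginning).
        last = state.get("last_completed")
        start = modified_tasks.index(last) + 1 if last in modified_tasks else 0
        for task in modified_tasks[start:]:
            print(f"executing remaining task: {task}")   # stand-in for a configuration action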
The procedures for editing a configuration file controlling a configuration process are described with reference toFIGS.2-3.FIG.2depicts a flow diagram of an example method200for editing the configuration file during the execution of its corresponding configuration process, in accordance with some implementations.FIG.3depicts a flow diagram of an example method300for editing the configuration file during the execution of its corresponding configuration process, in accordance with some implementations. The methods200and300may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (e.g., executable code executed by a general purpose computer system or a dedicated machine), or a combination of both. Methods200and300and each of their individual functions, routines, subroutines, or operations may be performed by one or more processors of the computer device executing the method. In certain implementations, methods200and300may each be performed by a single processing thread. Alternatively, methods200and300may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing methods200and300may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processes implementing methods200and300may be executed asynchronously with respect to each other. For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. It should be understood that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. As used herein, the term “article of manufacture” is intended to encompass a computer program accessible from any computer-readable device or memory page media. In alternative implementations, some or all of the parts of the methods200and300may be performed by some or all of the components of computing system100, computer system500, or device600. It should be noted that blocks depicted inFIGS.2-3can be performed simultaneously or in a different order than that depicted. In some implementations, at block202, the processing logic can initiate an execution of a configuration process using a configuration file referencing a sequence of tasks. The processing logic can load the configuration file and execute the configuration process in accordance with the instructions in the configuration file. The configuration file can include one or more tasks that the processing logic can perform which correspond to configuration actions that form part of the configuration process. The configuration process may include the processing logic executing one or more tasks included in the configuration file on one or more computer systems. 
While the processing logic is executing the configuration process initiated at block 202, the processing logic can, at block 204, receive a command (e.g., an edit initiation command) to edit the configuration file. In some implementations, the command can cause the processing logic to open the configuration file for editing. In response to receiving the command, the processing logic can, at block 208, pause the execution of the configuration process at the point in the progress of executing the configuration process at which it received the command. Additionally, in some implementations, at block 206, the processing logic can complete the execution of a task that was being executed when the command was received (i.e., permit the execution of the task to finish) before pausing the configuration process. In some implementations, at block 210, the processing logic can record a state of the configuration process (e.g., progress state of the execution of the configuration process) that indicates the last completed task. In other implementations, the processing logic can continuously monitor and record a state of the configuration process. In other implementations, the processing logic can, at block 210, record a state of the configuration process when the execution of the configuration process is paused. The processing logic can record the state (e.g., progress state of the execution of the configuration process) in the configuration file, in another file, or as metadata associated with the configuration file, and store it in a data store. The processing logic can include a task completion indicator that indicates (i) all of the tasks of the sequence of tasks of the configuration process that have been completed; (ii) the last completed task of the sequence of tasks; and/or (iii) the next task that is to be executed in the sequence of tasks of the configuration process in the record of the state. The processing logic can then refer to the record of the state and the included task completion indicator at a later time. In some implementations, at block 212, the processing logic can modify one or more tasks in the sequence of tasks referenced in the configuration file to change the content of (e.g., generate modified content of) the configuration file. Having modified the content of the configuration file at block 212, the processing logic can, at block 214, resume the execution of the configuration process using the modified content of the configuration file from the point at which the execution was paused. Thus, at block 216, the processing logic can complete the execution of the configuration process based on the modified content of the configuration file (i.e., by completing the execution of the remaining tasks in the configuration file in accordance with the modified content). The editing, by the processing logic, of the configuration file and generation of the modified content in the configuration file, at block 212, as well as the resumption, at block 214, of the execution of the configuration process are described in more depth below. Thus, a more detailed description of the operations involved in changing the content of (e.g., generating the modified content of) the configuration file and resuming the execution of the configuration process is provided with reference to method 300 of FIG. 3.
Accordingly, in some implementations, the modification of tasks and editing of the configuration file, at block 212, can result in the processing logic changing the content of (e.g., generating modified content of) the configuration file, at block 380. The generation of the modified content of the configuration file by the processing logic, at block 380, can include the processing logic, at block 381, editing the configuration file by adding, removing, or editing at least one of the tasks (e.g., by changing the parameters of the configuration actions governed by the instructions of the task) or by changing the order or grouping of the tasks within the task groups of the configuration file (e.g., by changing the order of tasks within a play of a playbook or regrouping tasks into different playbooks). In some implementations, the processing logic, at block 381, can edit the configuration file in response to receiving user input to make modifications to one or more tasks. Then, at block 382, the processing logic can determine whether any changes were made to the configuration file. In some implementations, to determine whether changes were made to the configuration file, the processing logic can compare the content of the configuration file that is being edited with the content of the file present immediately prior to the initiation of editing. For example, edits could have been made and reverted, or the combination of the modifications made at block 381 may have resulted in the contents of the configuration file after editing being identical to the contents of the configuration file before editing started. In this case, the processing logic can determine that no changes were made and not proceed further until edits are made at block 381 that modify the configuration file in a manner that results in the content of the configuration file after editing being different from the content of the configuration file before editing. In some implementations, at block 383, the processing logic can further determine whether there are any errors in the configuration file after the edits have been made. To make this determination, the processing logic can check whether the configuration file has any violations (e.g., spelling or syntax errors) that may have been caused by the modifications made in block 381. If the processing logic determines at block 383 that a violation has occurred (i.e., that an error exists), the processing logic can, at block 384, generate an error notification. For example, the processing logic can display an error message in the configuration manager or present a prompt to the user to correct the error (i.e., remedy the violation). The processing logic can prevent the editing of the configuration file from being completed (i.e., require that the configuration file remain open for editing) until the error(s) are corrected/remedied. If the processing logic determines, at block 383, that there are no violations/errors, it can, at block 385, proceed to save the configuration file with the modified content. To save the modified content of the configuration file, the processing logic can, at block 385, store the configuration file with the modified content in a data store and/or load it into a configuration manager. At block 385, by loading the modified content of the configuration file into the configuration manager, the processing logic can also replace the original content of the configuration file with the modified content of the configuration file.
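The change-detection and validation steps of blocks 382-385 can be sketched briefly. The snippet below assumes a YAML-based configuration file (as used by playbook-style configuration managers) and uses a simple parse with yaml.safe_load as the stand-in "violation" check; the function name and error handling are illustrative assumptions.

    import yaml  # assumes PyYAML is available

    def finish_editing(original_text: str, edited_text: str):
        """Return the content to save, or None if nothing actually changed (block 382)."""
        if edited_text == original_text:
            return None                         # edits were reverted; there is nothing to persist

        try:
            yaml.safe_load(edited_text)         # block 383: check for syntax violations
        except yaml.YAMLError as err:
            # Block 384: raise an error notification; the file stays open for editing.
            raise ValueError(f"configuration file contains errors: {err}")

        # Block 385: the caller stores the modified content and reloads it into the manager.
        return edited_text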
Thus, in some implementations, at block 390, which corresponds to block 214 of method 200, the processing logic can resume the execution of the configuration process using the modified content of the configuration file. Resuming the execution of the configuration process by the processing logic can include the processing logic initiating, at block 392, the execution of a next task following the last completed task indicated in the recorded state (e.g., progress state of the execution of the configuration process). For example, the processing logic can reference the recorded state to determine the last task that was completed out of the sequence of tasks in the configuration file and begin executing the immediately subsequent task. Thereafter, the processing logic can complete the execution of the configuration process, at block 216 of method 200. In the various implementations described herein, the configuration file can take different forms and have the tasks included within it coded in a variety of ways. One such example is provided in FIG. 4, which depicts a block diagram of an example file of a configuration process (i.e., a configuration file), in accordance with some implementations of the present disclosure. In some implementations, CM 130 uses configuration files to execute configuration processes. To control the execution of configuration actions of the configuration process, the configuration file 494 may have multiple tasks 496A-496N (each of which can correspond to one or more configuration actions) grouped together into task groups 495A-495M. In one example, a CM 130 (e.g., Ansible®) can execute a configuration process based on the tasks 496A-496N (e.g., commands, instructions, executable codes, etc.) that are grouped into plays (i.e., ordered sets of tasks, such as task groups 495A-495M), which are collected to form playbooks (i.e., configuration files 494 containing one or more plays). In an example, each play may comprise tasks related to a certain type of computer system or tasks that target a particular set of computing devices in a computing system architecture. Multiple configuration files (e.g., playbooks) may exist and be used by CM 130, although not depicted in FIG. 4. FIG. 5 depicts a block diagram of a computer system 500 operating in accordance with one or more aspects of the present disclosure. Computer system 500 may be the same or similar to a computer system 100A-100N of FIG. 1, and may include one or more processing devices and one or more memory devices. In the example shown, computer system 500 may include a configuration execution module 510, an error detection module 515, a configuration process execution state monitoring module 520, and a configuration file editing module 530. The configuration execution module 510 may enable a processing device to load a configuration file 564 containing instructions for executing a configuration process and to execute the configuration process according to the instructions. The configuration file can be stored in a memory device such as a data store along with other information and can include one or more tasks 566A-566N. The configuration process execution state monitoring module 520 may enable the processing device to monitor and record the progress of the execution of the configuration process. For example, the configuration process execution state monitoring module 520 can enable the processing device to record which of the tasks 566A-566N have been completed as the configuration process is being executed.
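The grouping of tasks 496A-496N into task groups 495A-495M described for FIG. 4, together with the resume-from-state behavior of block 392, can be pictured with plain data structures. In this sketch a playbook-style configuration file is modeled as nested Python dictionaries; the field names and the flatten/resume helpers are illustrative, not a definition of any specific playbook format.

    # A configuration file (playbook) as nested data: plays (task groups) containing tasks.
    configuration_file = {
        "plays": [
            {"name": "web servers", "tasks": [
                {"name": "install package"},
                {"name": "copy config"},
            ]},
            {"name": "database servers", "tasks": [
                {"name": "create schema"},
            ]},
        ],
    }

    def flatten(config):
        """Flatten the task groups into the ordered sequence of tasks the manager executes."""
        return [task for play in config["plays"] for task in play["tasks"]]

    def resume(config, state):
        """Yield the tasks that still need to run, given a record of the progress state."""
        tasks = flatten(config)
        for index, task in enumerate(tasks):
            if index > state.get("last_completed_task", -1):
                yield task

    # The state record says task 0 finished before the pause, so execution resumes with
    # "copy config" even if the file was edited in the meantime.
    for task in resume(configuration_file, {"last_completed_task": 0}):
        print(task["name"])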
The configuration file editing module 530 may enable the processing device to open and edit the configuration file 564. In some implementations, the configuration file editing module 530 can modify the tasks 566A-566N in the configuration file 564. The configuration file editing module 530 can work in conjunction with the error detection module 515 to determine whether any violations are present (e.g., any errors exist) in a configuration file 564 when it is modified/edited, before it is saved on the memory device. FIG. 6 depicts a block diagram of a computer system operating in accordance with one or more aspects of the disclosure. In various illustrative examples, computer system 600 may correspond to computer system 100 of FIG. 1. The computer system may be included within a data center that supports virtualization. Virtualization within a data center results in a physical system being virtualized using virtual machines to consolidate the data center infrastructure and increase operational efficiencies. A virtual machine (VM) may be a program-based emulation of computer hardware. For example, the VM may operate based on computer architecture and functions of computer hardware resources associated with hard disks or other such memory. The VM may emulate a physical computing environment, but requests for a hard disk or memory may be managed by a virtualization layer of a computing device to translate these requests to the underlying physical computing hardware resources. This type of virtualization results in multiple VMs sharing physical resources. In certain implementations, computer system 600 may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems. Computer system 600 may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system 600 may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, the term “computer” shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein. In a further aspect, the computer system 600 may include a processing device 602, a volatile memory 604 (e.g., random access memory (RAM)), a non-volatile memory 606 (e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage device 616, which may communicate with each other via a bus 608. Processing device 602 may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
Computer system 600 may further include a network interface device 622. Computer system 600 may also include a video display unit 610 (e.g., an LCD), an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), and a signal generation device 620. Data storage device 616 may include a non-transitory computer-readable storage medium 624 on which may be stored instructions 626 encoding any one or more of the methods or functions described herein (e.g., methods 200, 300), including instructions for implementing the configuration manager 130. Instructions 626 may also reside, completely or partially, within volatile memory 604 and/or within processing device 602 during execution thereof by computer system 600; hence, volatile memory 604 and processing device 602 may also constitute machine-readable storage media. While computer-readable storage medium 624 is shown in the illustrative examples as a single medium, the term “computer-readable storage medium” shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein. The term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media. Other computer system designs and configurations may also be suitable to implement the system and methods described herein. The following examples illustrate various implementations in accordance with one or more aspects of the present disclosure. Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In certain implementations, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner. In certain implementations, not all operations or sub-operations of the methods herein are required to be performed. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the above description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that aspects of the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving,” “identifying,” “displaying,” “obtaining,” “creating,” “generating,” “mapping,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the specific purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Aspects of the disclosure presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the specified method steps. The structure for a variety of these systems will appear as set forth in the description below. In addition, aspects of the present disclosure are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. Aspects of the present disclosure may be provided as a computer program product that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.). The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. 
In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not have an ordinal meaning according to their numerical designation.
11861377
DETAILED DESCRIPTION Example 1—Overview For software applications, there is an ongoing issue in that users of the software application are typically limited to particular functionality that has been programmed into an application. A user may be limited in terms of both the user interfaces (UIs) available for interacting with the application and the functionality of the application. A user might find it helpful, for example, to be able to specify a new filter for data that might be processed using the application. In some cases, the user may not have the ability to add or modify UI elements, or add or modify application functionality. In those cases, the best a user may hope for is that the desired changes get incorporated into a new release/version of the application. In other cases, the application, or an application framework, may allow a user to add/change UI features or application functionality. However, issues can still remain. One issue is that adding/changing application features may require a level of technical skill that is not possessed by end users. Or, even if an end user has the necessary technical knowledge, they may lack appropriate permissions to modify the software application. Even if an end user has the requisite knowledge and permissions to modify a software application, or if the modification is being carried out by a more technical user (such as a developer/programmer), modifying software functionality can be time consuming and tedious. These problems can be exacerbated to an extent when software applications or frameworks are designed to make it easier for users to create or modify software applications, including software applications that may take advantage of a development framework or have relatively limited functionality (often referred to as “apps”). In the case of “apps” for example, an app might be developed for a particular use case scenario, such as scenario where particular data is obtained and processed in a particular way to provide a report or data visualization. Another app may have similar functionality, but be directed to at least a somewhat different use case (or may simply have a different data source, type of processing, report format, visualization type, etc.). It may be desirable to modify both apps to incorporate some new or changed feature, but typically these modifications or extensions must be implemented separately for each app, increasing developmental efforts, and potentially resulting in duplicative code. Accordingly, room for improvement exists. One improvement that can be made to typical software augmentation paradigms is to allow a particular augmentation feature, such as new programmatic logic, data element (or attribute), or a new user interface element, to be easily shared between apps (or full software applications, but where “apps” will be generally discussed for the remainder of the disclosure for convenience of presentation). However, even in this case, a user may need to manually locate and “activate” an augmentation (also referred to as an “extension”). If an app includes many extensions, it can be time consuming to manually locate and activate these extensions for another app. Accordingly, in one aspect, the present disclosure provides for the grouping of extensions into a particular group, class, or domain. The term “extension group” is generally used through the remainder of the discussion, and the term “domain” can refer to an extension group/set of extensions that relates to a particular use or semantic concept. 
Apps themselves can also be assigned to a group, class, or domain, which can reflect a particular subject matter area of the apps. So, in particular implementations, disclosed techniques provide that a group of extensions can be specified, where the group of extensions can then be activated, or made available for activation, for one or more apps. Disclosed techniques can also assist in implementing extensions. For example, consider an app that presents data associated with one or more database tables. An extension can be used to add a field to the database table. Consider the COVID-19 pandemic that became widespread in 2020. Software companies may have already released software that assists businesses with human resources issues, including providing a database schema (one or more tables, and typically one or more views constructed from tables or optionally other views) to store employee information. After the pandemic began, it became important to track information such as whether a particular employee had been vaccinated or whether an employee had been infected with COVID. However, since COVID was unknown and unanticipated at the time of a latest release date of the human resources software application, the database schema was not configured to store information such as vaccination status. Adding a field to indicate a vaccination status is a type of extension, then, that a user may wish to add. Software companies might not have anticipated that an extension for COVID vaccination status would be needed, but they may have anticipated that a user may wish to add some additional attribute or attributes to a schema, and so may have included functionality in the software application for implementing extensions. At a basic level, such functionality may allow a user to manually create additional fields for a database table, or associate fields with a database table (but where, for example, the data for the added fields may be stored in a different database table than a database table being "extended" with the new field, such as the field for vaccination information). An additional field for "vaccination status" typically should be available at multiple levels of a software stack. That is, the field typically should be available in an artifact, such as a relational database table, that stores vaccination status information for particular employees. That field should also typically be "exposed" for use by an end user, such as showing a UI element where a user can enter a vaccination status for a particular employee or view vaccination status information for particular employees. There may be additional layers of a software stack where the field should be available, such as in an artifact of a virtual data model that serves as an intermediate layer between the app and the database. In some cases, a user must manually ensure that the new "vaccination status" field appears at all of the relevant layers. However, again, this can create difficulties if non-technical users wish to modify the functionality of an app, and can be time consuming to implement even for users who have the requisite knowledge. Accordingly, disclosed techniques can automate at least some aspects of app extensions, such as creating or modifying appropriate artifacts in a physical or virtual data model, including propagating an extension to lower levels of a view stack, or propagating an extension from a virtual data model to a physical data model.
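As an illustration of making one extension, such as the "vaccination status" field, available at multiple levels of the stack, the short Python sketch below applies a single extension definition to a physical table definition, a virtual data model view, and a UI field list. The layer representations and function names are simplified assumptions for illustration; real systems use their own artifact formats.

    # One extension definition, applied at every relevant layer of the stack.
    extension = {"field": "vaccination_status", "type": "string", "label": "Vaccination Status"}

    physical_table = {"name": "EMPLOYEE", "fields": ["id", "name", "department"]}
    virtual_view = {"name": "I_Employee", "elements": ["id", "name", "department"]}
    ui_fields = ["Name", "Department"]

    def extend_table(table, ext):
        table["fields"].append(ext["field"])        # database layer: persist the new attribute

    def extend_view(view, ext):
        view["elements"].append(ext["field"])       # virtual data model layer: expose it to apps

    def extend_ui(fields, ext):
        fields.append(ext["label"])                 # UI layer: let end users view/enter the value

    extend_table(physical_table, extension)
    extend_view(virtual_view, extension)
    extend_ui(ui_fields, extension)

    # After propagation the field exists end to end:
    # EMPLOYEE.vaccination_status -> I_Employee.vaccination_status -> "Vaccination Status" UI element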
Examples 2-4 of the present disclosure provide a discussion of particular ways in which extensions can be implemented and stored. Examples 5-11 describe techniques for grouping apps and extensions, and activating groups of extensions for particular applications. Example 2— Example Implementation of Table Extensions As described in Example 1, data used by an app may be stored in a particular format, such as in the form of database tables. The data may be used by multiple apps, or one or more apps and other software, such as middleware or a more full-featured application. Some software implementations include standard database tables for storing application data. That is, the database tables may include standard fields that are expected to be used by many end users. The tables may be part of a schema that is defined for a particular subject matter or use case, such as a schema defined for human resources applications. However, as discussed with respect to Example 1, some end users may have different needs than other users, and the standard tables may not include all fields needed by a particular user, including because the needs of a user may change over time. Particular software can be developed anticipating that some end users may wish to add additional fields to standard database tables. For example, the ERP Central Component (ECC) of SAP SE of Walldorf, Germany, provides functionality for associating additional fields with standard database tables. In some cases, a software application and its data schema may provide standard functionality, with a standard table schema being defined for use with such standard functionality. For instance, the standard table may include base fields that are used with the application. However, certain users of the software may have particular needs such that it is desirable to include additional fields in the table. In more particular cases, an add-on or plugin to a base software application may be available to extend the functionality of the base software application. The add-on or plugin may use the standard table, but may add additional fields to the standard table to support the extended functionality, such as the “vaccination status” field discussed in Example 1. Or, some database environments, including at least certain products of SAP SE of Walldorf, Germany, provide for tables to be extended by including in a table definition a reference to an append, which can define additional fields for the table. With reference toFIG.1, a table108, representing a first version of a standard table (plus any customizations), can include a plurality of fields112. These fields may be specified, for example, in a schema or data dictionary entry for the table108. That is, the table108may have an identifier, and the identifier may be associated with a definition of the table, which definition can include the fields112, and optionally other information. For example, the definition can include, or specify, a type, where a new table can be instantiated as an instance of the particular type. The table108can include a customization, or extension, identifier116. The customization identifier116can be used to determine that the table108has information in addition to the standard information (such as standard information, including fields, maintained in the schema for the table, where a schema and associated fields can be based on a particular type associated with the table). 
The customization identifier116can be associated with a location or path120where additional information associated with the (custom version) of the table108can be retrieved. The path can be a particular file path. The customization identifier116and path120can be associated with an identifier124, or name, of a table or file128(including a file containing a table, or table elements) of the additional information132associated with the table108. The additional information132can be in the form of additional database fields136. The additional information132can include other types of information, such as defining foreign keys for the table108, including the fields112or any additional fields136of the additional information. The additional information132can also include help information, such as information to be displayed to a user when a user provides input requesting help with a particular field112,136. When the table108is instantiated, a data dictionary entry for the table108can be read, including the fields112. Based on the customization identifier116, the additional information132to be appended to the information in the data dictionary entry for the table108can be retrieved. The fields112and additional information132can be used to instantiate the table108, such as in a database layer and as a runtime object, which can have the structure (e.g., logical view) shown in table140. Example 3—Example Definition and Runtime Representations of Extended Tables FIG.2illustrates an example database environment200that includes a runtime environment202, a database layer204, and a data dictionary or schema layer206. A table can be represented in each of the layers202,204,206. For example, a table can be defined in the data dictionary layer206, such as by including a name or identifier for the table, field names and types for the table, primary key designations, and references for any fields that may serve as foreign keys. An object for the table in the database layer204can be a persisted version of the instantiated database table, including data populated into the table. An object for the table in the runtime environment202can be an in-memory version of the database table, or a version otherwise manipulable by a software application. The data dictionary layer206includes an entry210for a first version of Table 1. The entry210includes definition of a structure of Table 1 (e.g., the identity of fields212in the table, primary key designations214, foreign key references216, and the like). The entry210also includes a reference218to an entry222in the data dictionary layer206for an append structure. The append structure defined by the entry222can be a data object that is used with the entry210to create objects in the runtime layer202and the database layer204, including objects corresponding to the table of entry210. That is, objects in the database layer include data as defined by both the entry210and the entry222. In some cases, each append is associated with a single table in the data dictionary206. A single table, in particular aspects, can be associated with multiple append structures. In other embodiments, append structures can be associated with multiple database tables, or a table can have a single source of append or extension information. At least some of the tables in the database environment200need not have an associated append structure. 
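A minimal sketch of the lookup described for FIG. 1: when the table is instantiated, its data dictionary entry is read and, if a customization identifier is present, the additional information is retrieved and appended to the standard fields to form the runtime structure. The dictionary layout and names below are hypothetical simplifications.

    # Simplified data dictionary: standard table definitions plus append information.
    data_dictionary = {
        "TABLE_1": {
            "fields": ["material", "plant", "quantity"],
            "customization_id": "APPEND_1",           # reference to the additional information
        },
    }
    appends = {
        "APPEND_1": {"fields": ["vaccination_status"], "foreign_keys": {}, "help": {}},
    }

    def instantiate(table_name):
        """Build the runtime structure (logical view) of the table: standard plus append fields."""
        entry = data_dictionary[table_name]
        fields = list(entry["fields"])
        append_id = entry.get("customization_id")
        if append_id:
            fields += appends[append_id]["fields"]    # additional information is appended
        return {"name": table_name, "fields": fields}

    print(instantiate("TABLE_1"))
    # {'name': 'TABLE_1', 'fields': ['material', 'plant', 'quantity', 'vaccination_status']}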
The entry 222 in the data dictionary layer 206 for the append can include field identifiers 224 for additional fields to be included in the database layer 204 and runtime layer 202 representations of Table 1, as well as foreign key references 226 (e.g., designating fields 224 or fields 212 as foreign keys and providing a reference to a table associated with the foreign key), and display information 228. The display information 228 can be, for example, help information to be displayed, including upon receipt of user input to display the help information. In a particular example, the display information 228 is associated with particular fields 224 or fields 212. A database layer 204 object 232 corresponding to Table 1 includes table data 234 associated with the entry 210, and append data 236 associated with the entry 222. Similarly, a runtime layer 202 object 238 for Table 1 includes table data 242 associated with the entry 210 and append data 244 associated with the entry 222. Typically, data is the same between the object 232 and the object 238. However, the data can differ, such as if the object 238 is updated by an application and the changes have not yet been persisted in the object 232. Example 4—Alternative Implementation of Table Extensions Extensions can be stored in manners other than that shown in FIG. 2. For example, FIG. 3 illustrates a database environment 300 that includes a runtime environment 302, a database layer 304, and a data dictionary layer 306 that at least generally correspond to the similarly numbered elements 202, 204, 206 of FIG. 2. Rather than having a table 210 and an append 222 as in the data dictionary 206, the data dictionary 306 includes a definition of a "main" table 350 and a definition of an append table 370 that is associated with the table 350. The tables 350, 370 include respective fields 312, 372, primary keys 314, 376, and foreign keys 316, 374. The append table 370 and the main table 350 can be joined to retrieve what to external users or artifacts appears to be "unified data," such as using the primary keys 314 of the table 350 and the foreign keys 374 of the append table 370. The tables 350, 370 can be associated with corresponding tables 380, 390 at the database layer, and corresponding tables 358, 378 in the runtime environment 302. Example 5—Example Computing Environment Facilitating Creation and Deployment of Extension Groups FIG. 4 illustrates a computing environment 400 in which disclosed technologies can be implemented. Generally, the computing environment 400 includes an application framework 410, a technology platform 412, and a database 414. In a specific implementation, the application framework 410 is the Fiori design system, the technology platform 412 is the ABAP Platform for S/4 HANA, and the database 414 is the HANA database software, all available from SAP SE, of Walldorf, Germany. The application framework 410 can support the development of apps 418. Generally, the apps 418 access artifacts and functionality of the technology platform 412, and can access data stored in the database 414.
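For the alternative layout of FIG. 3, where the additional fields live in a separate append table joined to the main table on a key, a small sketch using Python's built-in sqlite3 module shows how callers can be presented with what appears to be unified data. The table and column names are illustrative only.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("CREATE TABLE employee_append (id INTEGER PRIMARY KEY, vaccination_status TEXT)")
    con.execute("INSERT INTO employee VALUES (1, 'Alice')")
    con.execute("INSERT INTO employee_append VALUES (1, 'vaccinated')")

    # Joining the main table and the append table on the key yields a single, unified record.
    rows = con.execute(
        "SELECT e.id, e.name, a.vaccination_status "
        "FROM employee e LEFT JOIN employee_append a ON a.id = e.id"
    ).fetchall()
    print(rows)   # [(1, 'Alice', 'vaccinated')]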
The application framework410can include, for example, standard user interface features, such as a canvas or page, one or more layouts that can help organize UI elements on the page/canvas, and user interface elements, where the collection of user interface elements organized according to the layout on the page/canvas can be referred to as a “floorplan.” An extensibility user interface422of the application framework410can assist a user in creating and managing extension to the apps418, either to a specific app or to a particular app component (e.g., a page, layout, floorplan). A proposal generator426of the application framework410can assist in identifying and activating extensions, more particularly groups of extensions, including assisting a user in applying extensions from an existing library to a new use. For instance, a user interface element may have been implemented in a particular layout template at a particular location for a first app, but a second app for which a user might wish to apply one or more extensions may use a different layout. The application framework410communicates with the technology platform412. In particular, the application framework410and the technology platform can communicate through a gateway430. An extension group engine438can perform or coordinate various functionality of the present disclosure, including performing CRUD (create, read, update, delete) operations with respect to individual extensions, groups of extensions, or associations of groups of extensions with apps, where information about individual extensions, and optionally information about extension groupings or associations between apps and extension groups, can be stored in an extensibility registry434. Extension group read requests and extension group creation requests can be carried out by an extension group extension retriever442and an extension group generator446, respectively, of the extension group engine438. An extension group enhancer service450of the extension group engine438can perform actions such as updating elements of artifacts458of a virtual data model454. The modified elements can represent new data fields, and optionally information describing the calculation or use of such data fields, including how the data fields should be displayed for an app418. Or, modified elements can be elements that modify the properties of an existing data element, or the overall function of a data element. Although the virtual data model454is shown as being in the technology platform412, in other cases the virtual data model is located other than in the technology platform, but is in communication with the extension group enhancer service450, and optionally other components of the computing environment400as appropriate for implementing disclosed technologies. For example, the virtual data model454can be in a software layer that sits between the technology platform and the database414. The database414communicates with the technology platform412, and through the technology platform with the application framework410. In some cases, the application framework410, or components of the computing environment400, such as a client478, can communicate directly with the database414. The database414stores operational data and extensions466, where the operational data can be in the form of database artifacts such as tables or views. 
The database artifacts can include "base" or "standard" artifacts, or can have a set of standard attributes, where the extensions are database artifacts that add additional fields to a standard artifact, or are attributes that are directly included in a standard database artifact or are otherwise associated with a standard database artifact. The database 414 also includes an extension group template repository 470, which can include one or more database artifacts that store data associated with disclosed techniques for defining extension groups (which can include one or more extensions, and more typically include a plurality of extensions) or associations between extension groups and apps. The client 478 can perform various actions, such as submitting CRUD requests to the technology platform 412 regarding extensions, extension groups, or assignments of extension groups to apps. The client 478 can also access the application framework 410 to use the apps 418, or to interact with the extensibility UI 422. The client 478 can query the database 414, either by directly accessing database artifacts in the database or accessing such artifacts indirectly, such as using the virtual data model 454 and its artifacts 458. An integrated development environment (IDE) 482 can be accessed by users (such as developers or programmers), including acting as clients 478, to perform various actions, including with respect to the application framework 410 or the technology platform 412. For example, the IDE can be used, among other things, to create or edit the apps 418, including modifying UI elements included in a floorplan or the arrangement of such elements. The IDE 482 can also be used to access the extensibility UI 422, and to interact with the proposal generator 426. The IDE 482 can optionally allow users to perform other actions, such as to perform CRUD operations with respect to data artifacts of the virtual data model 454 or the database 414. Example 6—Example App Framework and App Structure FIG. 5 illustrates a computing environment 500 that depicts how extensions and apps can be grouped, and associated with one another. The computing environment 500 includes an app framework 504, which in at least some implementations can be the app framework 410 of FIG. 4. The app framework 504 includes a variety of apps (corresponding, for example, to the apps 418). The apps can have a variety of configurations. Some apps, such as an app 512, do not have any extensions. Generally, an app includes app logic 514, which can include user interface logic 516 and programmatic logic 518. Programmatic logic 518 can include logic to accept input from a user (such as in conjunction with the user interface logic 516), including through various UI elements, process the input, and take appropriate action based on that input, such as retrieving, and optionally processing, data from a database to be displayed to a user through a UI screen of the app. User interface logic 516 can include logic to manage the display of UI elements on UI screens of the app and the receipt of user input through the user interface. As will be further discussed, in at least some cases, different UI elements or collections of UI elements can be selected to be displayed or not displayed, or displayed to particular users, such as users having specific user identifiers or users having particular credentials or roles. In particular, some apps can have UI logic 516 that allows one or more sets of extensions to be selectively enabled/displayed. The app 512 includes layout/presentation information 522.
As described in Example 5, the layout/presentation information 522 can include information that defines a presentation format for an app, such as for a particular UI screen of the app. The layout/presentation information 522 can include various templates that can be selected, created, or modified by users, which thus facilitates user design of apps. For instance, a user can start with more basic UI screen features such as a page type. The page can be a general type of "canvas" on which graphical elements and UI elements can be organized, and which can conform to particular standards used by an engine that renders user interface screens for display. A layout can define general content areas, such as a header section, a content section, and a footer section, where graphical elements may be included to provide a consistent UI theme, and to reduce the effort in app development. Layout information can include, for example, whether a single page is presented on a single UI screen or whether multiple pages are presented on a single UI screen. At a more granular level, templates, referred to as "floorplans" in the FIORI design system of SAP SE, of Walldorf, Germany, can specify graphical elements and UI elements for particular sections of a layout. Templates can be included for common types of displays provided to a user or types of tasks that might be performed by a user. In some cases, different floorplans can include at least some of the same UI elements, but the overall "look and feel" of UI screens based on the different floorplans can be quite different, and can facilitate different user tasks (including in a way where information presented is consistent with a task a user wishes to perform, which in turn may reflect the underlying programmatic logic 518 of the app). Examples of floorplans can be those configured to display a list of items, details regarding a single item, reports with graphs, or wizards that guide a user through a particular task. An app 530 is generally similar to the app 512, including the app logic 514 and the app layout/presentation information 522. For simplified presentation, the details of the app logic 514 and the layout/presentation information 522 are not shown for the app 530 (or other apps shown in FIG. 5). However, the app 530 includes extensions 532, 534. That is, the app 530 may be identical to, or based on, the app 512, except for features of the app 530, such as app logic 514 and layout/presentation information 522, associated with the extensions 532, 534. Now consider that a user may desire to add one or both of the extensions 532, 534 to one or more of a collection of apps 540 (which can include an app being newly developed by the user). Typically, a user would have to manually add each extension 532, 534 (or one of any number of other extensions) to each app 540. Even if the extensions 532, 534 existed as "objects" that could be associated with one of the apps 540, it would still take time to manually identify each relevant extension and manually associate each extension with the app, including identifying how the extension should be implemented in a particular floorplan/UI design of the app. The present disclosure allows extensions to be placed in groups, where a group of extensions can then be applied to particular apps 540. In particular, an extension group 550 can be defined that includes a plurality of extensions 552. The extension group 550 has been applied to an app 554 to provide the app with the extensions 552.
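The grouping relationships introduced here and elaborated in the discussion that follows (extensions collected into extension groups, groups optionally collected into domains, and domains or groups assigned to apps) can be pictured with plain data structures. The identifiers and the resolution helper below are illustrative assumptions, not an actual schema.

    extensions = {
        "EXT_FILTER_1": {"kind": "filter field"},
        "EXT_COLUMN_2": {"kind": "report column"},
    }

    extension_groups = {
        "SUSTAINABILITY": ["EXT_FILTER_1", "EXT_COLUMN_2"],     # an extension group
    }

    domains = {
        "SUSTAINABILITY_DOMAIN": {"groups": ["SUSTAINABILITY"], "apps": ["APP_540A", "APP_540B"]},
    }

    def extensions_for_app(app_id):
        """Resolve every extension an app receives through its domains and extension groups."""
        result = []
        for domain in domains.values():
            if app_id in domain["apps"]:
                for group in domain["groups"]:
                    result.extend(extension_groups[group])
        return result

    print(extensions_for_app("APP_540A"))   # ['EXT_FILTER_1', 'EXT_COLUMN_2']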
In some cases, the extension group550itself is included in the app554, while in other cases the grouping information is used to apply the extensions552to the group, but the extensions are provided in the app554as individual extensions, in a similar manner as to how the extension532,534are included in the app530. Note that an app can be associated with multiple extension groups, or domains, where the app554is shown as also being associated with an extension group556having extensions558. An app can optionally include extensions that are not associated with an extension group (or, while the extensions may be part of an extension group, the extensions are directly associated with the app rather than through an extension group, such as if the extension group contains other extensions which are not desired to be associated with an app). Extension groups562, including the extension groups550,556, can be stored in a repository560. In some cases, the repository560includes implementations of the actual extensions564in an extension group, while in other cases a definition of an extension group562includes identifiers of extensions in the group, but the implementation details of the extensions are included elsewhere, including within the repository560or in another location. In some cases, an extension group562can be given a semantic identifier, which can serve as a domain (or other type of group or category) that can be assigned to apps540. Or, a domain can be defined independently of an extension group550, which can be useful in that it can facilitate multiple extension groups being associated with a single domain, or a single extension group being associated with multiple domains. Apps540can also be assigned to domains, and so a domain may be associated with one or more extension groups562, one or more apps540, or with both extension groups and apps. In particular, a domain570is shown as being defined with respect to a single extension group572, where the extension group has an identifier that also serves as the domain identifier. A domain576is shown as including extension groups574a,574b, where the domain is independent of the extension groups, as evidenced by the extension group574aalso being included in a domain580. FIG.5also illustrates how identifiers for apps540can be associated with domains (and in turn extension groups), where a domain586includes app identifiers588a,588b. Example 7— Example Process for Grouping Extensions and Deploying Extension Groups to Apps FIG.6is a timing diagram illustrating operations in a process600for creating a custom extension group, such as an extension group as described in conjunction withFIG.4orFIG.5. The process600involves operations performed by an extensibility UI604(e.g., the extensibility UI422ofFIG.4), a proposal generator606(e.g., the proposal generator426), an extension group generator608(e.g., the extension group generator446), an extension group template repository610(e.g., the extension group template repository470), an apps repository612(e.g., the apps418of the application framework410), and an extension group service enhancer614(e.g., the extension group service enhancer450). At620, an extension group creation request is provided by a user through the extensibility UI604. The extension group creation can include a name, such as a semantic name that provides an indication to a user of the purpose or scope of the extension group and optionally a type. 
Types can be used, for example, to determine when extensions associated with an extension group will be visible. That is, the extension group may be associated with a particular computing process or feature, such that the extensions will be available if the particular feature or process is active. Multiple extensions, including multiple extension groups, can be associated with a given process or feature, which can facilitate making many extensions available under the same conditions. Alternatively, extension group extensions can be designated as “always active,” or separate/discrete switching logic can be provided for a given extension group. In addition to providing a general type, an extension group can be associated with a particular instance of the type—such as associating an extension group with a particular process or feature in addition to designating the extension group as being associated with a type or feature, generally. The extension group creation request is sent by the extensibility UI604to the extension group generator608at624. At628, the extension group generator608can perform various actions with respect to the request, such as determining whether the requested extension group is unique. For example, the extension group generator608can send a request630to/search the extension group template repository610. In some cases, “uniqueness” is determined by identifying whether another extension group has a same semantic identifier. That is, multiple extension groups can optionally have overlapping or even identical content (including extensions), provided that they are given different identifiers that will be used by users. In other cases, “uniqueness” can require some substantive difference from another extension group, such as a different switching type, mechanism, or identifier, or a difference in the extensions (including as formulated into extension groups) between extension groups. In a particular implementation, if it is determined at628that an extension group requested for creation is not substantively different from an existing extension group, the process600can throw an error, or can result in a message being presented to a user asking the user to confirm whether a substantively duplicated extension group should be created or if a user would like to take an action with respect to an existing extension group that substantively matches the requested new extension group (where such action can include, for example, associating the extension group with an app that is not already associated with the extension group). If the extension group is determined to be unique, the extension group generator608can create a new identifier for the extension group, such as in the extension group template repository610. In particular, an entry that includes an identifier for the extension group can be added to a table that stores extension group template identifiers. An entry can also be added to a table that associates extension groups with particular switching functionality. A success or failure message (such as in response to determining that an extension group was not determined to be unique) can be sent by the extension group template repository610to the extension group generator at634. 
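A minimal sketch of the creation and uniqueness check at 628-634, assuming (as one of the options described above) that uniqueness is judged by the semantic identifier of the extension group; modeling the template repository as an in-memory dictionary is purely illustrative.

    extension_group_template_repository = {}   # identifier -> definition

    def create_extension_group(identifier, description, switch_type=None, switch_id=None):
        """Create an extension group entry, or report failure if the identifier already exists."""
        if identifier in extension_group_template_repository:
            return {"status": "failure", "reason": f"extension group '{identifier}' already exists"}
        extension_group_template_repository[identifier] = {
            "description": description,
            "switch_feature_type": switch_type,
            "switch_feature_id": switch_id,
        }
        return {"status": "success", "identifier": identifier}

    print(create_extension_group("SUSTAINABILITY", "Sustainability-related extensions"))
    print(create_extension_group("SUSTAINABILITY", "Duplicate request"))   # failure message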
An example of a suitable table structure for storing extension group definitional information is presented in FIG. 7 as a table 700 having a column 702 that stores an extension group identifier, a column 704 that stores a description of the extension group, a column 706 that identifies a switch feature type, and a column 708 that holds a switch feature identifier. Table 700 includes a row 710 that illustrates sample values for the columns 702, 704, 706, 708 of the table. Note that values for at least some of the columns of the table can be optional, including only an extension group identifier for the column 702 being required. Similarly, some columns, such as columns 706, 708, can optionally be provided with default values if no specific value is provided by a user in an extension group creation request. Typically, when a user creates an extension group, it is with the intent of associating the extension group with one or more existing apps (although the disclosed techniques can be incorporated into a process where a user is creating a new app). Accordingly, the extension group generator 608 can send a request 640 to the app repository 612 for a list of available apps. The list of available apps is retrieved/determined (for instance, in a table that lists apps that are available to a particular user, such as a user of the extensibility UI 604), and then is sent to the extensibility UI 604 at 642. At 644, a user selects one or more apps from the list to which the newly created extension group should be applied (and, optionally, one or more existing extension groups). Identifiers of the selected apps (and optionally extension groups) are sent to the extension group generator 608 at 648. At 652, the extension group generator 608 creates an association between the identified extension group (or extension groups) and the apps selected by the user at 644. The association can be stored in a table, in a particular implementation, such as the table 730 of FIG. 7. The table 730 includes a column 734 for an extension group identifier, a column 738 for an app identifier (thus associating the extension group with the app), and a column 742 that provides a description of the app. Also at 652, the extension group generator 608 can associate particular extensions included in one or more extension groups for a relevant app/extension group. FIG. 7 illustrates a table 760 that can create associations between apps/extension groups and particular extensions within such extension groups. The table 760 can be used for a variety of purposes, including to determine what extensions should be installed/activated, or made available for activation, for a particular app. As shown, the table 760 includes a column 764 for an extension group identifier, a column 768 for an app identifier, a column 772 for an app description, a column 776 for a switch feature (such as identifying a particular switch feature of the type given in the column 706 of the table 700), and columns 780-788 for particular extensions associated with the row (and thus extension group ID/app ID combination). Note that the column 776 can be particularly useful when extensions are available from multiple sources, but are desired to be treated together. For instance, assume that a particular app supports extension switching of a particular type, which allows extensions to be selectively displayed (where if the extensions are not displayed, a "base" set of user interface controls/elements are available).
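The tables of FIG. 7 can be pictured as rows in three simple structures: the extension group definition (table 700), the group-to-app assignment (table 730), and the per-app extension list under a switch feature (table 760). The sample values below are placeholders that mirror the columns described above, not data from any actual system.

    # Table 700: extension group definition.
    extension_group_table = [
        {"group_id": "SUSTAINABILITY", "description": "Sustainability fields",
         "switch_feature_type": "business function", "switch_feature_id": "BF_SUST"},
    ]

    # Table 730: extension group assigned to an app.
    group_app_table = [
        {"group_id": "SUSTAINABILITY", "app_id": "F1234", "app_description": "List report app"},
    ]

    # Table 760: extensions activated for a group/app combination under one switch feature.
    group_app_extension_table = [
        {"group_id": "SUSTAINABILITY", "app_id": "F1234", "app_description": "List report app",
         "switch_feature": "BF_SUST",
         "extensions": ["Sust_ext_field1", "Sust_ext_field2"]},
    ]

    def extensions_to_activate(app_id):
        """Look up which extensions should be installed/activated for an app (a table 760 read)."""
        return [ext for row in group_app_extension_table if row["app_id"] == app_id
                for ext in row["extensions"]]

    print(extensions_to_activate("F1234"))   # ['Sust_ext_field1', 'Sust_ext_field2']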
A group of extensions might relate to the same semantic concept, but come from different sources, such as one or more extension groups, or an extension group and one or more other extension sources (such as custom extensions that are assigned to an app, but may not be associated with an extension group to facilitate associating those extensions with other apps). Grouping the extensions using the switch feature (column 776) can allow a set of extensions to be selectively activated/made available for activation as a group, rather than having to, for example, manually switch on extensions for each of multiple extension groups. Control returns to the extensibility UI 604 at 656. With extension group extensions defined, a user can select through the extensibility UI 604 to activate or publish the extension group extensions to an app. The user can select one or more apps, and optionally particular extensions, identifiers for which are sent to the proposal generator 606 at 660. At 664, the proposal generator 606 can read the table 760 (or otherwise obtain information about extension group extensions) to be published/activated for a selected app. The proposal generator, also at 664, can determine qualities of the app, such as analyzing a template (e.g., design template) used to define the app, such as determining a floorplan for the app. The proposal generator 606 can then predict how the extensions of the extension groups should be implemented in the framework. In some cases, the prediction can be based on other usages of a particular extension. That is, if an extension in an extension group was manually applied to a first app in a certain way, that information can be presented as a proposal to a user implementing the extension (as part of an extension group) for a second app. The floorplan analysis for the app to which extension group extensions are to be applied, and optionally the extension use information used to provide an implementation proposal, can be obtained by sending one or more requests to the apps repository 612 at 668, where the relevant information is returned at 672. One or more proposals are sent from the proposal generator 606 to the extensibility UI 604 at 676. A user can select a proposal to be used, and can optionally choose to modify at least certain elements of the proposal. The selected proposal, and optionally any modifications thereto, are sent to the extension group service enhancer 614 at 680. In some cases, the communication sent at 680 can include a proposal ID and indicators for any changes to the proposal, where the details of the proposal can be retrieved, such as from a database table or view, by the extension group service enhancer 614. The extension group service enhancer 614 can implement the extensions associated with the one or more extension groups that have been associated with an app. In a particular implementation, the extensions are activated by annotating data artifacts, such as one or more views of a virtual data model, where in turn changes or additions can be applied to other data artifacts, such as data artifacts in a physical database that is targeted by the virtual data model. In some cases, the extended app can read or translate this information in the data artifacts, such as using data received from the extension group service enhancer 614 at 684, to generate suitable user interface elements, such as a user interface element representing a filter condition specified by a particular extension of a particular extension group.
As a particular example, consider that a result of applying an extension group to an app is the addition of an extension field “Sust_ext_field1,” and the proposal option chosen by a user is to add this field as a filter on a list report table. The extension group service enhancer would add these details to an appropriate computing object (such as a CDS view, in technologies available from SAP SE, of Walldorf, Germany), such as: @UI.SelectionField({position: n}) Sust_ext_field1; Example 8— Example Deployment of Extension Groups to Apps FIG. 8 is a timing diagram illustrating operations in a process 800 for selecting one or more extension groups (each having one or more extensions) to be applied to an app. The process 800 is generally similar to the process 600 of FIG. 6, but the process 600 contains operations for defining a new extension group whereas the process 800 is used to associate previously defined extension groups with an app. The components used in the process 800 are generally similar to those used in the process 600, including an extensibility UI 804, a proposal generator 806, an extension group template repository 808, an apps repository 810, and an extension group service enhancer 812, which can be analogous to the correspondingly titled components 604, 606, 610, 612, 614 of FIG. 6. Since the process 800 does not involve the creation of a new extension group, the process is not shown as including the extension group generator 608. Of course, even though the extension group generator 608 is not used in the process 800, a computing environment in which the process is executed can include an extension group generator. A user can use the extensibility UI 804 to send to the extension group template repository 808 a request 820 for a list of available extension groups, which are returned to the extensibility UI in a communication 824. The user can send a request 828 using the extensibility UI 804 to the apps repository 810 for a list of apps that can be extended, such as with one or more extension groups selected by a user. In some cases, all apps available to the user are returned from the apps repository 810 to the extensibility UI 804 in a communication 832, while in other cases the app identifiers returned in the communication 832 can be a subset of the available apps. For example, a particular extension of an extension group may rely on particular data artifacts. If those data artifacts are not associated with a particular app, then that app may not be identified in the communication 832, which may only include apps that are associated with the relevant data artifacts, or that have relevant components thereof (such as particular data elements/attributes). A user can select one or more apps to be extended by one or more extension groups using the extensibility UI 804, and the identities of the apps/extension groups can be sent to the proposal generator 806 at 836. The proposal generator 806 can then determine one or more proposals for incorporating elements of the selected extension group or extension groups into the selected app or apps at 840, which can be carried out in a manner analogous to the operations 664 of FIG. 6, including the sending of a request 844 to, and the receipt of a response from, the apps repository 810 at 848 to obtain floorplan details about the apps to be extended, and optionally about other apps to which a particular extension group, or particular extension group elements, have already been applied.
One or more proposals are sent from the proposal generator806to the extensibility UI804in a communication852, where the user can then select a proposal to be implemented for particular apps, optionally with particular modifications. The user selects a particular proposal to be implemented (optionally including additions or modifications), which information is sent from the extensibility UI804to the extension group service enhancer812at856. The extension group service enhancer812then implements extensions associated with one or more selected extension groups, such as described in conjunction with the communication680ofFIG.6. The updated app is then rendered on the extensibility UI804using extension data (or metadata) received from the extension group service enhancer812at860. Example 9— Example User Interface Facilitating Deployment of Extension Groups to Apps FIG.9is an example user interface screen900that lists created/available extension groups, and can be used to create new extension groups or to associate existing extension groups with particular apps. The user interface screen900includes navigation options908,910,912,914that allow a user to navigate amongst various user interface screens, including the user interface screen900, to perform various actions to customize apps. In particular, selecting navigation option908causes a user interface screen to be displayed that shows custom fields that have been created, or allows a user to create custom fields, and then associate them with apps. Selection of navigation option910causes a user interface screen to be displayed that shows data source extensions that have been created, or allows a user to create data source extensions, and then associate particular extensions with particular apps. Selection of navigation option912causes a user interface screen to be displayed that shows custom logic that has been defined, or allows a user to define custom logic, and then associate the custom logic with apps. Selection of navigation option914causes the user interface screen900to be displayed. The user interface screen900includes a search interface920that allows a user to search for particular extension groups, and a user interface control924that can be selected to create a new extension group. The user interface screen900includes a list928of available extension groups, where the list includes a column932that provides a semantic label for the extension group, a column934that provides an identifier (such as a technical key or identifier) for the extension group, a column936indicating a context associated with the extension group (which can be, for example, a particular data object or data artifact used in a particular application, where the data object or artifact can represent an analog-world object or concept, such as a data object representing a “bill of materials,” a “supplier,” or an “employee”), a column938providing a “type” for the extension group (or a particular extension therefor, where the type can indicate a particular data type or functionality associated with an extension group/extension group elements, such as whether the extension group/extension group element is a “flat” textual element, and is a link or path (such as a web address), or has some other functionality—such as a list or interval definition that can be used in application processing). 
The user interface screen 900 can include additional information describing a particular extension group (or extension thereof), such as a column 940 that provides a status of the extension group (for example, whether the extension group has been published/activated, including for a particular app, which can be an app to which the user interface screen 900 is specific), a column 942 that identifies a user who created a particular extension group, and a column 944 that identifies a date on which the extension group was created (or, in other cases, the columns 942, 944 can indicate an individual who last modified an extension group, and a date the extension group was last modified). Example 10— Example Proposal Generation User Interface FIG. 10 illustrates an embodiment of a user interface screen 1000 that displays a proposal that can be generated by the proposal generator for incorporating extensions associated with an extension group (or other extension group) into an app. The user interface screen 1000 includes at least a portion of the extensions associated with the app, extensions 1010, 1014, and 1018. From reading layout information associated with the app, program logic determines that the layout includes an object page 1022, a header 1026, and a table filter 1030 as locations/layout elements where the extensions can be implemented in the app. A user can select a location where a particular extension should be implemented using an appropriate user interface control for a given layout location 1022, 1026, 1030. The extension group and location information can be saved, and the app can be suitably modified if a user decides to activate/publish the extensions to an app (or a particular instance of an app). If the extensions are activated, the app can be modified to include the extensions at the locations specified in the proposal (or, at this point, an extension group implementation definition), which can optionally involve modifying particular computing objects (such as data objects or artifacts, such as classes or elements of a virtual or physical data model). Example 11— Example Technique for Applying Extension Groups to an App FIG. 11 is a flowchart of a method 1100 for associating at least one extension group with at least one app. The method 1100 can be implemented in the computing environment 400 of FIG. 4 or the computing environment 500 of FIG. 5, and the process of FIG. 6 or the process of FIG. 8 can be particular examples of the method 1100. At 1110, a list of one or more extension groups is rendered on a user interface, where at least a first extension group of the one or more extension groups includes a plurality of extensions. A selection of the at least a first extension group is received through the user interface at 1114, where the selection indicates that at least a first app is to be extended using the at least a first extension group. At 1118, an identifier of the at least a first extension group is associated with an identifier of the at least a first app. One or more computing artifacts of the at least a first app are updated at 1122 to include extensions of the plurality of extensions of the at least a first extension group. Example 12— Computing Systems FIG. 12 depicts a generalized example of a suitable computing system 1200 in which the described innovations may be implemented. The computing system 1200 is not intended to suggest any limitation as to scope of use or functionality of the present disclosure, as the innovations may be implemented in diverse general-purpose or special-purpose computing systems.
With reference toFIG.12, the computing system1200includes one or more processing units1210,1215and memory1220,1225. InFIG.12, this basic configuration1230is included within a dashed line. The processing units1210,1215execute computer-executable instructions, such as for implementing a data archival environment, and associated methods, such as described Examples 1-11. A processing unit can be a general-purpose central processing unit (CPU), processor in an application-specific integrated circuit (ASIC), or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example,FIG.12shows a central processing unit1210as well as a graphics processing unit or co-processing unit1215. The tangible memory1220,1225may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s)1210,1215. The memory1220,1225stores software1280implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s)1210,1215. A computing system1200may have additional features. For example, the computing system1200includes storage1240, one or more input devices1250, one or more output devices1260, and one or more communication connections1270, including input devices, output devices, and communication connections for interacting with a user. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing system1200. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing system1200, and coordinates activities of the components of the computing system1200. The tangible storage1240may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing system1200. The storage1240stores instructions for the software1280implementing one or more innovations described herein. The input device(s)1250may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing system1200. The output device(s)1260may be a display, printer, speaker, CD-writer, or another device that provides output from the computing system1200. The communication connection(s)1270enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier. The innovations can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing system on a target real or virtual processor. Generally, program modules or components include routines, programs, libraries, objects, classes, components, data structures, etc. 
that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing system. The terms “system” and “device” are used interchangeably herein. Unless the context clearly indicates otherwise, neither term implies any limitation on a type of computing system or computing device. In general, a computing system or computing device can be local or distributed, and can include any combination of special-purpose hardware and/or general-purpose hardware with software implementing the functionality described herein. In various examples described herein, a module (e.g., component or engine) can be “coded” to perform certain operations or provide certain functionality, indicating that computer-executable instructions for the module can be executed to perform such operations, cause such operations to be performed, or to otherwise provide such functionality. Although functionality described with respect to a software component, module, or engine can be carried out as a discrete software unit (e.g., program, function, class method), it need not be implemented as a discrete unit. That is, the functionality can be incorporated into a larger or more general-purpose program, such as one or more lines of code in a larger or general-purpose program. For the sake of presentation, the detailed description uses terms like “determine” and “use” to describe computer operations in a computing system. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation. Example 13— Cloud Computing Environment FIG.13depicts an example cloud computing environment1300in which the described technologies can be implemented. The cloud computing environment1300comprises cloud computing services1310. The cloud computing services1310can comprise various types of cloud computing resources, such as computer servers, data storage repositories, networking resources, etc. The cloud computing services1310can be centrally located (e.g., provided by a data center of a business or organization) or distributed (e.g., provided by various computing resources located at different locations, such as different data centers and/or located in different cities or countries). The cloud computing services1310are utilized by various types of computing devices (e.g., client computing devices), such as computing devices1320,1322, and1324. For example, the computing devices (e.g.,1320,1322, and1324) can be computers (e.g., desktop or laptop computers), mobile devices (e.g., tablet computers or smart phones), or other types of computing devices. For example, the computing devices (e.g.,1320,1322, and1324) can utilize the cloud computing services1310to perform computing operations (e.g., data processing, data storage, and the like). Example 14— Implementations Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth herein. 
For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods. Any of the disclosed methods can be implemented as computer-executable instructions or a computer program product stored on one or more computer-readable storage media and executed on a computing device (e.g., any available computing device, including smart phones or other mobile devices that include computing hardware). Tangible computer-readable storage media are any available tangible media that can be accessed within a computing environment (e.g., one or more optical media discs such as DVD or CD, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as flash memory or hard drives)). By way of example and with reference toFIG.12, computer-readable storage media include memory1220and1225, and storage1240. The term computer-readable storage media does not include signals and carrier waves. In addition, the term computer-readable storage media does not include communication connections (e.g.,1270). Any of the computer-executable instructions for implementing the disclosed techniques as well as any data created and used during implementation of the disclosed embodiments can be stored on one or more computer-readable storage media. The computer-executable instructions can be part of, for example, a dedicated software application or a software application that is accessed or downloaded via a web browser or other software application (such as a remote computing application). Such software can be executed, for example, on a single local computer (e.g., any suitable commercially available computer) or in a network environment (e.g., via the Internet, a wide-area network, a local-area network, a client-server network (such as a cloud computing network, or other such network) using one or more network computers. For clarity, only certain selected aspects of the software-based implementations are described. Other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any specific computer language or program. For instance, the disclosed technology can be implemented by software written in C++, Java, Perl, JavaScript, Python, Ruby, ABAP, SQL, Adobe Flash, or any other suitable programming language, or, in some examples, markup languages such as html or XML, or combinations of suitable programming languages and markup languages. Likewise, the disclosed technology is not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well known and need not be set forth in detail in this disclosure. Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means. 
The disclosed methods, apparatus, and systems should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone and in various combinations and sub combinations with one another. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present, or problems be solved. The technologies from any example can be combined with the technologies described in any one or more of the other examples. In view of the many possible embodiments to which the principles of the disclosed technology may be applied, it should be recognized that the illustrated embodiments are examples of the disclosed technology and should not be taken as a limitation on the scope of the disclosed technology. Rather, the scope of the disclosed technology includes what is covered by the scope and spirit of the following claims.
62,676
11861378
DETAILED DESCRIPTION Graphical user interfaces (GUIs) are used in a wide variety of applications to present information to a user, and GUIs may be presented on a wide variety of devices, including but not limited to computer screens, tablets, smartphones, smart watches, and smart appliances. A device that presents a GUI may receive information or instructions for how to present or render that GUI on a display. For example, a device that presents a web page may receive the instructions in one or more files, such as HTML files, CSS (cascading style sheets) files, or JavaScript files. For another example, an Android device may receive the instructions in one or more files, such as XML files. Other devices may receive the information for presenting the GUI in another format, and the techniques described herein are not limited to any particular manner of instructing a device to present a GUI. Although the examples herein will use HTML as an example of instructions for presenting a GUI, the techniques described herein are not limited to HTML and web pages and may be applied with any format of instructions for presenting a GUI. The techniques described herein relate to representing instructions (such as code, software, or scripts) for presenting a GUI page (e.g., an HTML file) as a vector in a vector space. Vector space representations have been successfully applied in other applications (e.g., word embeddings and sentence encodings), and accordingly a vector space representation of a GUI page may be used in applications of GUI pages. For example, to determine if two GUI pages have similar content, a distance may be computed between the vector space representations of the two GUI pages. As used herein, a GUI page may include any user interface that is presented to a user, such as a user interface presented by a device (e.g., a computer screen, monitor, tablet, or portable device) or that is otherwise visible (e.g., projected onto a surface). A GUI page may be presented as a web page, may be presented by an application or “app”, or may be presented by any other appropriate means. A GUI page may also be a portion of a larger user interface, such as a window in a larger user interface or any other relevant portion of a larger user interface. FIG. 1A illustrates an example of a vector space representation of words or word embeddings. A word embedding is a vector in a vector space that represents the word but does so in a manner that preserves useful information about the meaning of the word. For example, the word embeddings may be constructed so that words with similar meanings or categories may be close to one another in the vector space. For example, the word embeddings for “cat” and “cats” may be close to each other because they have similar meanings, and the words “cat” and “dog” may be close to each other because they both relate to pets. FIG. 1B illustrates an example of a vector space representation of sentences or sentence encodings. Similar to word embeddings, a sentence encoding is a vector in a vector space that represents the sentence but does so in a manner that preserves useful information about the meaning of the sentence. For example, the sentence encodings may be constructed so that sentences with similar meanings or categories may be close to one another in the vector space. For example, the sentence encodings for “yes” and “please do” may be close to each other because they have similar meanings.
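As a concrete illustration of comparing vector space representations, the short sketch below computes a cosine distance between embedding vectors; smaller distances indicate more similar items, whether words, sentences, or, as described below, GUI elements and GUI pages. The embedding values are invented solely for illustration.

# Illustrative comparison of two vector space representations using cosine distance.
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

embedding_cat = [0.8, 0.1, 0.3]            # made-up embedding values
embedding_cats = [0.7, 0.2, 0.3]
embedding_stock_market = [0.0, 0.9, -0.4]

# "cat" and "cats" are closer to each other than "cat" is to "stock market".
print(cosine_distance(embedding_cat, embedding_cats))
print(cosine_distance(embedding_cat, embedding_stock_market))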
The techniques described herein relate to computing vector space representations of GUI elements (e.g., HTML elements) and GUI pages (e.g., web pages). As an illustrative example, an example web page is now described. FIG. 2 is an example of a portion of an HTML file for presenting a GUI page. An HTML file is made up of HTML elements that may be represented using a tree structure. The top or root of the tree structure is the <html> element. In this example, the <html> element has two child elements, the <head> element and the <body> element. The <head> and <body> elements may each have multiple child elements and so forth. For clarity of presentation, terminology relating to HTML elements will now be described. An HTML element has a “tag” that indicates the type of HTML element. Examples of tags include html, head, body, div, form, p, and so forth. Some HTML elements may have an opening tag and a closing tag with other content (such as text or other HTML elements) between the opening tag and the closing tag. Some HTML elements may consist of a single tag, such as the <img> element for an image. An HTML element may have one or more attributes, where each attribute may be a name or a name/value pair. For example, the following HTML element <label class=“control-label”>Name:</label> has one attribute, where the name of the attribute is “class” and the value of the attribute is “control-label”. An HTML element may have more than one attribute. An element may include text and/or one or more child elements between the opening tag and the closing tag. In this situation, the text generally corresponds to the element, while the child elements are separate elements even though they are represented within their parent element. For example, for the element <p>Hello <span>World!</span></p> the text “Hello” is part of the paragraph element while the span element and its contents correspond to a child element. In some implementations, a DOM (document object model) representation of a GUI page may be used in addition to or instead of HTML. A DOM representation of a web page may include modifications that are not present in an HTML file received at a device. For example, with a single-page web app, JavaScript may modify the DOM without receiving HTML corresponding to those changes. Where HTML is used in examples described herein, it is understood that a DOM representation may be based on HTML code, and the DOM representation may be used in addition to or instead of the HTML representation. For other kinds of GUI pages, the instructions for presenting the GUI may have similarities with the HTML instructions described above but may also have differences. For example, XML instructions may include additional features, such as one or more namespaces that are associated with elements or attributes. GUI Element Embeddings A GUI element embedding is a vector in a vector space that represents a GUI element. The GUI element embedding may have similar properties to word embeddings and may be used for applications of GUI pages. For example, the similarity of two GUI elements may be determined by computing a distance between the GUI element embeddings of the two GUI elements. Techniques for computing GUI element embeddings are now described. FIG. 3 is an example system 300 for training GUI element embeddings from a training corpus of GUI pages. GUI element embedding model 310 may be trained by processing a corpus of GUI pages.
The output of the training process may include GUI element embeddings for individual GUI elements seen in the training corpus and possibly an additional out-of-vocabulary (OOV) GUI element embedding for rarely seen GUI elements. For example, the training corpus may be a corpus of web pages, and the training process may determine HTML element embeddings for individual HTML elements in the training corpus of web pages. GUI elements may include any appropriate elements such as text, boxes, hidden elements, drop-down menus, a span of text, tables, and the like. In some instances, GUI elements may be one or more individual HTML tags or a group of HTML tags. In some instances, a GUI element may include one or more HTML tags and any content that is encapsulated by the tags. In some embodiments, each of a HTML tag, attributes of a tag, and the content of a tag may each be treated as different GUI elements. GUI element embedding model310may include any appropriate mathematical model, such as a neural network (e.g., a multi-layer perceptron). GUI element embedding model310may be trained to process a representation of an input GUI element (e.g., one or more one-hot vectors) and predict representations of neighboring GUI elements, such as the parent GUI element of the input GUI element, a sum of representations of child elements of the input GUI element, and a sum of representations of sibling GUI elements of the input GUI element. Other variations are possible as described in greater detail below. FIG.4illustrates an example representation of a GUI element that may be processed to train GUI element embeddings. In some implementations, a GUI element may be represented as one or more one-hot vectors. A one-hot vector is a vector that has a value of 1 in one position and a value of 0 in all of the other positions. The length of the one-hot vector may correspond to the number of possible values (with a possible additional value to represent out-of-vocabulary values). For example, if there are 8 possible values, then the values may be represented with one-hot vectors of length 8. A GUI element may also be represented using a one-cold vector that has a value of 0 in one position and a value of 1 in all of the other positions (one-hot and one-cold vectors may be used interchangeably for the techniques described herein). In some implementations, a GUI element may be represented using a single one-hot vector. For example, the number of unique GUI elements may be determined, and one-hot vectors of that length may be used to represent the GUI elements. In some implementations, a GUI element may be represented using more than one one-hot vector. For example, inFIG.4, a GUI element may be represented using a combination of three one-hot vectors. InFIG.4, the possible GUI element tags may be represented with a first one-hot vector. The length of a one-hot vector may be based on or related to the total number of possible different values or instances the vector may be used to represent. To represent N possible values for a GUI element, a one-hot vector may have a length of at least N. For example, if the number of possible tags is 110, then a one-hot vector of at least length 110 may be used to represent all the possible tags. InFIG.4, the possible element attributes may be represented using a second one-hot vector. For example, the element's attributes may be concatenated to form a string that is then used to generate the one-hot vector. 
In some implementations, other techniques may be implemented to remove differences that are not relevant to the GUI element. For example, the element attributes may be sorted by name. In some implementations, each attribute of an HTML element may be represented as a one-hot vector, and the combination of all the attributes of the element may be represented as a sum of the one-hot vectors for individual attributes. The number of possible element attributes may be larger than the number of possible tags, and thus a one-hot vector for representing attributes may be longer than the one-hot vector for tags. In FIG. 4, the possible element text may be represented using a third one-hot vector. In some implementations, one-hot vectors may be created for the element text where the length of the one-hot vector is equal to the number of unique text strings in GUI elements. The number of possible text strings in GUI elements may be larger than the number of possible element attributes, and thus a one-hot vector for GUI element text may be longer than the one-hot vector for GUI element attributes. In some implementations, the GUI element text may be represented as a sum of one-hot vectors of individual words or a bag of words. In some implementations, the text of GUI elements may be further processed before determining the one-hot vectors. The corpus of GUI elements being used to train GUI element embeddings may include data from real GUIs (e.g., websites) and thus may contain personal information, such as personally identifiable information, or other sensitive information, such as credit card numbers and social security numbers. To reduce risks associated with processing personal information, the text of GUI elements may be processed with a cryptographic hash function to conceal or prevent disclosure of the sensitive information. The computed hash values may be used in place of the original GUI element text. In some implementations, all GUI element text may be processed or hashed before being used to determine one-hot vectors, regardless of whether the GUI element text actually contains personal or sensitive information. Any appropriate modification or hash function may be used, such as secure hash algorithms (SHA) or digital signature algorithms (DSA). In some implementations, other portions of a GUI element may also be hashed before determining a one-hot vector. For example, an attribute value of a GUI element may include personal or sensitive information, and the attribute values may also be hashed before determining one-hot vectors for the attributes. A GUI element may accordingly be represented as a single one-hot vector, a combination of one-hot vectors (such as the concatenation of the three one-hot vectors illustrated in FIG. 4), or as a combination of sums of one-hot vectors (e.g., where GUI element text is represented as a bag of words). For clarity of presentation, where processing described herein relates to processing a one-hot representation, that processing may also include processing any combination of one-hot vectors described herein, such as the concatenation of one-hot vectors of FIG. 4 or the concatenation of sums of one-hot vectors. In some implementations, a single representation may be used to represent multiple, related GUI elements. For example, a GUI element may have a large number of child elements or sibling elements (e.g., hundreds or thousands) and it may not be feasible to process representations of all such elements individually.
The single representation of multiple GUI elements may be any appropriate combination, such as a sum of the representations of the individual elements (e.g., one-hot representations). For clarity of presentation, where processing described herein relates to processing a one-hot representation, that processing may also include processing a combination of one-hot representations, such as the sum of one-hot representations of related GUI elements. Combining the preceding paragraphs, where processing described herein relates to processing a one-hot representation, the one-hot representation may include any of the following: a one-hot vector (a vector with a 1 at one position and 0s at the other positions), a concatenation of two or more one-hot vectors, a sum of two or more one-hot vectors, a sum of two or more vectors where each vector in the sum is a one-hot representation of a GUI element, or any combination of the foregoing. In some implementations, additional information may be used in computing a GUI element embedding, such as the position of the GUI element in the GUI page or information about parent elements of the GUI element. In some implementations, a one-hot representation of a GUI element may include the depth or distance of the GUI element from the root element of the GUI page, such as by representing the depth as a one-hot vector (e.g., the depth for a first GUI element may be determined from the number of GUI elements between the first GUI element and the root GUI element of a GUI element tree). In some implementations, a one-hot representation of a GUI element may include the lateral position or distance from the left-most element at the same depth, such as by representing the lateral position as a one-hot vector. In some implementations, a one-hot representation of a GUI element may include information about one or more ancestor elements (e.g., parent and grandparent elements), such as the tags of the ancestor elements represented as one-hot vectors or as a sum of one-hot vectors. FIG. 5 illustrates example relationships between GUI elements. In FIG. 5, a starting or current element is indicated by E. The element has a parent element P, a grandparent element GP, three child elements C, and three grandchild elements GC. In addition, element E has two sibling elements S, two uncle elements U, two nephew elements N, two cousin elements Cs, and two cousin elements once-removed X. The GUI element embedding model 310 of FIG. 3 may be generalized to process any one-hot representation of an input GUI element (or a combination of GUI elements) and predict one or more one-hot representations of neighboring GUI elements.
The following are non-limiting examples of generalizations of GUI element embedding model310:(i) a model that processes a one-hot representation of an input element to predict a one-hot representation of a single parent element, a one-hot representation of a single sibling element (e.g., a randomly selected sibling), and a one-hot representation of a single child element (e.g., a randomly selected child);(ii) a model that processes a one-hot representation of an input element to predict a one-hot representation of a single parent element, a one-hot representation of all child elements, and a one-hot representation of all sibling elements;(iii) a model that processes a one-hot representation of an input element to predict a one-hot representation of a parent element and all grandparent elements, a one-hot representation of all child elements and grandchild elements, and a one-hot representation of all sibling elements and all cousin elements;(iv) a model that processes a one-hot representation of an input element to predict a one-hot representation of a combination of a parent element, all child elements, and all sibling elements; and(v) a model that processes a one-hot representation of a combination of an input element and all siblings of the input element to predict a one-hot representation of a parent element, a one-hot representation of all child elements of the input element, and a one-hot representation of all grandchild elements of the input element. FIG.6is an example architecture of a mathematical model for GUI element embedding model310to compute GUI element embeddings. In the example ofFIG.6, the mathematical model is a neural network that includes a multi-layer perceptron. InFIG.6, the input to the neural network is a one-hot representation of an input GUI element, such as any of the one-hot representations described herein. The one-hot representation is processed by a first linear layer to compute a GUI element embedding. The GUI element embedding is then processed by a second linear layer to predict a one-hot representation of a neighboring GUI element, such as a parent element, a sum of child elements, or a sum of sibling elements. Other layers may be added to predict other neighboring elements. For example, the second layer may be used to predict a one-hot representation of a parent element. A third layer (parallel to the second layer) may be added that processes the GUI element embedding to predict a one-hot representation of a combination of child elements. A fourth layer (also parallel to the second layer) may be added that processes the GUI element embedding to predict a one-hot representation of a combination of sibling elements, and so forth. FIG.7is a flowchart of an example method for training a mathematical model for computing GUI element embeddings. InFIG.7and other flowcharts herein, the ordering of steps is a non-limiting example, not all steps are required, and steps may be combined or divided. The methods described by any flowcharts described herein may be implemented by any of the computers or systems described herein. At step710, a training corpus of GUI pages is obtained. Any appropriate training corpus may be used. For example, in some implementations, software may be used to record sequences of GUI pages as users of a GUI (e.g., users of a website) navigate through different pages of the GUI. Other initialization steps may be performed as well, such as initializing parameters of a GUI element embedding model with random values. 
At step720, one-hot representations are computed for GUI elements of the training corpus. In some implementations, a one-hot representation may be computed for all of the GUI elements in the corpus, and in some implementations, a one-hot representation may be computed for a subset of the GUI elements in the training corpus. The one-hot representations may include any of the one-hot representations described herein. At steps730to770, the training process iterates over the training corpus to process individual GUI elements in the training corpus. For clarity of presentation, the training process is described as iterating over a single GUI element at a time, but in some implementations, the training process may iterate over batches of GUI elements. At step730, a GUI element is selected. The GUI element may be selected using any appropriate techniques, such as random selection or sequentially iterating over GUI elements in the training corpus. At step740, a training input is determined for the selected GUI element. The training input may include any of the one-hot representations described herein. At step750, a second training input is determined for a neighboring GUI element of the selected GUI element. The neighboring element may be a direct neighbor (such as a parent or child) or may be a more distant neighbor (such as an uncle or a grandparent). The second training input may include any of the one-hot representations described herein. In some implementations, the second training input for a neighboring element may be for a group of neighboring elements, such as all children of the selected element, all siblings of the selected element, or any other group of neighboring elements. The second training input for a group of neighboring element may be any combination of one-hot representations of the GUI elements in the group, such as a sum of the one-hot representations. In some implementations, additional training inputs may be used. For example, a third training input may be determined that corresponds to a second neighboring element or a second group of neighboring elements. At step760, parameters of the model are updated using the training inputs. The model parameters may be updated using any appropriate techniques. In some implementations, such as when the GUI element embedding model includes a neural network, forward propagation may be performed using the training input for the selected GUI element to compute a model output. An error value may be determined using the model output and the second training input for the neighboring GUI element (and possibly other training inputs for other neighboring GUI elements). Back propagation may then be performed to update the model parameters. In some implementations, the error value may be computed using a cross entropy loss or a squared error loss. At step770, it is determined if the training process is complete. If the training process is not complete, then processing continues to step730where another GUI element is selected. If the training process is complete, then processing continues to step780. Any appropriate techniques may be used to determine if the training process is complete. For example, training may be complete after a fixed number of iterations or after a convergence parameter reaches a desired threshold. At step780, the trained GUI element embeddings are output. The GUI element embeddings may be obtained from the model using any appropriate techniques. 
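The following is a minimal, illustrative NumPy sketch tying together the one-hot representation of FIG. 4, the embedding model architecture of FIG. 6, and a single training update of the flowchart above. The vocabularies, dimensions, text bucketing, target normalization, and the single softmax head are simplifying assumptions made for this sketch, not the disclosed implementation; only a parent-prediction head is shown, and additional parallel heads for child and sibling elements would be added in the same way.

# Illustrative sketch: one-hot GUI element representation, a two-layer embedding
# model, and one gradient update predicting the parent element's representation.
import hashlib
import numpy as np

TAGS = ["html", "head", "body", "div", "p", "label", "OOV"]
ATTRS = ['class="control-label"', 'class="header"', "", "OOV"]
NUM_TEXT_BUCKETS = 16   # simplification; a vocabulary of unique hashed strings could be used

def one_hot(index, length):
    v = np.zeros(length)
    v[index] = 1.0
    return v

def hash_text(text):
    # Text is hashed so that personal or sensitive content is not used directly.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def element_representation(tag, attrs, text):
    # Concatenated one-hot representation in the spirit of FIG. 4 (tag, attributes, text).
    tag_idx = TAGS.index(tag) if tag in TAGS else TAGS.index("OOV")
    attr_idx = ATTRS.index(attrs) if attrs in ATTRS else ATTRS.index("OOV")
    text_idx = int(hash_text(text), 16) % NUM_TEXT_BUCKETS
    return np.concatenate([one_hot(tag_idx, len(TAGS)),
                           one_hot(attr_idx, len(ATTRS)),
                           one_hot(text_idx, NUM_TEXT_BUCKETS)])

rng = np.random.default_rng(0)
input_dim = len(TAGS) + len(ATTRS) + NUM_TEXT_BUCKETS
embed_dim = 8
W1 = rng.normal(scale=0.1, size=(embed_dim, input_dim))   # first linear layer (embedding)
W2 = rng.normal(scale=0.1, size=(input_dim, embed_dim))   # head predicting a parent element

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_step(x, parent_target, lr=0.1):
    """One forward/backward pass predicting the parent's representation from x."""
    global W1, W2
    embedding = W1 @ x                              # GUI element embedding
    probs = softmax(W2 @ embedding)                 # predicted parent representation
    target = parent_target / parent_target.sum()    # normalized so the gradient below is exact
    d_logits = probs - target                       # gradient of the cross-entropy-style loss
    d_W2 = np.outer(d_logits, embedding)
    d_W1 = np.outer(W2.T @ d_logits, x)
    W2 -= lr * d_W2                                 # back propagation / parameter update
    W1 -= lr * d_W1
    return embedding

# Toy training pair: a <label> element and its <div> parent.
x = element_representation("label", 'class="control-label"', "Name:")
parent = element_representation("div", "", "")
embedding = train_step(x, parent)

After training over a corpus, the embedding for any element in this sketch is simply W1 @ x, the output of the first linear layer, which corresponds to the manner of obtaining embeddings described in the next paragraph.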
For example, where the GUI element embedding model includes a neural network, such as a multi-layer perceptron, the GUI element embeddings may be the intermediate activation obtained at the output of the first fully connected layer. The GUI element embeddings may then be used for any applications relating to processing GUI pages. In some implementations, the GUI element embeddings may be used to compute GUI page encodings as described below. GUI Page Encodings A GUI page encoding is a vector in a vector space that represents a GUI page. The GUI page encoding may have similar properties to sentence encodings and may be used for applications of GUI pages. For example, the similarity of two GUI pages may be determined by computing a distance between the GUI page encodings of the two GUI pages. Techniques for computing GUI page encodings are now described. When computing GUI element embeddings, as described above, it may be possible to compute a GUI element embedding in advance for each possible GUI element (including a possible OOV GUI element embedding for less common GUI elements). When computing GUI page encodings for GUI pages, it may be preferred to compute the GUI page encoding for each GUI page as needed since there may be greater variability in GUI pages (such as when a website presents information about a large number of products, users, customers, or employees). Accordingly, a mathematical model may be trained that computes a GUI page encoding from a GUI page by processing the GUI elements of the GUI page. This mathematical model may be referred to as a GUI page encoding model. An implementation of a GUI page encoding model may process a representation of GUI elements of a GUI page to compute a vector that represents the GUI page in a vector space. A GUI page encoding model may process any appropriate representation of GUI elements, such as GUI element embeddings or one-hot representations. Now described are example implementations of a GUI page encoding model. In some implementations, a GUI page encoding model may compute a weighted sum of GUI element representations of the GUI page.FIG.8is an example system800for computing a GUI page encoding as a weighted sum of GUI element representations. InFIG.8, GUI-element-1, GUI-element-2, to GUI-element-N correspond to GUI element representations of the GUI page. InFIG.8, weight computation component810computes a weight from a GUI element representation. Weight computation component810may use any appropriate techniques to compute the weights. In some implementations, weight computation component810may compute the weights using a neural network. For example, a neural network may process a GUI element representation with a multi-layer perceptron to compute the weight for the GUI element. The neural network may be trained to compute larger weights for more important GUI elements (e.g., elements that relate to the purpose of the GUI page) and compute smaller weights for less important GUI elements (e.g., elements that appear on many pages, such as elements in the header or footer of the GUI page). Weighted sum computation component820may then compute the GUI page encoding as a weighted sum of the GUI element representations using any appropriate techniques. In some implementations, a weighted sum of GUI element representations may also incorporate information about the position of the GUI element in the GUI page (where the GUI element representation does not already include position information). 
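The following is a minimal sketch of the weighted-sum encoding of FIG. 8, assuming GUI element embeddings are already available. The single scoring vector with a sigmoid stands in for weight computation component 810, which, as described above, may be a neural network trained so that more important elements receive larger weights; the values and dimensions are invented for illustration.

# Illustrative sketch of a GUI page encoding computed as a weighted sum of
# GUI element representations (FIG. 8). Parameters are untrained stand-ins.
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8
num_elements = 5

# Stand-in GUI element embeddings for one GUI page (one row per element).
element_embeddings = rng.normal(size=(num_elements, embed_dim))

# Parameters of the weight computation component (hypothetical initial values;
# in practice these would be learned, e.g., via the training tasks of FIG. 11).
w_score = rng.normal(size=embed_dim)
b_score = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def page_encoding(elements):
    weights = sigmoid(elements @ w_score + b_score)   # one weight per GUI element
    return weights @ elements                         # weighted sum -> page encoding vector

encoding = page_encoding(element_embeddings)
print(encoding.shape)   # (embed_dim,) -- a single vector representing the GUI page

Position information, discussed next, could be appended to each element representation before the weights and the weighted sum are computed.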
To include information about the position of a GUI element, the representation of a GUI element may be combined with other information that indicates the position of the element in the GUI page. For example, a GUI element representation may be combined with another vector that indicates the position of the GUI element by concatenating them together. The combined vector may be used directly or projected into a smaller vector space, such as by processing the combined vector with a neural network that outputs a vector with a shorter length. Where the combined vector is projected into a smaller vector space, the projection may be learned as part of training task, such as any of the training tasks described herein. Any appropriate position information may be used, such as any of the following: a one-hot vector indicating the depth or distance of the element from the root element; a one-hot vector indicating a lateral position or distance from the left-most element at the same depth; or a representation of the parent GUI element (since the position of all GUI elements can be determined from combinations of GUI elements and their parents). Any other appropriate techniques may also be used to combine a GUI element representation with position information. Position information may be absolute (e.g., relative to a root of a GUI element tree) or may be relative to another GUI element. For example, techniques similar to frequency modulation may be used to modify a GUI element representation to include position information. For another example, the position of a GUI element may be encoded using prime number factorization. In some implementations, a GUI page encoding model may be adapted to the hierarchical relationship of GUI elements in a GUI page. For example, a GUI page encoding model may process all the GUI element of a GUI page in a specified order, such as depth first (pre-order, in-order, post-order, or any other order) traversal or a breadth first traversal. A mathematical model, such as a neural network may sequentially process the representations of each node in the specified order to output a final value to represent the GUI page. In some implementations, a tree of GUI elements may be processed using a depth-first post order traversal. An initial representation of each node may be the corresponding GUI element representation. Processing may start at leaves of subtrees of the GUI page, and the representations may be updated as processing proceeds up the tree. For illustration,FIG.9is an example tree of GUI elements. The representations of nodes A1, A2, and A3may be processed with a mathematical model to compute a combined representation of these nodes that is referred to as A*. The A* representation may then be combined with the representation of node B3to compute a combined representation of the subtree under node B3that is denoted as B3@. Afterwards, the representations of nodes B1and B2and the subtree B3@ may be processed with the mathematical model to compute a combined representation of these nodes and the nodes underneath them and that is referred to as B*. The B* representation may be combined with the representation of node D2to compute a combined representation of the subtree under D2that is denoted as D2@. This process may be continued for the remaining nodes of the tree to compute a representation E1@ that represents the entire tree of nodes. Any appropriate mathematical model may be used to process the nodes of the tree. 
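The depth-first, post-order combination of FIG. 9 can be sketched as a short recursion, shown below. The mean over child representations is a simplifying stand-in for the recurrent network described in the next paragraph, and the combination matrix is a hypothetical, untrained parameter that would in practice be learned with the rest of the GUI page encoding model.

# Illustrative sketch of post-order combination of a GUI element tree (FIG. 9).
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8
W_combine = rng.normal(scale=0.1, size=(embed_dim, 2 * embed_dim))
EMPTY = np.zeros(embed_dim)   # used for leaf nodes, which have no children

def encode_subtree(node):
    """node is (embedding, [child nodes]); returns a vector for the whole subtree."""
    embedding, children = node
    if children:
        # Combine child subtree representations (a mean stands in for the recurrent network).
        combined_children = np.mean([encode_subtree(c) for c in children], axis=0)
    else:
        combined_children = EMPTY
    # Concatenate the node's own embedding with the combined children and project back.
    return np.tanh(W_combine @ np.concatenate([embedding, combined_children]))

def leaf():
    return (rng.normal(size=embed_dim), [])

# A toy GUI element tree, loosely following FIG. 9 (embeddings are random stand-ins).
tree = (rng.normal(size=embed_dim), [                          # root (E1)
    (rng.normal(size=embed_dim), [leaf(), leaf()]),            # subtree with two leaves
    (rng.normal(size=embed_dim), [                             # subtree (D2)
        (rng.normal(size=embed_dim), [leaf(), leaf(), leaf()]),  # B3 with children A1-A3
    ]),
])
page_vector = encode_subtree(tree)   # representation of the entire tree (E1@)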
In some implementations, a recurrent neural network may be used to process the representations of each set of sibling elements (including subtree representations corresponding to a node). For example, a recurrent neural network may sequentially process A1, A2, and A3to compute A*. Similarly, the recurrent neural network may sequentially process B1, B2, and B3@ to compute B*. Any appropriate techniques may be used to combine the representation of sibling nodes (e.g., A*) with their parent node (e.g., B3). In some implementations, the representation of the sibling nodes may be added to the representation of the parent node. In some implementations, the representation of the sibling nodes may be concatenated with the representation of the parent node, and nodes without children may be concatenated with an empty (or some other) vector. For example, B3may be concatenated with A* and each of B1and B2may be concatenated with an empty vector. The above examples of GUI page encoding models include parameters that may need to be learned or trained to improve the performance of the resulting GUI page encodings. Now described are example techniques for training the parameters of a GUI page encoding model. FIG.10Ais an example system1000for training a GUI page encoding model while performing a task.FIG.10Bis an example system1050for using the trained GUI page encoding model to compute a GUI page encoding from a GUI page. InFIGS.10A and10B, GUI page encoding model1010processes GUI element representations of a GUI page to compute a GUI page encoding. InFIG.10A, GUI page encoding model1010is trained in the context of performing another task. Processing related to the task may be implemented by task processing component1020that computes a task output. Any appropriate task may be used to train GUI page encoding model1010. For example, the task output may be the sentiment of the GUI page; people, places, or things mentioned in the GUI page; or whether the GUI page includes personal, sensitive, or confidential information. In some implementations the task may involve supervised training of a task model for performing the task, where the task model is trained simultaneously with GUI page encoding model1010. The training data for training the task model may include a corpus of GUI pages where GUI pages of the corpus are associated with one or more labels. For example, the labels may include any of the following: a sentiment of the GUI page (e.g., if the GUI page includes a review of a product or service, whether the review is positive, neutral, or negative); people, places, or things mentioned in the GUI page; or whether the GUI page includes personal, sensitive, or confidential information. FIG.11is a flowchart of an example method for training a GUI page encoding model in the context of performing a task. At step1110, a training corpus of GUI pages is obtained where GUI pages may be associated with one or more labels. The training corpus may be obtained in any appropriate manner, such as recording use of a GUI (e.g., through specialized software) or by using software to automatically navigate a GUI. Any appropriate labels may be used, such as any of the labels described herein. Other initialization steps may be performed as well, such as initializing the parameters of the GUI page encoding model and/or other models with random values. At steps1120to1160, the training process iterates over the training corpus to process individual GUI pages in the training corpus. 
For clarity of presentation, the training process is described by iterating over a single GUI page at a time, but in some implementations, the training process may iterate over batches of GUI pages. At step 1120, a GUI page is selected. The GUI page may be selected using any appropriate techniques, such as random selection or sequentially iterating over GUI pages in the training corpus. At step 1130, a GUI page encoding is computed by processing GUI element representations of the GUI page with the GUI page encoding model. Any appropriate techniques may be used to compute the GUI page encoding, such as any of the techniques described herein. At step 1140, the GUI page encoding is processed by a task processing component to predict one or more labels that are associated with the GUI page. Any appropriate task and task processing component may be used, such as any of the tasks described herein. The task processing component may include any appropriate mathematical model, such as a neural network (e.g., multi-layer perceptron, convolutional neural network, or recurrent neural network) or a support vector machine. At step 1150, an error value is computed for the prediction by comparing the true label for the GUI page from the training corpus with the predicted label. Any appropriate techniques may be used to compute the error value. In some implementations, the error value may be computed as a cross entropy loss or a squared error loss. In some implementations, the error value may be zero if the predicted label is equal to the true label and a non-zero value otherwise. In some implementations, the error value may be a function of the predicted label and the true label, such as a distance between them. At step 1160, back propagation is performed using the error value to update the parameters of the GUI page encoding model and/or the task processing component using any appropriate techniques. In some implementations, the parameters of the task processing component may be fixed and only the parameters of the GUI page encoding model may be updated. At step 1170, it is determined whether the training process is complete. If the training process is not complete, then processing continues to step 1120 where another GUI page is selected. If the training process is complete, then processing continues to step 1180. At step 1180, the trained GUI page encoding model is output so that it may be used for applications of processing GUI pages. In some implementations, GUI page encoding model 1010 may be trained using more than one task. For example, two training corpuses of GUI pages may be available: a first training corpus of GUI pages with a first set of labels and a second training corpus of GUI pages with a second set of labels. A first task processing component may process a GUI page encoding to predict labels of the first set of labels and a second task processing component may process a GUI page encoding to predict labels of the second set of labels. A GUI page encoding model may be trained using both tasks, such as by alternating training iterations of the two tasks. After GUI page encoding model 1010 has been trained, it may be used to compute GUI page encodings as depicted in FIG. 10B. GUI page encoding model 1010 may be used for the same task that was used for training or may be used for different tasks. 
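As a hedged sketch of steps 1120 through 1180, the loop below trains a page encoding model and a task head jointly with a cross entropy loss, one of the error values mentioned above. The names page_encoder, task_head, and featurize (a caller-supplied function that returns the GUI element representations of a page) are placeholders for this sketch rather than components defined by the disclosure.

```python
import random

import torch
import torch.nn.functional as F


def train_page_encoder_with_task(page_encoder, task_head, featurize, corpus,
                                 num_iterations, lr=1e-3):
    """corpus is a list of (gui_page, integer_label) pairs."""
    params = list(page_encoder.parameters()) + list(task_head.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(num_iterations):
        gui_page, label = random.choice(corpus)           # step 1120: select a GUI page
        encoding = page_encoder(featurize(gui_page))      # step 1130: GUI page encoding
        logits = task_head(encoding)                      # step 1140: predict the label
        loss = F.cross_entropy(logits.unsqueeze(0),       # step 1150: error value
                               torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()                                   # step 1160: back propagation
        optimizer.step()
    return page_encoder                                   # step 1180: trained model
```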
In some implementations, GUI page encoding model1010may be trained using a training corpus of sequences of GUI pages.FIG.12Aillustrates an example sequence of GUI pages where GUI-1through GUI-5indicate GUI pages and A-1through A-5indicate actions performed (e.g., clicking on a particular GUI element with a mouse) to transition from one GUI page to the next GUI page. FIG.12Bis an example system1200for training a GUI page encoding model using sequences of GUI pages.FIG.12Bincludes GUI page encoding model1010, as described above, and page predictor component1210. Page predictor component1210includes a mathematical model that processes a first GUI page encoding of a first GUI page to predict a GUI page encoding of a subsequent page. In some implementations, page predictor component1210may also process an action performed on the first GUI page (e.g., a mouse click of a particular element) in predicting the GUI page encoding of a subsequent page. FIG.13is a flowchart of an example method for training a GUI page encoding model using sequences of GUI pages. At step1310, a training corpus of sequences of GUI pages is obtained. In some implementations, the sequences may include information about an action that was performed to transition from a GUI page to a subsequent GUI page, such as an element of GUI page that was acted upon and the action that was performed. Other initialization steps may be performed as well, such as initializing the parameters of the GUI page encoding model and/or other models with random values. At steps1320to1360, the training process iterates over the training corpus to process pairs of sequential GUI pages in the training corpus. For clarity of presentation, the training process is described as iterating over a single pair at a time, but in some implementations, the training process may iterate over batches of pairs of GUI pages. At step1320, first and second GUI pages are selected where the second GUI page was subsequent to the first GUI page. The pair of GUI pages may be selected using any appropriate techniques, such as random selection or sequentially iterating over pairs of GUI pages in the training corpus. At step1330, first and second GUI page encodings are computed by processing GUI element representations of the first and second GUI pages with the GUI page encoding model. Any appropriate techniques may be used to compute the GUI page encodings, such as any of the techniques described herein. At step1340, the first GUI page encoding is processed by a page predictor component to predict a GUI page encoding of a GUI page subsequent to the first GUI page. The GUI page predictor component may be implemented using any appropriate mathematical model, such as a neural network (e.g., multi-layer perceptron, convolutional neural network, or recurrent neural network) or a support vector machine. In some implementations, the page predictor component may also process a representation of the action that was performed to transition from the first GUI page to the second GUI page. For example, the GUI element that was acted upon may be represented as a GUI element embedding and the action that was performed may be represented as a one-hot vector (of the length of possible actions, such as click, double click, right click, text entry etc.). At step1350, an error value is computed for the prediction by comparing the second GUI page encoding with the predicted GUI page encoding. Any appropriate techniques may be used to compute the error value. 
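The page predictor of FIG. 12B and one training update of FIG. 13 might be sketched as below, assuming the action is represented by the acted-upon element's embedding concatenated with a one-hot action-type vector, as described above. PagePredictor, predictor_training_step, and featurize are names assumed for this sketch, and the mean-squared error shown is only one of the error values contemplated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PagePredictor(nn.Module):
    """Predicts the encoding of the next GUI page from the current page's encoding
    and a representation of the action performed on it."""

    def __init__(self, page_dim, element_dim, num_action_types):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(page_dim + element_dim + num_action_types, page_dim),
            nn.Tanh(),
            nn.Linear(page_dim, page_dim),
        )

    def forward(self, page_encoding, element_embedding, action_one_hot):
        x = torch.cat([page_encoding, element_embedding, action_one_hot], dim=-1)
        return self.net(x)


def predictor_training_step(page_encoder, predictor, optimizer, featurize,
                            first_page, second_page, element_embedding, action_one_hot):
    first_enc = page_encoder(featurize(first_page))       # step 1330
    second_enc = page_encoder(featurize(second_page))     # step 1330
    predicted = predictor(first_enc, element_embedding, action_one_hot)  # step 1340
    loss = F.mse_loss(predicted, second_enc)              # step 1350: error value
    optimizer.zero_grad()
    loss.backward()                                       # step 1360: back propagation
    optimizer.step()
    return loss.item()
```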
In some implementations, the error value may be computed as a distance or similarity (e.g., a mean-squared error, cosine similarity, or inner product) between the second GUI page encoding and the predicted GUI page encoding. At step 1360, back propagation is performed using the error value to update the parameters of the GUI page encoding model and/or the page predictor model using any appropriate techniques. In some implementations, the parameters of the page predictor model may be fixed and only the parameters of the GUI page encoding model may be updated. At step 1370, it is determined whether the training process is complete. If the training process is not complete, then processing continues to step 1320 where another pair of GUI pages is selected. If the training process is complete, then processing continues to step 1380. At step 1380, the trained GUI page encoding model is output so that it may be used for applications of processing GUI pages. In some implementations, GUI page encoding model 1010 may be trained using an autoencoder. FIG. 14 is an example system 1400 for training a GUI page encoding model using an autoencoder. In FIG. 14, GUI page encoding model 1010 processes a GUI page to compute a GUI page encoding and then GUI decoding component 1410 processes the GUI page encoding to reconstruct the GUI page from the GUI page encoding. FIG. 15 is a flowchart of an example method for training a GUI page encoding model using an autoencoder. At step 1510, a training corpus of GUI pages is obtained. The training corpus may be obtained in any appropriate manner. Other initialization steps may be performed as well, such as initializing the parameters of the GUI page encoding model and/or other models with random values. At steps 1520 to 1560, the training process iterates over the training corpus to process individual GUI pages in the training corpus. For clarity of presentation, the training process is described by iterating over a single GUI page at a time, but in some implementations, the training process may iterate over batches of GUI pages. At step 1520, a GUI page is selected. The GUI page may be selected using any appropriate techniques, such as random selection or sequentially iterating over GUI pages in the training corpus. At step 1530, a GUI page encoding is computed by processing GUI element representations of the GUI page with the GUI page encoding model. Any appropriate techniques may be used to compute the GUI page encoding, such as any of the techniques described herein. At step 1540, the GUI page encoding is processed by a GUI decoding component to reconstruct the GUI page. The GUI decoding component may include any appropriate mathematical model, such as a neural network (e.g., multi-layer perceptron, convolutional neural network, or recurrent neural network) or a support vector machine. At step 1550, an error value is computed by comparing the GUI page to the reconstructed GUI page. Any appropriate techniques may be used to compute the error value. In some implementations, the error value may be computed by computing a distance between GUI elements of the GUI page and the reconstructed GUI page. For example, the distance may be computed using GUI element representations or GUI element representations that have been modified to include position information (such as using any of the techniques described above). 
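One possible, deliberately simplified sketch of the encode/decode loop of FIGS. 14 and 15 appears below. It reconstructs a fixed-size block of GUI element representations and scores the reconstruction with a mean-squared error, whereas the surrounding text describes richer error values, including the graph alignment and local distance metrics discussed next. GuiPageDecoder, autoencoder_training_step, and featurize (assumed to return a tensor of element representations) are illustrative names only.

```python
# featurize is assumed to return a tensor of shape (num_elements, element_dim)
# with num_elements <= max_elements.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuiPageDecoder(nn.Module):
    def __init__(self, encoding_dim, element_dim, max_elements):
        super().__init__()
        self.max_elements = max_elements
        self.element_dim = element_dim
        self.net = nn.Linear(encoding_dim, max_elements * element_dim)

    def forward(self, page_encoding):
        flat = self.net(page_encoding)
        return flat.view(self.max_elements, self.element_dim)


def autoencoder_training_step(page_encoder, decoder, optimizer, featurize, gui_page):
    element_reps = featurize(gui_page)                    # GUI element representations
    encoding = page_encoder(element_reps)                 # step 1530: encode the page
    reconstruction = decoder(encoding)                    # step 1540: reconstruct it
    n = element_reps.shape[0]
    loss = F.mse_loss(reconstruction[:n], element_reps)   # step 1550: error value
    optimizer.zero_grad()
    loss.backward()                                       # step 1560: back propagation
    optimizer.step()
    return loss.item()
```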
In some implementations, the error value may be computed using graph alignment metrics where each GUI element in the reconstructed GUI page may be aligned with a closest GUI element from the GUI page. In some implementations, local distance metrics may be used, such as by predicting a GUI element given its children and the reconstruction of the GUI element. Any combination of the above may also be used to compute an error value. At step 1560, back propagation is performed using the error value to update the parameters of the GUI page encoding model and/or the GUI decoder component using any appropriate techniques. At step 1570, it is determined whether the training process is complete. If the training process is not complete, then processing continues to step 1520 where another GUI page is selected. If the training process is complete, then processing continues to step 1580. At step 1580, the trained GUI page encoding model is output so that it may be used for applications of processing GUI pages. GUI Element Encodings Above, techniques are described for computing GUI element embeddings for GUI elements, and for computing GUI page encodings for GUI pages. In addition, any of the techniques described above for computing GUI page encodings for GUI pages may also be used to compute GUI element encodings for GUI elements. In some implementations, a first GUI element encoding for a first GUI element may be computed by sequentially processing GUI elements of the GUI page along the path from the root GUI element (e.g., the <html> element of a web page) to the first GUI element. Any appropriate techniques may be used to process the GUI elements along the path to compute the first GUI element encoding, such as any of the techniques described herein. In some implementations, the GUI elements along the path may be sequentially processed with a mathematical model that may be referred to as a GUI element encoding model. A GUI element encoding model may include any appropriate mathematical models, such as a neural network (e.g., a recurrent neural network). The output of the mathematical model after processing the GUI elements along the path may be used as the first GUI element encoding for the first GUI element. The mathematical model may process any appropriate representation of the GUI elements along the path, such as one-hot representations of the GUI elements or GUI element embeddings of the GUI elements. A GUI element encoding model may be trained using any appropriate techniques, such as any of the techniques described herein. GUI page encodings may also be computed using GUI element encodings, instead of or in addition to using GUI element embeddings. Implementation FIG. 16 illustrates components of one implementation of a computing device 1600 for implementing any of the techniques described above. In FIG. 16, the components are shown as being on a single computing device, but the components may be distributed among multiple computing devices, such as a system of computing devices, including, for example, an end-user computing device (e.g., a smart phone or a tablet) and/or a server computing device (e.g., cloud computing). Computing device 1600 may include any components typical of a computing device, such as volatile or nonvolatile memory 1610, one or more processors 1611, and one or more network interfaces 1612. Computing device 1600 may also include any input and output components, such as displays, keyboards, and touch screens. 
Computing device1600may also include a variety of components or modules providing specific functionality, and these components or modules may be implemented in software, hardware, or a combination thereof. Below, several examples of components are described for one example implementation, and other implementations may include additional components or exclude some of the components described below. Computing device1600may have a one-hot representation component1620that may compute one-hot representations of GUI elements using any of the techniques described herein. Computing device1600may have a GUI element embedding training component1621that may train a set of GUI element embeddings from a training corpus using any of the techniques described herein. Computing device1600may have a GUI page encoding training component1622that may train a mathematical model for computing GUI page encodings using any of the techniques described herein. Computing device1600may have a GUI page encoding computation component1623that may compute a GUI page encoding using a mathematical model and using any of the techniques described herein. Computing device1600may have a task processing component1624that may perform a task relating to a GUI page using any of the techniques described herein. Computing device1600may have a page predictor component1625that may predict a subsequent GUI page using any of the techniques described herein. Computing device1600may have an autoencoder component1626that may be used to train a GUI page encoding model using any of the techniques described herein. Computing device1600may include or have access to various data stores. Data stores may use any known storage technology such as files, relational databases, non-relational databases, or any non-transitory computer-readable media. Computing device1600may have a training corpus data store1640that may store a training corpus for training models for GUI element embeddings and GUI page encodings. Computing device1600may have a GUI element embeddings data store1641that may be used to store GUI element embeddings for use in an application relating to GUI pages. It can be seen that the implementations set forth throughout the present disclosure provide technical improvements for rapid and reliable comparison, analysis, adaptation, and/or verification of GUI pages and/or GUI elements. The development of a vector space representation of a GUI, and/or the development of an encoding model for a GUI, facilitate numerous technical improvements over previously known systems and operations. 
Without limitation to any other aspect of the present disclosure, implementations set forth herein provide for: systematic comparison of GUI pages and/or GUI elements (e.g., to ensure that all functions are present and/or available as planned or relative to a target GUI, to compare a competitive or baseline GUI for functionality, capability, and/or look-and-feel); systematic verification of GUI pages and/or GUI elements (e.g., ensuring that requirements are met, to test changes or updates, and/or to ensure that translation to a different context such as a different user device type, operating system, and/or accessing browser or application maintains expected functionality, capability, and/or look-and-feel); to parse portions of a GUI page and/or GUI element (e.g., to facilitate data capture from operations utilizing the GUI, to compare one or more specific aspects of a GUI, and/or to ensure that planned changes or updates to a GUI are unlikely to result in unexpected consequences or interactions); operations to summarize operational and/or aesthetic aspects of a GUI page and/or GUI element; operations to convert a GUI page and/or GUI element from one context to a different context; and/or operations to analyze a GUI page and/or GUI element to define or verify behavior (e.g., ensuring that different user types interacting with the GUI each have the desired GUI experience, capability to perform desired or authorized functions, and/or inability to access or perform undesired or unauthorized functions). It can be seen that the implementations set forth throughout the present disclosure additionally provide for, where desired, algorithmic interaction with a GUI page and/or GUI element, such as with an application or an application programming interface (API). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. “Processor” as used herein is meant to include at least one processor and unless context clearly indicates otherwise, the plural and the singular should be understood to be interchangeable. Any aspects of the present disclosure may be implemented as a computer-implemented method on the machine, as a system or apparatus as part of or in relation to the machine, or as a computer program product embodied in a computer readable medium executing on one or more of the machines. The processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more thread. 
The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the process may be a dual core processor, quad core processors, other chip-level multiprocessor and the like that combine two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the disclosure. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. 
The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the disclosure. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may either be frequency division multiple access (FDMA) network or code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other networks types. The methods, programs codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic books readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. 
The mobile devices may communicate on a peer-to-peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station. The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. 
Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general-purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine-readable medium. The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. While the invention has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present invention is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law. All documents referenced herein are hereby incorporated by reference in the entirety.
67,455
11861379
Corresponding reference characters indicate corresponding parts throughout the drawings. DETAILED DESCRIPTION Aspects of the disclosure are described herein in terms of Human Machine Interfaces (HMI), Supervisory Control and Data Acquisition (SCADA), and/or process automation. One having ordinary skill in the art will understand that the described concepts are not limited to these environments and may apply generally to dynamically composing any runtime graphical user interface by looking for available visual content to display, with that visual content organized in a hierarchy and tagged with metadata. FIG.1illustrates an exemplary generation of graphical user interface components in accordance with an embodiment of the disclosure. Aspects of the disclosure enable automatic navigation through graphical user interface screens of an application based on an asset hierarchy and/or custom hierarchy, placement of graphical components (e.g., visual elements) with a predefined layout format based on the navigation model and metadata information defined on the graphical components, and automatically generate graphical user interface screens to create a data-driven HMI application experience. In an embodiment, a user develops a layout and/or a navigation model during design time in a configuration environment102. For example, a user can create a layout container component that includes a navigation model within. As used herein, the term “navigation model” includes a hierarchy of navigation items built by a user in a configuration environment either from scratch or using an asset model as input. The navigation model drives the composition of the view of visual content during runtime. In addition, the term “layout” includes a definition of how a screen is divided into named panes (e.g., rectangular segments) with the expectation that certain visual content will be placed in certain panes based on a user configuration. The layout container component is responsible for saving and loading the layout and its underlying navigation model. When a user creates a layout, the configuration environment creates a navigation model with one navigation item (e.g., a root navigation item). As used herein, the term “navigation item” includes an individual navigation item that contains properties for visual representation of an asset (e.g., equipment such as valves, tanks, and the like) and actions (e.g., “Show content X in location Y”) to execute when selected by a user in a configuration environment and/or a runtime environment. A navigation item may or may not have a context to an asset model. When a user drags and drops content onto a pane within the layout editor a ShowContentAction is created to show the dropped content on the pane. The ShowContentAction stores information regarding the pane, the layout, and/or the content within the pane. The layout information and navigation model may each be saved by serialization into a blob. As used herein, the term “custom actions” includes actions (e.g., “put content X in pane Y”) created by dragging and dropping content from a toolbox-like area to a specific pane. These actions exist on custom navigation nodes that are manually created. Custom actions may also be referred to as “explicit actions” in one or more embodiments. As used herein, the term “implicit actions” includes actions and/or content that already exist in a hierarchy of content and the auto-fill algorithm is attempting to place the content in a pane that is a best match for the content. 
These actions exist on an included hierarchy of content (e.g., asset hierarchy, equipment hierarchy). As illustrated, the navigation model (e.g., as .aaNav files) is published from the configuration environment102to the runtime environment104as part of a layout (e.g., as XML files) and/or as part of a view application (e.g., ViewApp). In accordance with one or more embodiments, when the application is started in the runtime environment104the startup layouts are loaded and displayed on the configured screens. At that point, the navigation model that drives the visual content is hosted by a view application. In accordance with an aspect of the disclosure, a navigation model has at least two uses, depending on whether it is used with a layout or a view application. When the navigation model is used with a layout, the layout acts as a small widget of behavior with the expectation that the entire widget will be placed in a pane and used during runtime. When the navigation model is used with a view application (e.g., a ViewApp), the view application knows about which screen profile is being targeted, allowing the user to place layouts on screens and then compose the visual content across all of the screens. FIG.2illustrates a system, generally indicated at200, within which an embodiment of the disclosure may be incorporated. The system200in the illustrated embodiment includes a configuration device202, a communications network204, and a runtime device206. The configuration device202includes at least one processor, a memory, a display, and an input/output (I/O) interface and is configured to provide the configuration environment102via a software environment. The memory of configuration device202includes a navigation editor, a navigation model, and hierarchy aware controls. As used herein, the term “navigation editor” includes an editor enabling a user to build a navigation model. And the navigation editor comprises a configuration environment (e.g., a ViewApp Editor, Layout Editor, etc.) in accordance with one or more embodiments. As used herein, the term “hierarchy aware control” includes a control configured in a configuration environment (e.g., design time) and utilized in a runtime environment that knows how to react to a hierarchy (i.e., a navigation model) and displays the requested hierarchy information as required. Exemplary hierarchy aware controls include a tree control, a menu control, a tile list, a tab control, and the like. For instance, trees and menus may show multiple levels of a navigation model while a list or tab control may display one level of the navigation model in accordance with one or more embodiments. In this manner, configuration device202comprises a special-purpose computing device for automated graphical user interface configuration. The one or more processors, memory, display, and I/O interface are communicatively connected and/or electrical connected to each other. The configuration processor is adapted to execute processor-executable instructions stored in the memory for implementing the automated graphical user interface configuration system and method. In an embodiment, the I/O interface is a network interface card (NIC) or modem connecting configuration device202to communications network204. Additionally or alternatively, the I/O interface is a human input device, such as a touchscreen, a mouse, a keyboard, or the like. 
The communications network204is capable of facilitating the exchange of data among various components of system200, including configuration device202and runtime device206. The communications network204in the embodiment ofFIG.2includes a wide area network (WAN) that is connectable to other telecommunications networks, including other WANs or portions of the Internet or an intranet, including local area networks (LANs). The communications network204may be any telecommunications network that facilitates the exchange of data, such as those that operate according to the IEEE 802.3 (e.g., Ethernet) and/or the IEEE 802.11 (e.g., Wi-Fi) protocols, for example. In another embodiment, communications network204is any medium that allows data to be physically transferred through serial or parallel communication channels (e.g., copper wire, optical fiber, computer bus, wireless communication channel, etc.). In an embodiment, communications network204comprises at least in part a process control network. In another embodiment, communications network204comprises at least in part a SCADA system. In yet another embodiment, communications network204comprises at least in part an enterprise manufacturing intelligence (EMI)/operational intelligence (0I) system. The runtime device206includes at least one processor, a memory, a display, and an I/O interface. The memory of runtime device206includes a graphic runtime module, visual content, a screen profile, hierarchy aware controls, a navigation model, and a layout. In an embodiment, the term “screen profile” includes a definition of how a set of screens are physically composed across display devices and the term “graphic runtime modules (GRM)” includes an engine by which HMI graphical components are rendered to a screen on a display device with real-time data values from process control devices. In this manner, runtime device206comprises a special-purpose computing device for automated graphical user interface configuration. The one or more processors, memory, display, and I/O interface are communicatively connected and/or electrical connected to each other. The processor is adapted to execute processor-executable instructions stored in the memory for implementing the automated graphical user interface configuration system and method. In an embodiment, the I/O interface is a network interface card (NIC) or modem connecting runtime device206to communications network204. Additionally or alternatively, the I/O interface is a human input device, such as a touchscreen, a mouse, a keyboard, or the like. FIG.3illustrates a navigation model, generally indicated at300, in accordance with an embodiment. The navigation model300is comprised of navigation items302. In configuration environment102(e.g., a ViewApp, Layout, etc.) navigation model300defines the navigation through display screens for an end-user (e.g., runtime) experience. Each navigation item302holds a configuration that allows the navigation item to express itself visually in several ways, ranging from simple text to complex visual content representations. Each navigation item302also holds properties to toggle visibility, checked state, and like properties. Each navigation item302is configured to hold one or more actions to execute when selected by a user. In an embodiment, configuration environment102includes a single navigation model300that may be customized to meet user needs. 
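Purely as an illustration of the data involved, a navigation item that bundles visual properties, an optional asset context, actions, and child items, together with a navigation model that holds one or more root items, might be sketched as the following Python data classes. The field layout and names are assumptions for this sketch and do not represent the serialized form (e.g., the .aaNav blob mentioned above) used by the configuration environment.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ShowContentAction:
    """'Show content X in location Y': stores the content, the target pane, and
    optionally the layout that supplies the pane."""
    content: str
    pane: str
    layout: Optional[str] = None


@dataclass
class NavigationItem:
    name: str
    visible: bool = True                 # visibility, checked state, and similar properties
    checked: bool = False
    asset_context: Optional[str] = None  # optional context into an asset model
    actions: List[ShowContentAction] = field(default_factory=list)
    children: List["NavigationItem"] = field(default_factory=list)
    parent: Optional["NavigationItem"] = field(default=None, repr=False, compare=False)

    def add_child(self, child: "NavigationItem") -> "NavigationItem":
        child.parent = self
        self.children.append(child)
        return child


@dataclass
class NavigationModel:
    roots: List[NavigationItem] = field(default_factory=list)  # multiple roots are allowed
```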
The navigation model 300 may include multiple root nodes that serve as different visual representations of the application in a single navigation model. The navigation model 300 defines the navigation hierarchies, references to visual content utilized by navigation aware controls, and the actions to execute when selected. These behaviors are separated from the actual control which will present the navigation model 300 data to the user. This separation allows runtime environment 104 to visually express navigation model 300 in multiple ways at runtime. The power of this multiple expression behavior becomes more apparent when multiple hierarchy aware controls are used together to represent a single navigation model 300 to the user. In an embodiment, navigation model 300 is not an individual item in a project, but is part of one or more objects in the project. For example, navigation model 300 may be serialized with a layout object, serialized with a ViewApp object, and the like. The navigation model 300 travels with the import/export of the object with which it is serialized. In runtime environment 104, all published navigation models 300 from configuration environment 102 (e.g., various layouts and the single ViewApp) reside within a published application folder. Navigation aware controls request specific navigation items 302 based on human readable path strings which are references into particular hierarchies of navigation model 300. The runtime environment 104 executes requests from the navigation aware controls to return results based on the request. In an embodiment, navigation aware controls are unaware of which specific navigation model 300 they are using, but rely on the path string that identifies the link into a particular navigation model's hierarchy. During runtime, a navigation model path string may be injected into a "NavigationModel" property of the control and the control must react accordingly based on the control's configuration. Examples of URI usage are further described herein. Each individual navigation item 302 in navigation model 300 contains the necessary information to display a visual representation of an item in various forms, and one or more actions executed when a user triggers (e.g., selects via a GUI on a display device) the navigation item. In an embodiment, navigation items 302 are "all inclusive." In another embodiment, navigation items 302 include the ability to be all inclusive regardless of which type of hierarchy aware control is being used to represent the navigation item. The navigation item 302 does not need to support all visual representations possible. The navigation items 302 may be organized (e.g., by a user) in configuration environment 102 in a single flat list or in a hierarchy to comprise navigation model 300. There is no difference between a navigation item that is a leaf node as compared to a navigation item that contains child navigation items. In other words, navigation items containing other navigation items can also trigger actions when selected and are not simply containers for navigation items. In an embodiment, navigation items 302 have a context to the asset model. In this embodiment, a user can override default values of the asset model and tailor them for usage in navigation model 300 (e.g., override name "WSTINTANK_004" with "West InTake Tank 4"). In another embodiment, navigation items 302 have a context into a visual toolbox (GTB). In yet another embodiment, navigation items 302 are query related and evaluated at runtime. 
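Continuing the illustrative classes above, the human readable path strings that navigation aware controls use to reference into a navigation model hierarchy might be resolved with a simple walk such as the one below. The '/'-separated path syntax and the function name resolve_path are assumptions for this sketch.

```python
from typing import Optional


def resolve_path(model: NavigationModel, path: str) -> Optional[NavigationItem]:
    parts = [p for p in path.split("/") if p]
    if not parts:
        return None
    # The first segment selects a root node; each remaining segment selects a child by name.
    current = next((r for r in model.roots if r.name == parts[0]), None)
    for name in parts[1:]:
        if current is None:
            return None
        current = next((c for c in current.children if c.name == name), None)
    return current


# Example: resolve_path(model, "Plant/Area 1/West InTake Tank 4")
```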
Exemplary query navigation items include a content most recently used (MRU) query that returns the last number of items viewed by the user and a query to return all specific content types for assets in alarm. As used herein, the term “content type” includes a defined value in a project that acts as a tag to define the nature and usage of a particular visual content component (e.g., an asset) in the project and the term “content types” includes a list of content types available to a user for tagging content (e.g., metadata to describe content in the project). And the term “visual content” as used herein includes a graphical (i.e., visual) widget that may be displayed on a screen by a display device. Exemplary graphical widgets include HMI graphical components, Windows Presentation Foundation (WPF) controls, a WinForms control, a web snippet, a document, and the like. Referring further to the embodiment ofFIG.3, navigation items302include an auto-fill behavior. In this embodiment, each navigation item302controls whether available panes on currently displayed layouts should be auto-filled. Before an auto-fill algorithm (described further herein) can execute, the appropriate layouts must be applied to the available screens. The layouts are determined by looking at the current navigation item302to determine if there are any specific layouts that should be applied to the screens. If any screens are still empty without layouts, the parent navigation item302is queried to continue to search for layouts appropriate for screens. This process continues until the root node of the navigation hierarchy is reached, or all screens have layouts. In an embodiment, it is possible for a user to explicitly mark a screen as empty without a layout in configuration environment102(e.g., design time). In this embodiment the layout on the screen is closed. Once the appropriate layouts are determined, the auto-fill algorithm executes against the available layouts. Auto-filling includes one or more options to control how content is automatically injected into panes without the user having to individually configure the view. In a “none” mode, no content is automatically displayed unless specific commands have been created to display content. In the “none” mode the user has to manually assign the content to specific panes via drag/drop operations. Any commands in parent navigation items or child navigation items are not considered to be used to fill in screens or panes. The “none” mode is available on custom navigation items302. In a “fill from current” mode, the user assigns layouts to screens, but any content on the selected navigation item302would automatically be filled into the available panes using a best available pane algorithm. In a “fill from current and look up” mode, for example, the auto-fill algorithm looks for layouts that initially fill the screens prior to determining which content should be displayed in particular panes. The current navigation item302is inspected for any commands that place layouts or contents in a full screen. For any screens that are still empty, the auto-fill algorithm looks up to the parent navigation item302for additional Showlayout or ShowScreen commands to apply to the empty screens. Once the contents have been established on screens, any content residing on the current navigation item302is displayed on the screen using a best available pane algorithm for each individual piece of content. 
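Before the pane-filling step, which continues below, the layout-selection walk just described, starting at the current navigation item and querying parents until every screen has a layout or the root is reached, might be sketched as follows. ShowLayoutAction, the layout_actions attribute, and the parent link are assumptions for this sketch.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ShowLayoutAction:
    screen: str   # a screen of the screen profile
    layout: str   # the layout to apply to that screen


def determine_layouts(item, screens: Dict[str, Optional[str]]) -> Dict[str, Optional[str]]:
    """screens maps each screen name to its assigned layout, or None while still empty."""
    current = item
    while current is not None and any(v is None for v in screens.values()):
        for action in getattr(current, "layout_actions", []):
            if screens.get(action.screen) is None:
                screens[action.screen] = action.layout
        current = current.parent   # query the parent navigation item next
    return screens
```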
For any panes that are still empty, the auto-fill pane algorithm looks up to the parent navigation item302for additional commands/content to apply to the empty panes. This process continues until all panes are filled or the root of navigation model300is reached. If the auto-fill pane algorithm intends to replace pane content with exactly the same content it already has, that command is skipped so the existing content is not disturbed. In an embodiment, some panes are not filled and are made empty. A “fill from current and look up then down” mode can be configured as identical to the “fill from current and look up” mode with one addition. If the auto-fill pane algorithm has reached the root of navigation model300and there are still empty screens/panes, the first child of the current navigation item302is inspected to see if it has content that applies to the available screens/panes. This process continues down the navigation model300using the first child at each level until a leaf node is hit, then the auto-fill algorithm stops. In an embodiment, the “fill from current and look up then down” mode is a default mode. The auto-fill algorithm makes no distinction between explicit or implicit commands on any navigation item302when deciding whether to use the content of the action to place in a pane. This lack of distinction also holds true for when the auto-fill algorithm is looking upward or downward in the navigation model300. Explicit actions are triggered by the presence of a pane name on a screen for placement. Regardless of what layout is actually on the screen, the auto-fill algorithm will use an explicit action on any level of the navigation model300if the pane name specified in the action is present on the target screen and the pane is available to receive the content. A single navigation item may have multiple explicit actions targeted at a single pane along with other explicit actions at different levels of the navigation model300if the auto-fill mode is “fill from current and look up” or “fill from current and look up then down” and the explicit actions use the pane name. Implicit actions are triggered by an available pane that matches the content type of the content. A multi-content pane can be the destination of multiple implicit action calls while executing the auto-fill algorithm. For instance, if a pane is configured to host multiple contents and tagged to host overview type pieces of content, and there is a hierarchy of L1, L2, and L3 navigation items302with each navigation item having an overview type content, then the overview pane would display three pieces of content. In an embodiment with multi-content panes (e.g., a single pane filled with content from different levels in navigation model300) the order of the content is dependent upon how it was filled with the auto-fill algorithm. Content types enable a user to tag visual content for specific purposes. Exemplary content types include “Faceplate”, “Overview”, and the like. Content types enable a user to control where content of a particular type is displayed within runtime environment104. In an embodiment, a user can tag content on assets with certain content types so they are automatically utilized when the asset model is utilized in navigation model300. At a high level, a hierarchy control (e.g., hierarchy aware control) reads and/or is injected with the navigation model it should display and displays a particular portion of navigation model300based on its configuration. 
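A hedged sketch of the pane auto-fill behavior described above is shown below for the "fill from current", "fill from current and look up", and "fill from current and look up then down" modes; multi-content panes and the rule that skips re-placing identical content are omitted for brevity. The content_actions attribute and the action fields (content, content_type, and an optional explicit pane name) are assumptions for this sketch.

```python
def auto_fill_panes(item, panes, mode="fill_from_current_and_look_up_then_down"):
    """panes maps a pane name to {"type": content_type, "content": None or content}."""

    def apply(level):
        for action in getattr(level, "content_actions", []):
            for name, pane in panes.items():
                if pane["content"] is not None:
                    continue                                   # pane already filled
                explicit = action.pane is not None and action.pane == name
                implicit = action.pane is None and action.content_type == pane["type"]
                if explicit or implicit:
                    pane["content"] = action.content

    if mode == "none":
        return panes                                           # only manual placement
    level = item
    while level is not None and any(p["content"] is None for p in panes.values()):
        apply(level)                                           # current item, then parents
        level = level.parent if "look_up" in mode else None
    if mode.endswith("then_down"):
        level = item.children[0] if item.children else None
        while level is not None and any(p["content"] is None for p in panes.values()):
            apply(level)                                       # first child at each level
            level = level.children[0] if level.children else None
    return panes
```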
A particular hierarchy aware control knows how many levels of the hierarchy (e.g., of navigation model300) to display based on its configuration. For instance, a tree control has the ability to display multiple levels of navigation model300but a grid control may only be able to display a single level. For controls that can display multiple levels, one or more properties control how many levels it displays from navigation model300. Moreover, each hierarchy aware control acts in a publish/subscribe (“pub/sub”) model with other tightly coupled hierarchy aware controls to allow multiple controls to visualize a single navigation model300. In an embodiment, only one navigation item302in the runtime navigation model300for a ViewApp is active at a time. The manner in which that single navigation item is selected can happen via various methodologies (e.g., mouse, keyboard, touchscreen, etc.). In an embodiment, a user may link two or more hierarchy controls together to achieve advanced behaviors in runtime environment104. Linking hierarchy controls enables multiple navigation controls to visually express a single navigation model300in multiple ways. Within configuration environment102(e.g., at design time) a user may point the navigation model300of one hierarchy aware control to an existing hierarchy aware control.FIG.4illustrates an exemplary embodiment of wiring together multiple navigation aware controls. When a currently selected navigation item302changes in one or more ways a navigation model service is notified. The navigation model service then looks at all subscribers to the service. The selection change is sent to the first subscriber of the navigation model300, which in the illustrated embodiment is a first hierarchy aware control402(e.g., “HierarchyAwareControll”). The first hierarchy aware control402is linked directly against a node in navigation model300and performs a linking process. The first hierarchy aware control402determines whether it holds the selected navigation item302itself, and if so, visually selects it. If there are any subscribers to the first hierarchy aware control402it publishes the new selected node at the current level. When the first hierarchy aware control402holds the selected navigation item302the current selection was changed and the downstream second hierarchy aware control404refreshes to reflect this new value. After refreshing the downstream hierarchy aware control by reading the children and populating the view the process begins again. Referring further toFIG.4, if the first hierarchy aware control402determines it is not holding the selected navigation item302itself the first hierarchy aware control402determines if it is holding a parent of the selected navigation item302. If it is holding a parent of the selected navigation item302it visually selects the parent. If there are any subscribers to the first hierarchy aware control402it publishes the new selected node at the current level. If the first hierarchy aware control402determines it is holding neither the selected navigation item302nor a parent of the selected navigation item302the first hierarchy aware control402maintains the existing selection and takes no further action (e.g., does not forward the OnSelectedltemChanged to any subscribing controls). FIG.5illustrates an exemplary view application editor500GUI according to an embodiment. In an embodiment, the view application editor500includes a navigation model editor, which is an editor for creating and modifying navigation model300. 
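For illustration, the selection hand-off of FIG. 4 might be sketched as below: a navigation model service notifies its first subscriber, and each hierarchy aware control either selects the item it holds, selects the item's parent, or leaves its selection untouched, forwarding the change to its own subscribers only in the first two cases. The class names and the list-based subscription mechanism are assumptions for this sketch.

```python
class HierarchyAwareControl:
    def __init__(self, name, items_held):
        self.name = name
        self.items_held = list(items_held)    # navigation items this control displays
        self.subscribers = []
        self.selected = None

    def subscribe(self, control):
        self.subscribers.append(control)

    def on_selected_item_changed(self, item):
        if item in self.items_held:
            self.selected = item              # holds the item itself: select it
        elif item.parent is not None and item.parent in self.items_held:
            self.selected = item.parent       # holds the parent: select the parent
        else:
            return                            # neither: keep selection, do not forward
        for sub in self.subscribers:
            sub.on_selected_item_changed(item)


class NavigationModelService:
    def __init__(self):
        self.subscribers = []

    def notify_selection_changed(self, item):
        if self.subscribers:
            self.subscribers[0].on_selected_item_changed(item)
```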
The editor500works in configuration environment102to present a “what you see is what you get” (WYSIWYG) experience to build the visual representation of what is displayed at each level in the navigation model300. The view application editor500includes a representation1of navigation model300holding a hierarchy of navigation items302, a representation2of auto-fill behavior specific to the selected navigation item302, a list3of actions for showing which content will be placed in which pane, an overview4of screens that are part of the screen profile (e.g., ViewApp), a representation5of the current layout being used on the current screen, a representation6of which layouts are being utilized on which screens, and a WYSIWYG visual representation7of how the content is applied to the current screen using the current layout. FIG.6illustrates an exemplary layout600of a graphical user interface screen of runtime environment104according to an embodiment. The layout600includes a navigation model pane602, a unit overview pane604, an equipment pane606, a devices pane608, and a unit alarms pane610. The view application editor500is a view composition engine that drives the final presentation of the layout600to the screen. To fully compose layout600actions from multiple levels of navigation model300are combined together to fill as many panes as possible based on the currently selected navigation item302. This implies that what the user sees at any particular moment is a combination of content which is shown by the currently selected navigation item and other content shown by the parent nodes and first child nodes (e.g., when the auto-fill mode is “up and down”) of the currently selected navigation item. When a navigation item302triggers opening one or more pieces of content to panes, a process begins to fill the panes. All screens across the screen profile are considered empty. Layouts are determined across the available screens using inheritance. Thus, if the current level of the navigation model300specifies a layout for use on a screen then it is applied to that screen. If any screens are left empty the parent navigation item302is queried to determine if a layout is specified for that screen. All panes across all now-determined layouts are assumed to be empty while the composition engine determines what should be visible in each pane. The content is shown in the appropriate pane(s) given the pane type or a direct reference to the pane. Layouts that have pre-existing content coming from the layout would not be considered for a pane type match. Other panes that do not display content are enumerated to retrieve their individual types. For each pane type, the current level in the navigation model300is queried to see if any explicit actions exist that specify a pane name that exists in the destination screen and/or if any implicit actions exist based on content of that type. In an embodiment, this request is used for navigation items302when the navigation item has a context into the navigation model300because a navigation item302may have several different types of content on it (e.g., overview, faceplate, etc.). If any actions are found at that level of navigation model300, they are displayed in the panes that were determined across the available screens using inheritance. For empty panes, the parent of the current navigation item302is retrieved and the search starts again, attempting to satisfy all the content types of the pane. 
The search ends when either all content is found for the panes or the root of navigation model300is reached. In an embodiment in which there are multiple panes with the same content type the pane is chosen based on the alphabetical order of the pane name. Referring further toFIG.6, navigation model pane602holds a tree control with the entire asset model of L1, L2, and L3 displays in an embodiment. The unit overview pane604includes an L1 display of primary graphic overviews. The equipment pane606includes an L2 display of groups of equipment graphics. The devices pane608includes an L3 display of faceplates for a specific device. When a user selects a navigation item302in navigation model pane602on a screen of runtime device206that opens an L1 display then the L1 display is opened in unit overview pane604. The runtime device206then looks for L2, L3 and embedded alarm control (EAC) content on the L1 navigation item. When runtime device206finds no L2, L3, or EAC content the process ends. When runtime device206finds L2, L3, and/or EAC content then the content is displayed in each corresponding pane. When a user selects a navigation item302in navigation model pane602on a screen of runtime device206that opens an L2 display then the L2 display is opened in equipment pane606. The runtime device206then looks for L1, L3, and EAC content on the L2 navigation item. When runtime device206finds L1, L3, and/or EAC content then the content is displayed in each corresponding pane. When runtime device206finds no L1, L3, or EAC content, runtime device206goes up a level from the L2 navigation item to the L1 navigation item and looks for L1, L3, and/or EAC content on the L1 navigation item. When runtime device206finds an L1 display the L1 display is displayed in unit overview pane604. When a user selects a navigation item302in navigation model pane602on a screen of runtime device206that opens an L3 display then the L3 display is opened in devices pane608. When runtime device206finds L1, L2, and/or EAC content then the content is displayed in each corresponding pane. When runtime device206finds no L1, L2, or EAC content, runtime device206goes up a level from the L3 navigation item to the L2 navigation item and looks for L1, L2, and/or EAC content on the L2 navigation item. When runtime device206finds an L2 display the L2 display is displayed in equipment pane606. The runtime device206then goes up a level from the L2 navigation item to the L1 navigation item and looks for L1 and/or EAC content on the L1 navigation item. When the runtime device206finds an L1 display the L1 display is displayed in unit overview pane604. A ViewApp follows the same pattern as the layout described above, but is at an abstraction level higher than the layout. At the layout level, the only visible places to place content are in panes602-610. At the ViewApp level, the initial places to put content are based on the screen profile supported by the ViewApp. The ViewApp must first specify which layouts belong on which screen profiles. Commands executed after that would populate the panes602-610in each of the layouts. When one of panes602-610has hierarchy navigation enabled, a user may swipe up to move to the previous navigation item302or swipe down to move to the next sibling navigation item302. In an embodiment, panes602-610each include small indicators (e.g., arrows) at the top and bottom of the pane that accomplish the same behavior when selected (e.g., clicked). 
The top indicator holds the name of the previous navigation item 302 and the count of previous navigation items, and the bottom indicator holds the name of the next navigation item 302 and the count of next navigation items to aid the user. Selection of the up/down indicators triggers activation of a new navigation item 302. When a particular pane 602-610 has been populated with a piece of content, the pane must know which navigation item 302 caused that content to end up in the pane. For example, to process an up to down swipe the pane will need to determine from navigation model 300 the previous sibling navigation item 302 and the content that will end up in this pane to process the visual slide. In an embodiment, the up to down swipe will execute the navigation item in the same manner as a click (e.g., via a mouse) on a navigation aware control to trigger the same item. To know which visual content to slide in, the pane forwards a request for content based on the behaviors of the previous sibling navigation item 302 (i.e., asking the previous navigation item “what do you expect is in this pane when you are triggered?”). The result of the request is visually slid into the pane. Panes 602-610 in layout 600 may be filled in using the auto-fill algorithm described herein. In a single layout 600 the content displayed in panes 602-610 may come from different navigation items 302. Using hierarchy navigation in any pane 602-610 will utilize, for swiping behavior, the specific navigation item 302 which placed that original content in the pane. When a particular pane 602-610 is set to a “multiple” mode and has been populated with multiple pieces of content, content navigation enables a user to move through the multiple pieces of content in the pane. In an embodiment, content navigation is accomplished by swiping left or right in the pane to move to the previous sibling content and next sibling content, respectively. In another embodiment, the pane displays swipe indicators to allow users to see which swipes are available along with a count of the number of previous and/or next sibling contents as a reference. In an embodiment, panes 602-610 each include small indicators (e.g., arrows) at the left and right of the pane that accomplish the same behavior when selected (e.g., clicked). For instance, in a multi-content pane the left/right indicators move back and forth to visibly display one piece of content at a time. Selection of the left/right indicators cycles through the pieces of content rather than displaying and/or triggering any navigation items. FIGS. 7A-7F illustrate the auto-fill algorithm in accordance with an embodiment. The algorithm determines how panes 602-610 will be filled when actions on a selected navigation item 302 are executed. In accordance with one or more embodiments, the algorithm considers factors such as the content type of contents and available panes 602-610, existing content on layout 600, and/or clearing of contents after execution of the algorithm. The algorithm enables the automatic filling of panes through navigation in both configuration environment 102 and runtime environment 104. In the exemplary embodiment described below, a processor of runtime device 206 executes processor-executable instructions stored in the memory to perform the auto-fill algorithm via runtime environment 104. Referring to FIG. 7A, upon selection of a displayed navigation item 302 the auto-fill algorithm sets the selected navigation item 302 on the navigation model 300 at step 702.
At step704, the auto-fill algorithm fetches all screens for the current screen profile. The auto-fill algorithm then gets all layouts at step706. In an embodiment, step706includes subprocess706-A in which the auto-fill algorithm gets layouts associated with the currently selected navigation item302at step706-B, determines whether a screen is available at the required level for the selected navigation item302at step706-C, and gets all layouts for the available screen from parent navigation items of the currently selected navigation item within the navigation model300at step706-D. The auto-fill algorithm then gets all unlocked panes at step708. In an embodiment, step708includes walking the screens and enumerating all of the panes. Generally, panes are unlocked but if content was placed in a pane at the layout level then it would be considered locked. In an embodiment in which a user has configured a pane to not participate in auto-fill that pane will also be considered locked. At step710, the auto-fill algorithm displays the layouts. In an embodiment, step710includes subprocess710-A in which the default content of the layouts are displayed at step710-B and/or un-required layouts from previous auto-fill executions are closed at step710-C. The auto-fill algorithm then determines which contents to display at step712. Referring toFIG.7B, the auto-fill algorithm determines the auto-fill mode at step714. When no auto-fill mode is selected, the auto-fill algorithm gets all custom actions on the current level (e.g., level of selected navigation item302) at step716and executes the content actions at step718before ending. When the auto-fill mode is “up,” “up and down,” or “current” the auto-fill algorithm performs different processes that include common steps. Referring toFIG.7C, in which the auto-fill mode is “up,” the algorithm gets all custom actions on the current level at step716. At step720, the algorithm determines whether any empty panes are available to fill with visual content. If no empty panes are available, the algorithm stops and executes the content actions at step718(FIG.7B) before ending. When there are empty panes available, the algorithm gets all implicit actions on the current level at step722. Then for each action, the algorithm performs a subprocess that includes steps724,726,728, and730. At step724, the algorithm gets the content type for the current implicit action. The algorithm determines, at step726, whether matching content type is available. When matching content type is available, the algorithm associates the implicit action to this pane at step728. When matching content type is unavailable, the algorithm skips this implicit action as shown at step730. After completing the subprocess for each implicit action, the algorithm determines whether there are any empty panes available at step732. When no panes are available, the algorithm stops and executes the content actions at step718before ending. When at least one empty pane is available, the algorithm attempts to get the parent level of the currently selected navigation item302at step734. When the algorithm determines, at step736, that no parent level can be found, the algorithm stops and executes the content actions at step718before ending. When a parent level is found the algorithm gets the parent level and sets it as the current level for which to process the actions at step738before continuing back to step716. 
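Relating back to step 708, the filter over unlocked panes can be illustrated as follows; the two per-pane flags are assumptions introduced only to express the two locking conditions described above (content placed at the layout level, or a pane configured not to participate in auto-fill):

```python
def unlocked_panes(panes):
    """Return the panes that participate in auto-fill (step 708); each pane is a dict here."""
    return [p for p in panes
            if p.get("layout_content") is None            # no content placed at the layout level
            and p.get("participates_in_autofill", True)]  # user has not opted the pane out
```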
Referring toFIGS.7D and7E, in which the auto-fill mode is “up and down,” the algorithm performs steps716and720-736as described above with respect to the “up” mode. But when the algorithm determines, at step736, that a parent level is found the algorithm gets the parent level and sets it as the current level for which to process the actions at step738before continuing back to step722. When no parent level can be found, the algorithm goes to the first child navigation item of the selected navigation item302within the navigation model300at step740. When no first child navigation item is found at step742, the algorithm stops and executes the content actions at step718before ending. When the first child navigation item is found at step742, the algorithm sets the first child navigation item as the current level for which to process the actions at step744before continuing back to step722. Referring toFIG.7F, in which the auto-fill mode is “current,” the algorithm performs steps716and720-730as described above with respect to the “up” mode. As described herein, a ViewApp contains a navigation model that defines navigation of an HMI application. The navigation model defines navigation hierarchies, references to graphical components (e.g., visual content) within the hierarchy, and the actions (e.g., show visual content) to execute when a particular hierarchical item (e.g., navigation item) is selected by a user during design time and/or runtime. These behaviors are separated from the display of the navigation model, which allows the same navigation model to be displayed in multiple forms during runtime. When a user selects a navigation item (i.e., navigates) the computing device executing the processor-executable instructions automatically places associated graphical components (e.g., visual content) in appropriate locations (e.g., panes) on the screen based on the auto-fill algorithm. The auto-fill algorithm traverses the navigation model hierarchy to discover content with appropriate metadata (e.g., content type, etc.). Moreover, upon selection of a navigation item the computing device executing the processor-executable instructions sets context attributes that are utilized to update existing and newly added content. In an exemplary embodiment, when a user selects a navigation item (e.g., asset) in the hierarchy of the navigation model the computing device executing the processor-executable instructions automatically places the content in each pane of a layout and updates the context so all of the panes in the layout display content representing data associated with the selected navigation item. For example, if a user selects a “Reactor West” navigation item the computing device executing the processor-executable instructions displays symbols and trends associated with “Reactor West” and an alarm display pane reacts to the context change and displays alarms from “Reactor West.” In an embodiment in which the computing device includes multiple screens (e.g., multiple monitors) the computing device executing the processor-executable instructions automatically manages content placement across all screens. In addition to the embodiment described above with respect toFIG.2, embodiments of the present disclosure may comprise a special purpose computer including a variety of computer hardware, as described in greater detail below. 
Embodiments within the scope of the present disclosure also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a special purpose computer. By way of example, and not limitation, computer-readable storage media include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media are non-transitory and include, but are not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disks (DVD), or other optical disk storage, solid state drives (SSDs), magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and that can be accessed by a general purpose or special purpose computer. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The following discussion is intended to provide a brief, general description of a suitable computing environment in which aspects of the disclosure may be implemented. Although not required, aspects of the disclosure will be described in the general context of computer-executable instructions, such as program modules, being executed by computers in network environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps. Those skilled in the art will appreciate that aspects of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Aspects of the disclosure may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. 
In a distributed computing environment, program modules may be located in both local and remote memory storage devices. An exemplary system for implementing aspects of the disclosure includes a special purpose computing device in the form of a conventional computer, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory includes nonvolatile and volatile memory types. A basic input/output system (BIOS), containing the basic routines that help transfer information between elements within the computer, such as during start-up, may be stored in ROM. Further, the computer may include any device (e.g., computer, laptop, tablet, PDA, cell phone, mobile phone, a smart television, and the like) that is capable of receiving or transmitting an IP address wirelessly to or from the internet. The computer may also include a magnetic hard disk drive for reading from and writing to a magnetic hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and an optical disk drive for reading from or writing to removable optical disk such as a CD-ROM or other optical media. The magnetic hard disk drive, magnetic disk drive, and optical disk drive are connected to the system bus by a hard disk drive interface, a magnetic disk drive-interface, and an optical drive interface, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer. Although the exemplary environment described herein employs a magnetic hard disk, a removable magnetic disk, and a removable optical disk, other types of computer readable media for storing data can be used, including magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAMs, ROMs, SSDs, and the like. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. One or more aspects of the disclosure may be embodied in computer-executable instructions (i.e., software), routines, or functions stored in system memory or nonvolatile memory as application programs, program modules, and/or program data. The software may alternatively be stored remotely, such as on a remote computer with remote application programs. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer executable instructions may be stored on one or more tangible, non-transitory computer readable media (e.g., hard disk, optical disk, removable storage media, solid state memory, RAM, etc.) and executed by one or more processors or other devices. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. 
In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, application specific integrated circuits, field programmable gate arrays (FPGA), and the like. The computer may operate in a networked environment using logical connections to one or more remote computers. The remote computers may each be another personal computer, a tablet, a PDA, a server, a router, a network PC, a peer device, or other common network node, and typically include many or all of the elements described above relative to the computer. The logical connections include a local area network (LAN) and a wide area network (WAN) that are presented here by way of example and not limitation. Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets and the Internet. When used in a LAN networking environment, the computer is connected to the local network through a network interface or adapter. When used in a WAN networking environment, the computer may include a modem, a wireless link, or other means for establishing communications over the wide area network, such as the Internet. The modem, which may be internal or external, is connected to the system bus via the serial port interface. In a networked environment, program modules depicted relative to the computer, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing communications over wide area network may be used. Preferably, computer-executable instructions are stored in a memory, such as the hard disk drive, and executed by the computer. Advantageously, the computer processor has the capability to perform all operations (e.g., execute computer-executable instructions) in real-time. The order of execution or performance of the operations in embodiments illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure. Embodiments may be implemented with computer-executable instructions. The computer-executable instructions may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. When introducing elements of aspects of the disclosure or the embodiments thereof, the articles “a”, “an”, “the” and “said” are intended to mean that there are one or more of the elements. The terms “comprising”, “including”, and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. 
Having described aspects of the disclosure in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the disclosure as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
52,425
11861380
DETAILED DESCRIPTION Various examples are directed to systems and methods for enabling a robust implementation of an application and the functionalities thereof throughout a group-based communication system by facilitating the production, centralization, and organization of application-specific data defined by uniform system-defined specifications. In at least one example, the group-based communication system can be a channel-based messaging system or any other system that facilitates communication between users in a same organization and/or between users across organizations. A first embodiment is directed to a system configured for consolidating application data associated with an application within a group-based communication interface, the system comprising at least one processor, and at least one non-transitory memory comprising instructions that, with the at least one processor, cause the system to: upon detecting a trigger event associated with an application via a group-based communication interface, retrieve application data associated with the application from a group-based communication repository, the application data comprising application contextual data and application home interface contextual data; generate an application home interface associated with the application based at least in part on the application data retrieved from the group-based communication repository, the application home interface comprising: one or more application home interface pages configured to display at least a portion of the application data; and one or more executable processing action elements, each executable processing action element being associated with a respective processing action of the application and configured to initialize the processing action respectively associated therewith upon a selection of the executable processing action elements by a client device; and transmit the application home interface to the client device for rendering within the group-based communication interface via a display device of the client device. In various examples, the application home interface further comprises a user engagement pane configured to display application data received from an application system associated with the application; wherein the user engagement pane is configured to reflect execution of one or more user engagement pane instructions associated with the application, the one or more user engagement pane instructions corresponding to one or more functionalities of the application. In some examples, the one or more processors are further configured to: receive from the application system a notification signal associated with at least one of the one or more user engagement blocks defined by the user engagement pane upon the occurrence of a notification triggering event, the notification signal corresponding at least in part to the one or more user engagement pane instructions associated with the at least one of the user engagement blocks, wherein the notification triggering event is based at least in part on user engagement with the application home interface; and render within the user engagement pane the notification signal associated with at least a portion of the application data and the at least one user engagement blocks. 
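The shape of such an application home interface, and the rendering of a notification signal into a user engagement block, might be sketched as follows; the dataclass names and fields are illustrative assumptions rather than the system's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EngagementBlock:
    block_id: str
    instructions: str                 # user engagement pane instructions tied to an app functionality
    rendered: List[str] = field(default_factory=list)

@dataclass
class ApplicationHomeInterface:
    app_id: str
    pages: List[str]                                            # application home interface pages
    action_elements: List[str] = field(default_factory=list)    # executable processing action elements
    engagement_blocks: List[EngagementBlock] = field(default_factory=list)

def render_notification(home: ApplicationHomeInterface, block_id: str, signal: str) -> None:
    """Reflect a notification signal received from the application system in the matching block."""
    for block in home.engagement_blocks:
        if block.block_id == block_id:
            block.rendered.append(signal)
```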
In various examples, each of the one or more executable processing action elements is arranged relative to one another based at least in part on an executable processing action element priority order, wherein the executable processing action element priority order defines the organization of each of the one or more executable processing action elements relative to the other executable processing action elements. Further, in some examples, the executable processing action element priority order is based at least in part on at least one of environmental contextual data and application contextual data associated with a user identifier associated with the client device. In some examples, the one or more processors are further configured to: receive from the client device a selection of an executable processing action element of the one or more executable processing action elements displayed within the application home interface; upon receipt of a selection of an executable processing action element, retrieve processing action data associated with the selected processing action from the group-based communication repository; generate a processing action execution data packet comprising processing action routing data and payload data, the processing action routing data is generated based at least in part on the processing action data and identifies (1) a processing action to be performed by the application system and (2) a client token identifying a client device requesting the processing action data, and the payload data comprising processing action execution data; and provide the processing action execution data packet to the application system to enable the application system to execute the selected processing action based at least in part on the payload data. In various examples, the one or more processors are further configured to: retrieve environmental contextual data generated for the client device from the group-based communication repository; and populate at least a portion of an interactive dialog based at least in part on the environmental contextual data. In various examples, the one or more processors are further configured to: upon detecting a trigger event associated with the application home interface via the group-based communication interface, determine an initial page of the one or more application home interface pages for display based at least in part on detection of one or more of a previously unvisited indicator, a previously visited indicator, and an unread message indicator associated with the application home interface and a user identifier associated with the client device; and wherein transmitting the application home interface to the client device for rendering within the group-based communication interface via the display device of the client device comprises causing display of the initial page within the application home interface. In various examples, determining an initial page of the one or more application home interface pages for display comprises: detecting an unread message indicator associated with the application home interface identifier and the user identifier; upon detecting an unread message indicator associated with the application home interface identifier and the user identifier, selecting an application home interface message page as the initial page.
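As a sketch of the selection-to-execution flow described above, the following assumes a repository object with a hypothetical lookup_processing_action method and shows the pairing of routing data (the action to perform plus a client token identifying the requesting device) with payload data carrying the execution details:

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ProcessingActionExecutionPacket:
    routing: Dict[str, Any]   # identifies the action to perform and a client token for the requester
    payload: Dict[str, Any]   # processing action execution data

def handle_action_selection(repository, action_id: str, client_token: str, context: Dict[str, Any]):
    # Retrieve processing action data for the selected action from the repository.
    action_data = repository.lookup_processing_action(action_id)
    packet = ProcessingActionExecutionPacket(
        routing={"processing_action": action_data["name"], "client_token": client_token},
        payload={"execution_data": context},
    )
    return packet   # provided to the application system, which executes the selected action
```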
In some examples, the one or more processors are further configured to: collect application contextual data associated with an application identifier associated with the application, wherein the application contextual data is based on user interaction with the application within the group-based communication interface; and store the application contextual data associated with the application identifier within the group-based communication repository; and display within the application home interface at least a portion of the application contextual data retrieved from the group-based communication repository and application home configuration data generated as user input received from a developer client device associated with a developer user identifier associated with a developer user, the application, and the application contextual data. In various examples, the one or more processors are further configured to: store application settings preference data within the group-based communication repository, the application settings preference data being associated with the application and a user identifier associated with the client device; and display, within the application home interface, at least a portion of the application settings preference data; wherein the application home interface comprises an interactive settings pane comprising one or more interactive settings pane inputs configured based at least in part on user input received from a developer client device associated with a developer user identifier associated with the application. Further, in various examples, the one or more processors are further configured to: provide an application settings data packet to the application system, the applications settings data packet comprising application settings routing data and payload data, wherein the application settings routing data identifies a configurable application functionality corresponding to application settings preference data received at the interactive settings pane to be stored by the application system, and the payload data comprises the application settings preference data. Various examples are directed to methods for providing application data associated with an application within a group-based communication interface, an exemplary method comprising: upon detecting a trigger event associated with an application via a group-based communication interface, retrieving application data associated with the application from a group-based communication repository, the application data comprising application contextual data and application home interface contextual data; generating an application home interface associated with the application based at least in part on the application data retrieved from the group-based communication repository, the application home interface comprising: one or more application home interface pages configured to display at least a portion of the application data; and one or more executable processing action elements, each executable processing action element being associated with a respective processing action of the application and configured to initialize the processing action respectively associated therewith upon a selection of the executable processing action elements by a client device; and transmitting the application home interface to the client device for rendering within the group-based communication interface via a display device of the client device. 
In some examples, the application home interface may further comprise a user engagement pane configured to display application data received from an application system associated with the application; wherein the user engagement pane may be configured to reflect execution of one or more user engagement pane instructions associated with the application, the one or more user engagement pane instructions corresponding to one or more functionalities of the application. In various examples, an example method may further comprise receiving from the application system a notification signal associated with at least one of the one or more user engagement blocks defined by the user engagement pane upon the occurrence of a notification triggering event, the notification signal corresponding at least in part to the one or more user engagement pane instructions associated with the at least one of the user engagement blocks, wherein the notification triggering event may be based at least in part on user engagement with the application home interface; and rendering within the user engagement pane the notification signal associated with at least a portion of the application data and the at least one user engagement blocks. In some examples, each of the one or more executable processing action elements may be arranged relative to one another based at least in part on an executable processing action element priority order, wherein the executable processing action element priority order defines an organization of each of the one or more executable processing action elements relative to the other executable processing action elements. Further, in various examples, the executable processing action element priority order may be based at least in part on at least one of environmental contextual data and application contextual data associated with a user identifier associated with the client device. In various examples, an example method may further comprise receiving from the client device a selection of an executable processing action element of the one or more executable processing action elements displayed within the application home interface; upon receipt of the selection of the executable processing action element, retrieve processing action data associated with the selected processing action from the group-based communication repository; generating a processing action execution data packet comprising processing action routing data and payload data, the processing action routing data is generated based at least in part on the processing action data and identifies (1) a processing action to be performed by an application system associated with the application and (2) a client token identifying a client device requesting the processing action data, and the payload data comprising processing action execution data; and providing the processing action execution data packet to the application system to enable the application system to execute the selected processing action based at least in part on the payload data. Further, in some examples, an exemplary method may comprise retrieving environmental contextual data generated for the client device from the group-based communication repository; and populating at least a portion of an interactive dialog based at least in part on the environmental contextual data. 
In various examples, an example method may further comprise, upon detecting a trigger event associated with the application home interface via the group-based communication interface, determining an initial page of the one or more application home interface pages for display based at least in part on detection of one or more of a previously unvisited indicator, a previously visited indicator, and an unread message indicator associated with the application home interface and a user identifier associated with the client device. Further, in various examples, transmitting the application home interface to the client device for rendering within the group-based communication interface via the display device of the client device may comprise causing display of the initial page within the application home interface. In various examples, determining an initial page of the one or more application home interface pages for display may comprise: detecting an unread message indicator associated with the application home interface and the user identifier; and upon detecting an unread message indicator associated with the application home interface and the user identifier, selecting an application home interface message page as the initial page. In various examples, an example method may further comprise collecting application contextual data associated with an application identifier associated with the application, wherein the application contextual data is based on user interaction with the application within the group-based communication interface; storing the application contextual data associated with the application identifier within the group-based communication repository; and displaying within the application home interface at least a portion of the application contextual data retrieved from the group-based communication repository and application home configuration data generated as user input received from a developer client device associated with a developer user identifier associated with a developer user, the application, and the application contextual data. In some examples, an example method may further comprise storing application settings preference data within the group-based communication repository, the application settings preference data being associated with the application and a user identifier associated with the client device; and displaying, within the application home interface, at least a portion of the application settings preference data. In various examples, the application home interface may comprise an interactive settings pane comprising one or more interactive settings pane inputs configured based at least in part on user input received from a developer client device associated with a developer user identifier associated with the application. In some examples, an example method may further comprise providing an application settings data packet to an application system associated with the application, the applications settings data packet comprising application settings routing data and payload data, wherein the application settings routing data may identify a configurable application functionality corresponding to application settings preference data received at the interactive settings pane to be stored by the application system, and the payload data comprises the application settings preference data. 
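A minimal sketch of the initial-page determination described above, assuming simple boolean indicators and illustrative page names (only the unread-message rule is taken from the description; the other defaults are assumptions):

```python
def choose_initial_page(has_unread_messages: bool, previously_visited: bool) -> str:
    if has_unread_messages:
        return "messages"   # unread message indicator: select the message page as the initial page
    if not previously_visited:
        return "about"      # previously unvisited indicator: an assumed first-run default
    return "home"           # previously visited with nothing unread: an assumed default
```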
Various examples are directed to a computer program product for providing application data associated with an application within a group-based communication interface, the computer program product comprising at least one processor, and at least one non-transitory memory comprising instructions that, with the at least one processor, cause the computer program product to: upon detecting a trigger event associated with an application via a group-based communication interface, retrieve application data associated with the application from a group-based communication repository, the application data comprising application contextual data and application home interface contextual data; generate an application home interface associated with the application based at least in part on the application data retrieved from the group-based communication repository, the application home interface comprising: one or more application home interface pages configured to display at least a portion of the application data; and one or more executable processing action elements, each executable processing action element being associated with a respective processing action of the application and configured to initialize the processing action respectively associated therewith upon a selection of the executable processing action elements by a client device; and transmit the application home interface to the client device for rendering within the group-based communication interface via a display device of the client device. In some examples, the application home interface further may comprise a user engagement pane configured to display application data received from an application system associated with the application; wherein the user engagement pane is configured to reflect execution of one or more user engagement pane instructions associated with the application, the one or more user engagement pane instructions corresponding to one or more functionalities of the application. In various examples, the one or more processors are further configured to: receive from the application system a notification signal associated with at least one of the one or more user engagement blocks defined by the user engagement pane upon the occurrence of a notification triggering event, the notification signal corresponding at least in part to the one or more user engagement pane instructions associated with the at least one of the user engagement blocks, wherein the notification triggering event is based at least in part on user engagement with the application home interface; and render within the user engagement pane the notification signal associated with at least a portion of the application data and the at least one user engagement blocks. In some examples, each of the one or more executable processing action elements may be arranged relative to one another based at least in part on an executable processing action element priority order, wherein the executable processing action element priority order may define the organization of each of the one or more executable processing action elements relative to the other executable processing action elements. In various examples, the executable processing action element priority order is based at least in part on at least one of environmental contextual data and application contextual data associated with a user identifier associated with the client device. 
In some examples, the one or more processors may be further configured to: receive from the client device a selection of an executable processing action element of the one or more executable processing action elements displayed within the application home interface; upon receipt of the selection of the executable processing action element, retrieve processing action data associated with the selected processing action from the group-based communication repository; generate a processing action execution data packet comprising processing action routing data and payload data, the processing action routing data is generated based at least in part on the processing action data and identifies (1) a processing action to be performed by an application system associated with the application and (2) a client token identifying a client device requesting the processing action data, and the payload data comprising processing action execution data; and provide the processing action execution data packet to the application system to enable the application system to execute the selected processing action based at least in part on the payload data. In various examples, the one or more processors may be further configured to: retrieve environmental contextual data generated for the client device from the group-based communication repository; and populate at least a portion of an interactive dialog based at least in part on the environmental contextual data. In various examples, the one or more processors may be further configured to: upon detecting a trigger event associated with the application home interface via the group-based communication interface, determine an initial page of the one or more application home interface pages for display based at least in part on detection of one or more of a previously unvisited indicator, a previously visited indicator, and an unread message indicator associated with the application home interface and a user identifier associated with the client device. In various examples, transmitting the application home interface to the client device for rendering within the group-based communication interface via the display device of the client device may comprise causing display of the initial page within the application home interface. In some examples, determining an initial page of the one or more application home interface pages for display may comprise: detecting an unread message indicator associated with the application home interface and the user identifier; and upon detecting an unread message indicator associated with the application home interface and the user identifier, selecting an application home interface message page as the initial page.
In various examples, the one or more processors may be further configured to: collect application contextual data associated with an application identifier associated with the application, wherein the application contextual data is based on user interaction with the application within the group-based communication interface; store the application contextual data associated with the application identifier within the group-based communication repository; and display within the application home interface at least a portion of the application contextual data retrieved from the group-based communication repository and application home configuration data generated as user input received from a developer client device associated with a developer user identifier associated with a developer user, the application, and the application contextual data. In various examples, the one or more processors may be further configured to: store application settings preference data within the group-based communication repository, the application settings preference data being associated with the application and a user identifier associated with the client device; and display, within the application home interface, at least a portion of the application settings preference data. In certain circumstances, the application home interface may comprise an interactive settings pane comprising one or more interactive settings pane inputs configured based at least in part on user input received from a developer client device associated with a developer user identifier associated with the application. In various examples, the one or more processors may be further configured to: provide an application settings data packet to an application system associated with the application, the applications settings data packet comprising application settings routing data and payload data, wherein the application settings routing data identifies a configurable application functionality corresponding to application settings preference data received at the interactive settings pane to be stored by the application system, and the payload data comprises the application settings preference data. 
A second embodiment is directed to a system configured for providing an interactive developer interface of a group-based communication system, the system comprising at least one processor, and at least one non-transitory memory comprising instructions that, with the at least one processor, cause the system to: receive application data provided via an interactive developer interface from a developer client device, the application data comprising processing action data and user engagement pane data, wherein at least a portion of the processing action data defines a functionality of a processing action, and wherein at least a portion of the user engagement pane data comprises block data associated with one or more user engagement blocks configured to reflect execution of one or more user engagement pane instructions corresponding to one or more functionalities of an application; based at least in part on a determination that the processing action data satisfies a user interface criteria, generate one or more executable processing action elements for display with an application home interface associated with the application; and store the application data within a group-based communication repository; wherein the user engagement pane data is configured for display within the application user interface via the one or more user engagement blocks; and wherein the processing action data comprises processing action type data associated with the processing action. In various examples, the processing action is defined by one or more processing action characteristics, wherein the one or more processing action characteristics comprise a processing action identifier, one or more processing action parameters, and a processing action type, wherein the one or more processing action parameters define one or more input variables associated with the processing action and configured to facilitate the execution of the processing action based at least in part on corresponding user input. In some examples, the processing action type data comprises a processing action type associated with the processing action, wherein processing action type is defined at least in part by one or more required processing action parameters for executing the processing action; and wherein the processing action type may comprise one of a message processing action, a channel processing action, and a global processing action. Further, in various examples, the processing action type data is based at least in part on one or both of user input from the developer client device at the interactive developer interface and the one or more processing action parameters associated with the processing action. In various examples, the one or more processors are further configured to: receive processing action type data from the developer client device via the interactive developer interface; based at least in part on the processing action type data, determine that the one or more processing action parameters associated with the processing action are configured so as to ensure that the processing action can be operably executed via a group-based communication interface. 
In various examples, the application data received from the client device further comprises application informational data and application settings data, wherein the application informational data and the application settings data define at least a portion of application home interface configuration data such that at least one of the application settings data or the application informational data may be selectively retrieved by a group-based communication server to render at least a portion of an application home interface associated with the application, wherein the application settings data comprises one or more instructions related to one or more application functionalities within the group-based communication system. Various examples are directed to example methods for providing an interactive developer interface of a group-based communication system, the method comprising: receiving application data provided via an interactive developer interface from a developer client device, the application data comprising processing action data and user engagement pane data, wherein at least a portion of the processing action data defines a functionality of a processing action, and wherein at least a portion of the user engagement pane data comprises block data associated with one or more user engagement blocks configured to reflect execution of one or more user engagement pane instructions corresponding to one or more functionalities of an application; based at least in part on a determination that the processing action data satisfies a user interface criteria, generating one or more executable processing action elements for display with an application home interface associated with the application; and storing the application data within a group-based communication repository. In various examples, the user engagement pane data may be configured for display within the application home interface via the one or more user engagement blocks; and the processing action data may comprise processing action type data associated with the processing action. In various examples, a processing action may be defined by one or more processing action characteristics, wherein the one or more processing action characteristics comprise a processing action identifier, one or more processing action parameters, and a processing action type, wherein the one or more processing action parameters define one or more input variables associated with the processing action and configured to facilitate an execution of the processing action based at least in part on corresponding user input. In various examples, the processing action type data may comprise a processing action type associated with the processing action, wherein processing action type is defined at least in part by one or more required processing action parameters for executing the processing action; and wherein the processing action type may comprise one of a message processing action, a channel processing action, and a global processing action. In some examples, the processing action type data may be based at least in part on one or both of user input from the developer client device at the interactive developer interface and the one or more processing action parameters associated with the processing action.
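The check that a processing action's parameters are sufficient for its type might be sketched as follows; the required-parameter sets for the message, channel, and global processing action types are assumptions chosen only to illustrate the validation, not requirements stated in the disclosure:

```python
# Assumed required parameters per processing action type (illustrative only).
REQUIRED_PARAMETERS = {
    "message": {"message_id", "channel_id"},   # assumed: a message action needs its target message
    "channel": {"channel_id"},                 # assumed: a channel action needs its channel
    "global": set(),                           # assumed: a global action needs no target context
}

def validate_processing_action(action: dict) -> bool:
    """Return True if the action's configured parameters cover its type's required parameters."""
    required = REQUIRED_PARAMETERS.get(action["type"])
    if required is None:
        return False                           # unknown processing action type
    configured = {p["name"] for p in action.get("parameters", [])}
    return required <= configured
```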
In various examples, an exemplary method may further comprise receiving processing action type data from the developer client device via the interactive developer interface; and based at least in part on the processing action type data, determining that the one or more processing action parameters associated with the processing action are configured so as to ensure that the processing action can be operably executed via a group-based communication interface. In some examples, the application data received from the client device further comprises application informational data and application settings data, wherein the application informational data and the application settings data may define at least a portion of application home interface configuration data such that at least one of the application settings data or the application informational data may be selectively retrieved by a group-based communication server to render at least a portion of an application home interface associated with the application, wherein the application settings data comprises one or more instructions related to one or more application functionalities within the group-based communication system. Various examples are directed to a computer program product for providing an interactive developer interface of a group-based communication system. In various examples, the computer program product may comprise at least one processor, and at least one non-transitory memory comprising instructions that, with the at least one processor, cause the computer program product to: receive application data provided via an interactive developer interface from a developer client device, the application data comprising processing action data and user engagement pane data, wherein at least a portion of the processing action data defines a functionality of a processing action, and wherein at least a portion of the user engagement pane data comprises block data associated with one or more user engagement blocks configured to reflect execution of one or more user engagement pane instructions corresponding to one or more functionalities of an application; based at least in part on a determination that the processing action data satisfies a user interface criteria, generate one or more executable processing action elements for display with an application home interface associated with the application; and store the application data within a group-based communication repository. In various examples, the user engagement pane data may be configured for display within the application home interface via the one or more user engagement blocks; and the processing action data may comprise processing action type data associated with the processing action. In some examples, the processing action may be defined by one or more processing action characteristics, wherein the one or more processing action characteristics may comprise a processing action identifier, one or more processing action parameters, and a processing action type, wherein the one or more processing action parameters define one or more input variables associated with the processing action and configured to facilitate an execution of the processing action based at least in part on corresponding user input. 
In some examples, the processing action type data may comprise a processing action type associated with the processing action, wherein processing action type may be defined at least in part by one or more required processing action parameters for executing the processing action; and wherein the processing action type may comprise one of a message processing action, a channel processing action, and a global processing action. In various examples, the processing action type data may be based at least in part on one or both of user input from the developer client device at the interactive developer interface and the one or more processing action parameters associated with the processing action. In some examples, the one or more processors may be further configured to: receive processing action type data from the developer client device via the interactive developer interface; and based at least in part on the processing action type data, determine that the one or more processing action parameters associated with the processing action are configured so as to ensure that the processing action can be operably executed via a group-based communication interface. In various examples, the application data received from the client device may further comprise application informational data and application settings data, wherein the application informational data and the application settings data may define at least a portion of application home interface configuration data such that at least one of the application settings data or the application informational data may be selectively retrieved by a group-based communication server to render at least a portion of an application home interface associated with the application, wherein the application settings data may comprise one or more instructions related to one or more application functionalities within the group-based communication system. A third embodiment is directed to a system configured for indexing processing actions associated with a plurality of applications, the system comprising at least one processor, and at least one non-transitory memory comprising instructions that, with the at least one processor, cause the system to: receive application data for a plurality of applications, wherein the application data comprises processing action data corresponding to one or more processing actions executable by a corresponding application of the plurality of applications, wherein each of the one or more processing actions is defined by a plurality of processing action characteristics comprising an application identifier and a processing action identifier; store the application data associated with the application identifier and each of the one or more processing action identifiers within an application table for a group, at a group-based communication repository; and index at least one of the one or more processing actions within the application table for the group based at least in part on processing action characteristics.
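As a further non-limiting illustration, the following TypeScript sketch shows one possible way an application table could index processing actions by their characteristics; the Map-based index and the names used are assumptions made for illustration only:

// Hypothetical sketch of indexing processing actions by their characteristics within
// an application table for a group; the data layout is an illustrative assumption.

interface IndexedProcessingAction {
  applicationIdentifier: string;
  processingActionIdentifier: string;
  characteristics: Record<string, string>; // e.g. { type: 'message', name: 'Create task' }
}

class ApplicationTable {
  private actions: IndexedProcessingAction[] = [];
  // characteristic "name:value" -> processing action identifiers sharing that characteristic
  private characteristicIndex = new Map<string, Set<string>>();

  store(action: IndexedProcessingAction): void {
    this.actions.push(action);
    for (const [name, value] of Object.entries(action.characteristics)) {
      const key = `${name}:${value}`;
      const bucket = this.characteristicIndex.get(key) ?? new Set<string>();
      bucket.add(action.processingActionIdentifier);
      this.characteristicIndex.set(key, bucket);
    }
  }

  // Look up processing actions by a single characteristic, e.g. all 'message' actions.
  findByCharacteristic(name: string, value: string): IndexedProcessingAction[] {
    const ids = this.characteristicIndex.get(`${name}:${value}`) ?? new Set<string>();
    return this.actions.filter((a) => ids.has(a.processingActionIdentifier));
  }
}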
In some examples, the one or more processors are further configured to: generate one or more processing action characteristic identifiers, each associated with a processing action characteristic and a processing action identifier associated with a corresponding processing action, wherein a corresponding processing action comprises a processing action of the one or more processing actions that is defined at least in part by the processing action characteristic; and associate the application identifier with each of the one or more processing action characteristic identifiers, such that the application data comprises each of the processing action characteristic identifiers. In various examples, the application table comprises a processing action characteristic table identifying each of the plurality of processing action characteristics of each plurality of processing action characteristics defining each of the one or more processing actions; and the one or more processors are further configured to: receive environmental contextual data generated for a client device, wherein the environmental contextual data is generated based at least in part on interactions of the client device with the group-based communication system during a current connection session; generate relevance scores for each of the one or more processing action characteristics identified within the processing action characteristic table based at least in part on the environmental contextual data generated for the client device; based at least in part on the relevance scores, generate a contextual processing action characteristics list of at least one of the one or more processing action characteristics; retrieve processing action characteristic data associated with each of the processing action characteristic identifiers associated with each of the one or more processing action characteristics; based at least on the environmental contextual data, generate one or more contextual processing action lists of one or more processing actions associated with a processing action characteristic identifier associated with the one or more processing action characteristics, each of the one or more contextual processing action lists respectively corresponding to the at least one of the one or more of processing action characteristics of the contextual processing action characteristics list; retrieve processing action data associated with each of the processing actions of the processing actions of the one or more contextual processing action lists; and transmit the contextual processing action characteristic list and the corresponding one or more contextual processing action lists to the client device for presentation via a group-based communication interface; wherein each processing action of the one or more contextual processing action lists is associated with an application of the plurality of applications. 
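As a further non-limiting illustration, the following TypeScript sketch shows one possible relevance-scoring approach over processing action characteristics using environmental contextual data; the scoring heuristic (counting matching contextual signals) and the names used are assumptions made for illustration only:

// Hypothetical sketch of relevance scoring based on environmental contextual data
// and of building contextual processing action lists from the highest-scoring
// characteristics.

interface EnvironmentalContextualData {
  // Signals gathered from the client device's interactions during the current
  // connection session, e.g. the channel being viewed or recent search terms.
  signals: string[];
}

interface CharacteristicEntry {
  characteristicIdentifier: string;
  value: string;
  processingActionIdentifiers: string[];
}

function scoreCharacteristics(
  context: EnvironmentalContextualData,
  table: CharacteristicEntry[],
): { entry: CharacteristicEntry; score: number }[] {
  return table
    .map((entry) => ({
      entry,
      // Relevance here is simply how many contextual signals mention the value.
      score: context.signals.filter((s) =>
        s.toLowerCase().includes(entry.value.toLowerCase()),
      ).length,
    }))
    .sort((a, b) => b.score - a.score);
}

// Build the contextual processing action lists for the highest-scoring characteristics.
function buildContextualLists(
  context: EnvironmentalContextualData,
  table: CharacteristicEntry[],
  limit = 3,
): Map<string, string[]> {
  const lists = new Map<string, string[]>();
  for (const { entry, score } of scoreCharacteristics(context, table).slice(0, limit)) {
    if (score > 0) {
      lists.set(entry.characteristicIdentifier, entry.processingActionIdentifiers);
    }
  }
  return lists;
}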
In some examples, the one or more processors are further configured to: receive environmental contextual data generated for a client device, wherein the environmental contextual data is generated based at least in part on interactions of the client device with the group-based communication system during a current connection session; generate relevance scores for each of a plurality of applications identified within the application table based at least in part on the environmental contextual data generated for the client device; based at least in part on the relevance scores, generate a contextual application list of one or more of the plurality of applications; retrieve processing action data associated with each of the processing action identifiers associated with each of the one or more of the plurality of applications; based at least on the environmental contextual data, generate one or more contextual processing action lists, each of the one or more contextual processing action lists corresponding to an application of the one or more of the plurality of applications; and transmit the contextual processing action list of the one or more of the plurality of processing actions to the client device for presentation via a group-based communication interface; wherein each of the one or more contextual processing action lists comprises one or more processing actions associated with one or more processing action identifiers, each of the one or more processing action identifiers being associated with an application of one or more of the plurality of applications. In various examples, the one or more processors are further configured to: receive from a client device a processing action pin request associated with a user identifier associated with the client device, a group-based communication channel identifier, and a processing action associated with an application of a plurality of applications, wherein the user identifier is associated with access rights to a group-based communication channel associated with the group-based communication channel identifier, and wherein the processing action pin request is further associated with a processing action identifier and an application identifier associated with the application; associating the group-based communication channel identifier and the processing action identifier associated with the processing action pin request; rendering for display within a group-based communication channel interface associated with the group-based communication channel identifier an executable processing action element corresponding to the processing action identifier associated with the processing action pin request. Further, in various examples, the one or more processors are further configured to: receive from the client device associated with the processing action pin request a secondary processing action accessibility data associated with the group-based communication channel identifier and the processing action identifier, wherein the secondary processing action accessibility data comprises an executable instruction configured to at least partially restrict access to the processing action within the group-based communication channel. 
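As a further non-limiting illustration, the following TypeScript sketch shows one possible handling of a processing action pin request described above; the access-rights check and the in-memory pin store are assumptions made for illustration only:

// Hypothetical sketch of associating a channel identifier with a pinned processing
// action identifier after verifying the requesting user's access rights.

interface ProcessingActionPinRequest {
  userIdentifier: string;
  groupBasedCommunicationChannelIdentifier: string;
  processingActionIdentifier: string;
  applicationIdentifier: string;
}

interface PinStore {
  // channel identifier -> pinned processing action identifiers
  pins: Map<string, Set<string>>;
  userHasChannelAccess(userId: string, channelId: string): boolean;
}

function handlePinRequest(request: ProcessingActionPinRequest, store: PinStore): boolean {
  // The user identifier must be associated with access rights to the channel.
  if (!store.userHasChannelAccess(
    request.userIdentifier,
    request.groupBasedCommunicationChannelIdentifier,
  )) {
    return false;
  }
  // Associate the channel identifier with the processing action identifier.
  const channelPins =
    store.pins.get(request.groupBasedCommunicationChannelIdentifier) ?? new Set<string>();
  channelPins.add(request.processingActionIdentifier);
  store.pins.set(request.groupBasedCommunicationChannelIdentifier, channelPins);
  // An executable processing action element would then be rendered within the
  // channel interface; that rendering step is outside the scope of this sketch.
  return true;
}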
Various examples are directed to an exemplary method for indexing processing actions associated with a plurality of applications, the method comprising: receiving application data for a plurality of applications, wherein the application data comprises processing action data corresponding to one or more processing actions executable by a corresponding application of the plurality of applications, wherein each of the one or more processing actions is defined by a plurality of processing action characteristics comprising an application identifier and a processing action identifier; storing the application data associated with the application identifier and each of the one or more processing action identifiers within an application table for a group, at a group-based communication repository; and indexing at least one of the one or more processing actions within the application table for the group based at least in part on processing action characteristics. In some examples, the exemplary method may further comprise: generating one or more processing action characteristic identifiers, each associated with a processing action characteristic and a processing action identifier associated with a corresponding processing action, wherein a corresponding processing action comprises a processing action of the one or more processing actions that is defined at least in part by the processing action characteristic; and associating the application identifier with each of the one or more processing action characteristic identifiers, such that the application data comprises each of the processing action characteristic identifiers. In some examples, the exemplary method may further comprise: receiving environmental contextual data generated for a client device, wherein the environmental contextual data may be generated based at least in part on interactions of the client device with a group-based communication system during a current connection session; generating relevance scores for each of the one or more processing action characteristics identified within a processing action characteristic table based at least in part on the environmental contextual data generated for the client device; based at least in part on the relevance scores, generating a contextual processing action characteristics list of at least one of the one or more processing action characteristics; retrieving processing action characteristic data associated with each of the processing action characteristic identifiers associated with each of the one or more processing action characteristics; based at least on the environmental contextual data, generating one or more contextual processing action lists of one or more processing actions associated with a processing action characteristic identifier associated with the one or more processing action characteristics, each of the one or more contextual processing action lists respectively corresponding to the at least one of the one or more of processing action characteristics of the contextual processing action characteristics list; retrieving processing action data associated with each of the processing actions of the processing actions of the one or more contextual processing action lists; and transmitting the contextual processing action characteristic list and the one or more contextual processing action lists corresponding thereto to the client device for presentation via a group-based communication interface. 
In various examples, each processing action of the one or more contextual processing action lists may be associated with an application of the plurality of applications. In various examples, the application table may comprise the processing action characteristic table identifying each of the plurality of processing action characteristics of each plurality of processing action characteristics defining each of the one or more processing actions. In some examples, the exemplary method may further comprise: receiving environmental contextual data generated for a client device, wherein the environmental contextual data may be generated based at least in part on interactions of the client device with a group-based communication system during a current connection session; generating relevance scores for each of a plurality of applications identified within the application table based at least in part on the environmental contextual data generated for the client device; based at least in part on the relevance scores, generating a contextual application list of one or more of the plurality of applications; retrieving processing action data associated with each of the processing action identifiers associated with each of the one or more of the plurality of applications; based at least on the environmental contextual data, generating one or more contextual processing action lists, each of the one or more contextual processing action lists corresponding to an application of the one or more of the plurality of applications; and transmitting the contextual processing action lists of the one or more processing actions to the client device for presentation via a group-based communication interface. In various examples, each of the one or more contextual processing action lists may comprise one or more processing actions associated with one or more processing action identifiers, each of the one or more processing action identifiers being associated with an application of one or more of the plurality of applications. In some examples, the exemplary method may further comprise: receiving from a client device a processing action pin request associated with a user identifier associated with the client device, a group-based communication channel identifier, and a processing action associated with an application of a plurality of applications, wherein the user identifier may be associated with access rights to a group-based communication channel associated with the group-based communication channel identifier, and wherein the processing action pin request may be further associated with a processing action identifier and an application identifier associated with the application; associating the group-based communication channel identifier and the processing action identifier associated with the processing action pin request; and rendering for display within a group-based communication channel interface associated with the group-based communication channel identifier an executable processing action element corresponding to the processing action identifier associated with the processing action pin request. 
In some examples, the exemplary method may further comprise: receiving from the client device associated with the processing action pin request a secondary processing action accessibility data associated with the group-based communication channel identifier and the processing action identifier, wherein the secondary processing action accessibility data comprises an executable instruction configured to at least partially restrict access to the processing action within the group-based communication channel. Various examples may be directed to a computer program product for indexing processing actions associated with a plurality of applications. In various examples, the computer program product may comprise at least one processor, and at least one non-transitory memory comprising instructions that, with the at least one processor, cause the apparatus to: receive application data for a plurality of applications, wherein the application data comprises processing action data corresponding to one or more processing actions executable by a corresponding application of the plurality of applications, wherein each of the one or more processing actions is defined by a plurality of processing action characteristics comprising an application identifier and a processing action identifier; store the application data associated with the application identifier and each of the one or more processing action identifiers within an application table for a group, at a group-based communication repository; and index at least one of the one or more processing actions within the application table for the group based at least in part on processing action characteristics. In some examples, the one or more processors are further configured to: generate one or more processing action characteristic identifiers, each associated with a processing action characteristic and a processing action identifier associated with a corresponding processing action, wherein a corresponding processing action may comprise a processing action of the one or more processing actions that is defined at least in part by the processing action characteristic; and associate the application identifier with each of the one or more processing action characteristic identifiers, such that the application data comprises each of the processing action characteristic identifiers. In various examples, the application table may comprise a processing action characteristic table identifying each of the plurality of processing action characteristics of each plurality of processing action characteristics defining each of the one or more processing actions. 
In some examples, the one or more processors may be further configured to: receive environmental contextual data generated for a client device, wherein the environmental contextual data is generated based at least in part on interactions of the client device with a group-based communication system during a current connection session; generate relevance scores for each of the one or more processing action characteristics identified within the processing action characteristic table based at least in part on the environmental contextual data generated for the client device; based at least in part on the relevance scores, generate a contextual processing action characteristics list of at least one of the one or more processing action characteristics; retrieve processing action characteristic data associated with each of the processing action characteristic identifiers associated with each of the one or more processing action characteristics; based at least on the environmental contextual data, generate one or more contextual processing action lists of one or more processing actions associated with a processing action characteristic identifier associated with the one or more processing action characteristics, each of the one or more contextual processing action lists respectively corresponding to the at least one of the one or more processing action characteristics of the contextual processing action characteristics list; retrieve processing action data associated with each of the processing actions of the one or more contextual processing action lists; and transmit the contextual processing action characteristic list and the one or more contextual processing action lists corresponding thereto to the client device for presentation via a group-based communication interface. In certain circumstances, each processing action of the one or more contextual processing action lists may be associated with an application of the plurality of applications. In various examples, the one or more processors may be further configured to: receive environmental contextual data generated for a client device, wherein the environmental contextual data may be generated based at least in part on interactions of the client device with a group-based communication system during a current connection session; generate relevance scores for each of a plurality of applications identified within the application table based at least in part on the environmental contextual data generated for the client device; based at least in part on the relevance scores, generate a contextual application list of one or more of the plurality of applications; retrieve processing action data associated with each of the processing action identifiers associated with each of the one or more of the plurality of applications; based at least on the environmental contextual data, generate one or more contextual processing action lists, each of the one or more contextual processing action lists corresponding to an application of the one or more of the plurality of applications; and transmit the contextual processing action lists of the one or more processing actions to the client device for presentation via a group-based communication interface.
In some examples, each of the one or more contextual processing action lists may comprise one or more processing actions associated with one or more processing action identifiers, each of the one or more processing action identifiers being associated with an application of one or more of the plurality of applications. In various examples, the one or more processors may be further configured to: receive from a client device a processing action pin request associated with a user identifier associated with the client device, a group-based communication channel identifier, and a processing action associated with an application of a plurality of applications, wherein the user identifier may be associated with access rights to a group-based communication channel associated with the group-based communication channel identifier, and wherein the processing action pin request may be further associated with a processing action identifier and an application identifier associated with the application; associate the group-based communication channel identifier and the processing action identifier associated with the processing action pin request; and render for display within a group-based communication channel interface associated with the group-based communication channel identifier an executable processing action element corresponding to the processing action identifier associated with the processing action pin request. In some examples, the one or more processors are further configured to: receive from the client device associated with the processing action pin request a secondary processing action accessibility data associated with the group-based communication channel identifier and the processing action identifier, wherein the secondary processing action accessibility data may comprise an executable instruction configured to at least partially restrict access to the processing action within the group-based communication channel. The present disclosure more fully describes various examples with reference to the accompanying drawings. It should be understood that some, but not all embodiments are shown and described herein. Indeed, the embodiments may take many different forms, and accordingly this disclosure should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout.
Overview
As discussed herein, some examples of the present disclosure are directed to systems and methods of providing application data associated with an application within a group-based communication interface. Such application data may be consolidated within the group-based communication interface so as to facilitate search and/or access to various application data (e.g., across a plurality of applications). The group-based communication system is configured to implement various functionalities of a plurality of applications within the group-based communication system. Such configurations enable users of the group-based communication platform to initiate various actions within various systems (e.g., either external applications or the group-based communication system) based on commands and/or processes performed within a group-based communication interface of the group-based communication system. As used herein, a user may include an individual, a group of individuals, business, organization, and the like.
Users may access a group-based communication or messaging system using client devices. In various examples, to facilitate a meaningful interaction between a user and an application implemented within the group-based communication system, a singular application home interface configured to display various information associated with the application may be generated within the group-based communication interface upon a user's request. For example, the application home interface may consolidate various application-specific informational data (e.g., a title, brief description, application version/update information, and/or the like), contextual data (e.g., information about the various group-based communication channels, workspaces, and/or the like throughout the group-based communication system at which the application is installed, application usage information specific to a particular user), and various application settings menus within a single interface. Further, the interface may provide a messaging pane and/or a developer-configured user engagement pane, each configured to facilitate a user's direct communication to the developer of the application and engagement thereof. Additionally, the application home interface may comprise a list of each of the processing actions associated with the application that are available for a user to initialize. The application home interface functions as a convenient interface through which a user may initialize a processing action. Moreover, the comprehensive itemization of each of the processing actions associated with an application can facilitate user interaction with the application by introducing the user to various functionalities of which the user was previously unaware. Moreover, the present invention allows an application developer to personalize the informational content provided to a user at the centralized interactive interface. Such a configuration enables a developer to display prioritized application functionalities based on perceived growth potential and may facilitate the promotion of the developer's application throughout the group-based communication system. Various examples of the present invention are directed to providing each developer seeking to implement an application within the group-based communication system with an interactive developer interface. As described herein, each interactive interface may comprise various fillable fields configured to not only direct a developer through the application implementation process, but to function as systematic structures within which each application and processing action must be implemented so as to ensure the seamless implementation of the application throughout the group-based communication system. The interactive developer interface is configured to present a consistent interface layout to each developer so as to establish a processing action framework within the group-based communication system that is defined by application- and/or action-specific data organized throughout a plurality of server-defined trans-application data arrays so as to ensure system operability and functional consistency.
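As a further non-limiting illustration, the following TypeScript sketch shows one possible consolidation of the informational data, contextual data, settings, user engagement blocks, and available processing actions described above into a single application home interface payload; the field names are assumptions made for illustration only:

// Hypothetical sketch of the data categories an application home interface could
// consolidate; a real implementation would populate these from the repository.

interface ApplicationHomeInterfacePayload {
  informational: { title: string; description: string; version: string };
  contextual: { installedChannels: string[]; workspaceCount: number };
  settings: { [settingName: string]: boolean };
  userEngagementBlocks: { blockId: string; text: string }[];
  availableProcessingActions: { id: string; label: string }[];
}

// One way a server might consolidate these categories into a single payload for display.
function buildApplicationHome(
  informational: ApplicationHomeInterfacePayload['informational'],
  contextual: ApplicationHomeInterfacePayload['contextual'],
  settings: ApplicationHomeInterfacePayload['settings'],
  blocks: ApplicationHomeInterfacePayload['userEngagementBlocks'],
  actions: ApplicationHomeInterfacePayload['availableProcessingActions'],
): ApplicationHomeInterfacePayload {
  return {
    informational,
    contextual,
    settings,
    userEngagementBlocks: blocks,
    availableProcessingActions: actions,
  };
}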
Further, various examples of the present invention utilize the uniform configuration and internalized data collection associated with the application and/or processing action implementation process to enable a group-based communication system wherein each of the various processing actions associated with the plurality of applications may be indexed based on one or more characteristics. As described herein, indexing each of the plurality of processing actions may facilitate a broader, more robust, user-friendly searching functionality within the group-based communication system. For example, as described herein, various examples of the present invention are configured to generate a list of available processing actions to be presented to a user based on a user search of an application name, description of the processing action, processing action parameter, processing action type, and/or the like. Accordingly, the present disclosure provides a technological improvement that results in an improved search functionality within a group-based communication system. The improved search functionality may reduce a number of search queries a user may submit to receive a relevant response. A reduction in the number of search queries may result in a decreased amount of processing power and/or memory used by a user computing device and/or a server computing device configured to provide a response to a search query. As such, the techniques described herein may improve both the functioning of the user computing device and the server computing device. Additionally, the reduction in a number of search queries submitted may result in a reduction in the total amount of data transmitted via a network, such as from the user computing device to the server computing device configured to provide the response to the search query. Accordingly, the techniques described herein may result in additional network bandwidth being available for other functions. Furthermore, the present disclosure provides a technological improvement for providing an interactive developer interface of a group-based communication system. The interactive developer interface can enable a user to submit feedback to an application developer, such as to improve the application.
Example System Architecture
Methods, apparatuses, and computer program products of the present invention may be embodied by any of a variety of devices. For example, the method, apparatus, and computer program product of an example embodiment may be embodied by a network device, such as a server or other network entity, configured to communicate with one or more devices, such as one or more client devices. In some preferred and non-limiting examples, the computing device may include fixed computing devices, such as a personal computer or a computer workstation. Still further, example embodiments may be embodied by any of a variety of mobile devices, such as a portable digital assistant (PDA), mobile phone, smartphone, laptop computer, tablet computer, wearable device, or any combination of the aforementioned devices. FIG.1illustrates an example computing system100within which examples of the present invention may operate. Users may access a group-based communication system118via a communication network108using client devices102-106. The client devices102,104, and106(collectively referred to as client devices102-106) may include computer hardware and/or software that is configured to access a service made available by a server.
The server is often (but not always) on another computer system, in which case the client device accesses the service by way of a network. In at least one example, the client devices102-106include computer hardware and/or software that is configured to access the group-based communication system118. The term “group-based” as used herein refers to a system, channel, message, or virtual environment that has security sufficient such that it is accessible only to a defined group of users. The group may be defined by common access credentials such as those of an organization or commercial enterprise. Access may further be facilitated by a validated request to join or an invitation to join transmitted by one group member user to another non-member user. Group identifiers may be used to associate data, information, objects, messages, etc., with specific groups. The group-based communication system118may include a platform through which client devices102-106may communicate and interact in a group-based setting. The group-based communication system118may comprise a collection of computing services that are accessible to one or more client devices102-106, and that are operable to provide access to a plurality of software applications related to operations of databases. The client device102-106may access the group-based communication system118during a connection session in which the client device102-106maintains an active connection with the group-based communication system118. A single connection session may encompass a continuous time period during which the client device102-106maintains a connection with the group-based communication system118(e.g., between consecutive interruptions in connection, between consecutive occurrences of establishing and ending a connection, and/or the like). It should be understood that in some examples, a connection session may continue between consecutive occurrences of establishing and ending a connection between a client device102-106and the group-based communication system118, despite the inclusion of one or more short-duration interruptions, during which the client device102-106and/or the group-based communication system118is configured to cache any data to be exchanged which is generated and/or retrieved during the short-duration interruption. While a connection session remains active, it may be referred to as a “current connection session.” Once a current connection session ends (e.g., by termination of the connection between the client device and the group-based communication system), the current connection session becomes a prior connection session. In various examples, each connection session may have associated therewith a session identifier (e.g., alphanumeric string, symbol, etc.), which uniquely identifies a particular connection session, thereby enabling a client device102-106and/or the group-based communication system118to distinguish between a current connection session and prior connection sessions. In some examples, the group-based communication system118may take the form of one or more central servers disposed in communication with one or more additional servers running software applications and having access to one or more databases storing digital content items, application-related data, and/or the like. The group-based communication system118may also support client retention settings and other compliance aspects.
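As a further non-limiting illustration, the following TypeScript sketch shows one possible way of tracking connection sessions by session identifier and distinguishing a current connection session from prior connection sessions; the in-memory bookkeeping is an assumption made for illustration only:

// Hypothetical sketch of session tracking keyed by session identifier.

interface ConnectionSession {
  sessionIdentifier: string; // e.g. an alphanumeric string unique to the session
  startedAt: Date;
  endedAt?: Date; // undefined while the session is the current connection session
}

class SessionTracker {
  private sessions = new Map<string, ConnectionSession>();

  startSession(sessionIdentifier: string): ConnectionSession {
    const session: ConnectionSession = { sessionIdentifier, startedAt: new Date() };
    this.sessions.set(sessionIdentifier, session);
    return session;
  }

  endSession(sessionIdentifier: string): void {
    const session = this.sessions.get(sessionIdentifier);
    if (session) {
      session.endedAt = new Date(); // the session becomes a prior connection session
    }
  }

  isCurrent(sessionIdentifier: string): boolean {
    const session = this.sessions.get(sessionIdentifier);
    return session !== undefined && session.endedAt === undefined;
  }
}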
Further, the group-based communication system118may provide comprehensive third-party developer support that grants appropriate access to the data and allows third parties to build applications and bots to integrate with customer's workflows. Users of the group-based communication system118may be organized into organization groups (e.g., employees of each company may be a separate organization group) and each organization group may have one or more communication channels (e.g., group-based communication channels) to which users may be assigned or which the users may join (e.g., group-based communication channels may represent departments, geographic locations such as offices, product lines, user interests, topics, issues, and/or the like). Each group of the group-based communication system118may have associated therewith a group identifier usable to facilitate access control for a message (e.g., access to the message, such as having the message return as part of search results in response to a search query, may be restricted to those users having the group identifier associated with their user profile). In some examples, the group identifier may be used to determine context for the message (e.g., a description of the group, such as the name of an organization and/or a brief description of the organization, may be associated with the group identifier). The users of the group-based communication system118may join and/or create communication channels (e.g., group-based communication channels). Some group-based communication channels may be globally accessible to those users having a particular organizational group identifier associated with their user profile (i.e., users who are members of the organization). A user profile (e.g., associated with a user account, including user account details) may include information associated with a user, including, but not limited to, a user identifier, one or more group-based communication channel identifiers associated with group-based communication channels that the user has been granted access to, one or more group identifiers for groups with which the user is associated, an indication as to whether the user is an owner of any group-based communication channels, an indication as to whether the user has any group-based communication channel restrictions, a plurality of messages, a plurality of emojis, a plurality of conversations, a plurality of conversation topics, a plurality of historical conversation primitives associated with the user profile, an avatar, an email address, a real name (e.g., John Doe), a username (e.g., j doe), a password, a real name, a time zone, a status, conversation segments associated with the user, metadata indicating historical messages with same conversation primitive shared with other user profiles, a digital signature data structure, and the like. The user account details can include a subset designation of user credentials, such as, for example, login information for the user including the user's username and password. Access to some group-based communication channels may be restricted to members (e.g., users) of specified groups, whereby the group-based communication channels are accessible to those users having a particular group identifier associated with their user profile. 
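As a further non-limiting illustration, the following TypeScript sketch shows an abbreviated user profile record carrying a subset of the fields listed above, together with a simple group-identifier access check of the kind described; all names are assumptions made for illustration only:

// Hypothetical, abbreviated sketch of a user profile; only a subset of fields is shown.

interface UserProfile {
  userIdentifier: string;
  groupIdentifiers: string[]; // groups with which the user is associated
  groupBasedCommunicationChannelIdentifiers: string[]; // channels the user may access
  username: string;
  realName: string;
  emailAddress: string;
  timeZone: string;
  status?: string;
  isChannelOwner: boolean;
  channelRestrictions: string[];
}

// A simple access-control check: a message associated with a group identifier is only
// returned in search results to users carrying that group identifier.
function canReturnInSearch(profile: UserProfile, messageGroupIdentifier: string): boolean {
  return profile.groupIdentifiers.includes(messageGroupIdentifier);
}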
The group-based communication channel identifier may be used to facilitate access control for a message (e.g., access to the message, such as having the message return as part of search results in response to a search query, may be restricted to those users having the group-based communication channel identifier associated with their user profile, or who have the ability to join the group-based communication channel). The group-based communication channel identifier may be used to determine context for the message (e.g., a description of the group-based communication channel, such as a description of a project discussed in the group-based communication channel, may be associated with the group-based communication channel identifier). In various examples, one or more computing devices associated with the group-based communication system118, the user devices102-106and/or one or more third-party computing devices, such as one or more computing devices associated with one or more application systems112-116, may be configured to communicate via the communication network108(e.g., one or more networks). The communication network108may include any wired or wireless communication network including, for example, a wired or wireless local area network (LAN), personal area network (PAN), metropolitan area network (MAN), wide area network (WAN), or the like, as well as any hardware, software and/or firmware required to implement it (such as, e.g., network routers, etc.). For example, communication network108may include a cellular telephone, an 802.11, 802.16, 802.20, and/or WiMax network. Further, the communication network108may include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to TCP/IP based networking protocols. As discussed herein, the networking protocol is configured to enable data transmission via websocket communications. For instance, the networking protocol may be customized to suit the needs of the group-based communication system. In some examples, the protocol is a custom protocol of JSON objects sent via a websocket channel. In some examples, data may be transmitted via a plurality of protocols, such as JSON over RPC, JSON over REST/HTTP, and the like. In the illustrated embodiment, the group-based communication system118includes at least one group-based communication server(s)110accessible via the communication network108. The at least one group-based communication server(s)110may be configured to interact with various client devices102-106for receiving and/or disseminating messages for distribution within the communication channels described above and/or one or more direct messaging instances (e.g., private mode of communication between a user and one or more other users). The functionality of the group-based communication server(s)110may be provided via a single server or collection of servers having a common functionality, or the functionality of the group-based communication server(s)110may be segmented among a plurality of servers or collections of servers performing subsets of the described functionality of the group-based communication servers. For example, a first subset of group-based communication servers may be configured for receiving messages from client devices102-106and/or for transmitting messages to client devices102-106(e.g., via one or more interface servers). 
The group-based communication server(s)110may be in communication with a second subset of group-based communication server(s)110configured for collecting messages distributed within communication channels and for storing those messages within a message repository database for indexing and archiving. In at least one example, the group-based communication system118encompasses one or more group-based communication repositories120, which may define one or more cache memory storage areas and/or one or more long term storage areas, such as for storing historical data utilized for executing one or more models, as discussed herein. In some examples, the historical data may include environmental contextual data (and/or routing data) associated with a previous execution of a processing action (e.g., during a prior connection session). In some examples, the historical data may include environmental contextual data deemed relevant for the execution of the processing action under similar circumstances to those associated with a current connection session. For example, the historical data may indicate which processing actions are generally selected by one or more users during similar circumstances; which environmental contextual data is relevant for a processing action under similar circumstances; and/or the like. The historical data may encompass user historical data that is unique to a particular user and identifies how that particular user has interacted with the group-based communication system118in the past. In some examples, the historical data encompasses universal historical data, which identifies how a plurality of users have generally interacted with the group-based communication system118in the past under similar circumstances. In accordance with some examples, the similar circumstances are determined and/or monitored via artificial intelligence and/or machine learning algorithms which monitor generated environmental contextual data and a user's resulting interaction with the group-based communication system under the circumstances of the generated environmental contextual data. In some examples, the historical data may be consolidated and/or summarized into characteristics of the processing action and/or environmental contextual data associated with a particular user under the particular circumstances. In at least one example, the at least one group-based communication server(s)110is configured to receive messages transmitted from one or more client devices102-106, store the messages within a group-based communication repository120for individual communication channels, and/or transmit messages to appropriate client devices102-106, such as based on the communication channel, direct messaging instance, or the like. The messages (e.g., group-based communication messages) may include any electronically generated digital content object provided by a user that has security sufficient such that it is accessible only to a defined group of users and that is configured for display within a group-based communication channel. The messages may include any text, image, video, audio, or combination thereof provided by a user (using a client device). For instance, the user may transmit a first message that includes text as well as an image and a video. In such a case, the text, image, and video would comprise the message or digital content object. 
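As a further non-limiting illustration, the following TypeScript sketch shows one possible way a client could post such a message as a JSON object over a websocket channel, consistent with the JSON-over-websocket transport mentioned above; the endpoint, payload shape, and field names are assumptions made for illustration only:

// Hypothetical sketch of posting a message as a JSON object over a websocket channel.

interface OutgoingGroupBasedMessage {
  type: 'message';
  channelIdentifier: string;
  sendingUserIdentifier: string;
  contents: string; // text; image and video attachments could be referenced by URL
  attachments?: string[];
}

function sendMessage(socket: WebSocket, message: OutgoingGroupBasedMessage): void {
  // The custom protocol described above serializes each payload as a JSON object.
  socket.send(JSON.stringify(message));
}

// Example usage (the endpoint and identifiers are placeholders):
// const socket = new WebSocket('wss://example.invalid/group-based-communication');
// socket.onopen = () =>
//   sendMessage(socket, {
//     type: 'message',
//     channelIdentifier: 'ID_channel_1',
//     sendingUserIdentifier: 'ID_user_1',
//     contents: 'That is an interesting invention.',
//   });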
Each message sent or posted to a group-based communication channel of the group-based communication system includes metadata including one or more of a timestamp associated with the post of the message, a sending user identifier, a message identifier, message contents, a group identifier, a group-based communication channel identifier, a thread identifier, and/or other data associated with the message. Each of the foregoing identifiers may comprise ASCII text, a pointer, a memory address, and the like. The group-based communication repository120may include a computing location where data is stored, accessed, modified and otherwise maintained by the group-based communication system118. The stored data includes information that facilitates the operation of the group-based communication system118. The group-based communication repository120may be embodied as a data storage device or devices, as a separate database server or servers, or as a combination of data storage devices and separate database servers. Further, in some examples, the group-based communication repository120may be embodied as a distributed repository such that some of the stored data is stored centrally in a location within the group-based communication system118and other data stored in a single remote location or a plurality of remote locations. Alternatively, in some examples, the group-based communication repository120may be distributed over a plurality of remote storage locations only. The client devices102-106may be any computing device as defined above. Electronic message data exchanged between the group-based communication server(s)110and the client devices102-106may be provided in various forms and via various methods. In some preferred and non-limiting examples, one or more of the client devices102-106are mobile devices, such as smartphones or tablets. The one or more client devices102-106may execute an application (“app”) to interact with the group-based communication server(s)110. Such apps are typically designed to execute on mobile devices, such as smartphones or tablets. For example, an app may be provided that executes on mobile device operating systems such as Apple Inc.'s iOS®, Google Inc.'s Android®, or Microsoft Inc.'s Windows 10 Mobile®. These platforms typically provide frameworks that allow apps to communicate with one another, and with particular hardware and software components of mobile devices. For example, the mobile operating systems named above each provide frameworks for interacting with location services circuitry, wired and wireless network interfaces, user contacts, and other applications. Communication with hardware and software modules executing outside of the app is typically provided via application programming interfaces (APIs) provided by the mobile device operating system. Thus, via the app executing on the client devices102-106, these client devices102-106are configured for communicating with the group-based communication system118. In some preferred and non-limiting examples, the client devices102-106may interact with the group-based communication server(s)110via a web browser. The client devices102-106may also include various hardware or firmware designed to interact with the group-based communication server(s)110. Again, via the browser of the client devices102-106, the client devices102-106are configured for communicating with the group-based communication system118.
In some examples of an exemplary group-based communication system118, a message or messaging communication may be sent from a client device102-106to a group-based communication system118. In various implementations, messages may be sent to the group-based communication system118over communication network108directly by one of the client devices102-106. The messages may be sent to the group-based communication system118via an intermediary such as a message server, and/or the like. For example, a client device102-106may be a desktop, a laptop, a tablet, a smartphone, and/or the like that is executing a client application (e.g., a group-based communication app). In one implementation, the message may include data such as a message identifier, sending user identifier, a group identifier, a group-based communication channel identifier, message contents (e.g., text, emojis, images, links), attachments (e.g., file objects), message hierarchy data (e.g., the message may be a reply to another message), third-party metadata, and/or the like. In one embodiment, the client device102-106may provide the following example message, substantially in the form of a (Secure) Hypertext Transfer Protocol (“HTTP(S)”) POST message including eXtensible Markup Language (“XML”) formatted data, as provided below: POST /authrequest.php HTTP/1.1Host: www.example.comContent-Type: Application/XMLContent-Length: 667<?XML version =“1.0” encoding = “UTF-8”?><auth_request><timestamp>2020-12-31 23:59:59</timestamp><user_accounts_details><user_account_credentials><user_name>ID_user_1</user_name><password>abc123</password>//OPTIONAL <cookie>cookieID</cookie>//OPTIONAL    <digital_cert_link>www.mydigitalcertificate.com/[email protected]/mycertifcate.dc</digital_cert_link>//OPTIONAL <digital_certificate>_DATA_</digital_certificate></user_account_credentials></user_accounts_details><client_details>//iOS Client with App and Webkit//it should be noted that although several client details//sections are provided to show example variants of client//sources, further messages will include only on to save//space<client_IP>10.0.0.123</client_IP><user_agent_string>Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X)AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53</user_agent_string><client_product_type>iPhone6,1</client_product_type><client_serial_number>DNXXX1X1XXXX(</client_serial_number><client_UDID>3XXXXXXXXXXXXXXXXXXXXXXXXD</client_UDID><client_OS>iOS</client_OS><client_OS_version>7.1.1</client_OS_version><client_app_type>app with webkit</client_app_type><app_installed_flag>true</app_installed_flag><app_name>MSM.app</app_name><app_version>1.0 </app_version><app_webkit_name>Mobile Safari</client_webkit_name><client_version>537.51.2</client_version></client_details><client_details> //iOS Client with Webbrowser<client_IP>10.0.0.123</client_IP><user_agent_string>Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X)AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53</user_agent_string><client_product_type>iPhone6,1</client_product_type><client_serial_number>DNXXX1X1XXXX(</client_serial_number><client_UDID>3XXXXXXXXXXXXXXXXXXXXXXXXD</client UDID><client_OS>iOS</client_OS><client_OS_version>7.1.1</client_OS_version><client_app_type>web browser</client_app_type><client_name>Mobile Safari</client_name><client_version>9537.53</client_version></client_details><client_details> //Android Client with Webbrowser<client_IP>10.0.0.123</client_IP><user_agent_string>Mozilla/5.0 (Linux; U; 
Android 4.0.4; en-us; Nexus S Build/IMM76D)AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30</user_agent_string><client_product_type>Nexus S</client_product_type><client_serial_number>YXXXXXXXXZ</client_serial_number><client_UDID>FXXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX</client_UDID><client_OS>Android</client_OS><client_OS_version>4.0.4</client_OS_version><client_app_type>web browser</client_app_type><client_name>Mobile Safari</client_name><client_version>534.30</client_version></client_details><client_details> //Mac Desktop with Webbrowser<client_IP>10.0.0.123</client_IP><user_agent_string>Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3)AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14</user_agent_string><client_product_type>MacPro5,1</client_product_type><client_serial_number>YXXXXXXXXZ</client_serial_number><client_UDID>FXXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX</client_UDID><client_OS>Mac OS X</client_OS><client_OS_version>10.9.3</client_OS_version><client_app_type>web browser</client_app_type><client_name>Mobile Safari</client_name><client_version>537.75.14</client_version></client_details><message><message_identifier>ID_message_10</message_identifier><team_identifier>ID_team_1</team_identifier><channel_identifier>ID_channe1_1</channel_identifier><contents>That is an interesting invention. I have attached a copy our patent policy.</contents><attachments>patent_policy.pdf</attachments></message></auth_request> In the illustrated embodiment, the group-based communication system118comprises a plurality of group-based communication server(s)110configured to receive messages transmitted between a plurality of client devices102-106within a group-based communication channel identified by a channel identifier and/or a group identifier, and to facilitate dissemination of those messages among client devices102-106that collectively form the membership of the group-based communication channel. As used herein, a group-based communication channel represents a virtual communications environment or feed that is configured to display messaging communications posted by channel members (e.g., validated users accessing the environment using client devices) that are viewable only to the members of the group. The format of the group-based communication channel may appear differently to different members of the group-based communication channel; however, the content of the group-based communication channel (i.e., messaging communications) may be displayed to each member of the group-based communication channel. For instance, in one embodiment, a common set of group-based messaging communications will be displayed to each member of the respective group-based communication channel such that the content of the group-based communication channel (i.e., messaging communications) will not vary per member of the channel. However, in another embodiment, a member may join a group-based communication channel and only be able to view subsequent group-based messaging communications (as opposed to historical group-based messaging communications). The group-based communication channels are generally topic-oriented, long-lasting channels as opposed to ad hoc ephemeral conversations in conventional messaging apps. 
Each group-based communication channel may have associated therewith at least one group-based communication channel identifier including items of data by which the group-based communication channel may be uniquely identified (e.g., ASCII text, a pointer, a memory address, etc.). The group-based communication channels discussed herein may be private or public communication channels. A private group-based communication channel refers to a group-based communication channel with restricted access such that it is not generally accessible and/or searchable by other members of the group-based communication system118. For example, only those users or administrators who have knowledge of and permission to access (e.g., a group-based communication channel identifier for the private group-based communication channel is associated with their user profile after the user has been validated/authenticated) the private group-based communication channel may view content of the private group-based communication channel. In various examples, a private group-based communication channel may be associated with a group-based communication channel identifier. A public group-based communication channel refers to a group-based communication channel that may be accessible to any member of an organization associated therewith. For example, a member of an organization may search for and access a public group-based communication channel, regardless of the member's permissions. The member may then be able to transmit, receive, and/or react to messages via the public group-based communication channel. In some examples, data indicating responses to a message may be associated with the message. For example, responses to the message by other users may include reactions (e.g., selection of an emoji associated with the message, selection of a “like” button associated with the message), clicking on a hyperlink embedded in the message, replying to the message (e.g., posting a message to the group-based communication channel interface in response to the message), downloading a file associated with the message, sharing the message from one group-based communication channel to another group-based communication channel, pinning the message, starring the message, and/or the like. In one implementation, data regarding responses to the message by other users may be included with the message, and the message may be parsed (e.g., using PHP commands) to determine the responses. In another implementation, data regarding responses to the message may be retrieved from a database. For example, data regarding responses to the message may be retrieved via a MySQL database command similar to the following:

SELECT messageResponses
FROM MSM_Message
WHERE messageID = ID_message_10.

For example, data regarding responses to the message may be used to determine context for the message (e.g., a social score for the message from the perspective of some user). In another example, data regarding responses to the message may be analyzed to determine context regarding the user (e.g., the user's expertise in a topic may be determined based on the responses to the user's message regarding the topic). In some examples, attachments may be included with the message. If there are attachments, file objects may be associated with the message. In one implementation, the message may be parsed (e.g., using PHP commands) to determine file names of the attachments, as sketched below.
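By way of a purely illustrative, non-limiting sketch (the dictionary structure, function name, and topic mapping are hypothetical assumptions standing in for the PHP-based parsing referenced above), attachment file names may be extracted from a parsed message and mapped to a candidate topic as follows:

# Hypothetical sketch: read attachment file names from a parsed message and infer a
# candidate topic for each file name; the TOPIC_HINTS mapping is illustrative only.
TOPIC_HINTS = {"patent": "patents", "invoice": "billing", "roadmap": "planning"}

def attachment_topics(message):
    topics = {}
    for file_name in message.get("attachments", []):
        stem = file_name.rsplit(".", 1)[0].lower()
        for hint, topic in TOPIC_HINTS.items():
            if hint in stem:
                topics[file_name] = topic
    return topics

message = {"message_identifier": "ID_message_10", "attachments": ["patent_policy.pdf"]}
print(attachment_topics(message))  # {'patent_policy.pdf': 'patents'}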
For example, file contents may be analyzed to determine context for the message (e.g., a patent policy document may indicate that the message is associated with the topic “patents”). In some examples, third-party metadata may be associated with the message. For example, third-party metadata may provide additional context regarding the message or the user that is specific to a company, group, group-based communication channel, and/or the like. In one implementation, the message may be parsed (e.g., using PHP commands) to determine third-party metadata. For example, third-party metadata may indicate whether the user who sent the message is an authorized representative of the group-based communication channel (e.g., an authorized representative may be authorized by the company to respond to questions in the group-based communication channel). In some examples, a conversation primitive may be associated with the message. In one implementation, a conversation primitive is an element used to analyze, index, store, and/or the like messages. For example, the message may be analyzed by itself, and may form its own conversation primitive. In another example, the message may be analyzed along with other messages that make up a conversation, and the messages that make up the conversation may form a conversation primitive. In one implementation, the conversation primitive may be determined as the message, a specified number (e.g., two) of preceding messages and a specified number (e.g., two) of following messages. In another implementation, the conversation primitive may be determined based on analysis of topics discussed in the message and other messages (e.g., in the channel) and/or proximity (e.g., message send order proximity, message send time proximity) of these messages. In some examples, various metadata, determined as described above, and/or the contents of the message may be viewable by members of the associated group-based communication channel via a group-based communication channel interface. The group-based communication channel interface may include a virtual communications environment or feed that is configured to display messaging communications posted by channel members (e.g., validated users accessing the environment using client devices) that are viewable only to the members of the group. The format of the group-based communication channel interface may appear differently to different members of the group-based communication channel; however, the content of the group-based communication channel interface (i.e., messaging communications) will be displayed to each member of the group-based communication channel. For instance, a common set of group-based messaging communications will be displayed to each member of the respective group-based communication channel such that the content of the group-based communication channel interface (i.e., messaging communications) will not vary per member of the group-based communication channel. In some examples, messages may be transmitted via a direct messaging instance. In such examples, the messages may be viewable to members associated with the direct messaging instance via a direct messaging instance interface. In some examples, various metadata, determined as described above, and/or the contents of the message may be used to index the message (e.g., using the conversation primitive) and/or to facilitate various facets of searching (i.e., search queries that return results from the group-based communication servers110). 
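As one non-limiting sketch of the window-based conversation primitive described above (the function, list structure, and window size are hypothetical and not required by any embodiment), a conversation primitive for a message may be determined as the message together with a specified number of preceding and following messages:

# Hypothetical sketch: form a conversation primitive from a channel's ordered message
# list, taking the message plus a specified number of preceding and following messages.
def conversation_primitive(channel_messages, message_id, window=2):
    ids = [m["message_identifier"] for m in channel_messages]
    i = ids.index(message_id)
    start = max(0, i - window)
    return channel_messages[start:i + window + 1]

channel = [{"message_identifier": f"ID_message_{n}", "contents": f"msg {n}"} for n in range(1, 21)]
primitive = conversation_primitive(channel, "ID_message_10", window=2)
print([m["message_identifier"] for m in primitive])
# ['ID_message_8', 'ID_message_9', 'ID_message_10', 'ID_message_11', 'ID_message_12']

A topic- or proximity-based implementation, as also contemplated above, could instead group messages by send-time proximity or shared topic rather than a fixed window.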
Metadata associated with the message may be determined and the message may be indexed in the group-based communication server(s)110. In one embodiment, the message may be indexed such that a company's or a group's messages are indexed separately (e.g., in a separate index associated with the group and/or company that is not shared with other groups and/or companies). In one implementation, messages may be indexed at a separate distributed repository (e.g., to facilitate data isolation for security purposes). If there are attachments associated with the message, file contents of the associated files may be used to index such files in the group-based communication server(s)110to facilitate searching. In one embodiment, the files may be indexed such that a company's or a group's files are indexed at a separate distributed repository. Similarly, as discussed herein, app data associated with various application systems and/or processing actions may be stored in association with a particular group's messages, such that app data associated with a plurality of groups are stored separately. Examples of electronic message exchange among one or more client devices102-106and the group-based communication system118are described below in reference toFIG.1. As shown inFIG.1, the group-based communication system118enables individual client devices102-106to exchange objects (e.g., messages) with one another and to interact with one or more application systems112-116. To exchange messages and/or other objects between client devices102-106, individual client devices102-106transmit messages (e.g., text-based messages, file objects, video and/or audio streams, and/or the like) to the group-based communication system118. Those messages are ultimately provided to one or more group-based communication server(s)110, which indexes the messages and distributes those messages to the intended recipients (e.g., client devices102-106) of the message. In accordance with the embodiment shown inFIG.1, the client devices102-106are configured to display the received messages in a contextually-relevant user interface available to the user of the client device. For example, messages transmitted from a first client device102as a part of a group-based communication channel are displayed in a user interface displayed on client devices102-106associated with other members of the group-based communication channel. In at least one example, the user interface may include a group-based communication interface including a virtual communications environment configured to facilitate user interaction with the group-based communication system118. Each group-based communication interface may be accessible to and viewable by a select group of users, such as a group of employees of a business or organization. The group-based communication interface may include a plurality of workspaces (e.g., one or more communication channels associated with a particular identifier), group-based communication channels (e.g., a marketing channel, sales channel, accounting channel, etc.), direct messaging interfaces, or the like. As discussed in greater detail herein, messages, other objects, and/or other data may be provided to application systems112-116to initiate one or more processing actions executable within the respective application systems.
The application systems112-116may include software programs, applications, platforms, or services that are configured to communicate with the group-based communication system118and which service, manage, and/or perform actions that form various functions of an application that is accessible to a client device via a group-based communication interface. An application system112,114, and/or116, such as application system114, may operate on a compiled code base or repository that is separate and distinct from that which supports the group-based communication system118. The application system112,114, and/or116may comprise additional storage repositories (e.g., databases) associated with tasks, functions, and/or actions that may be performed via the application system112,114, and/or116. In some examples, the application system112,114, and/or116may communicate with the group-based communication system118, and vice versa, through one or more application program interfaces (APIs). In some examples, the application system112,114, and/or116receives tokens or other authentication credentials that are used to facilitate secure communication between the application system112,114, and/or116and the group-based communication system in view of group-based communication system network security layers or protocols (e.g., network firewall protocols). As various examples, an application system112,114, and/or116may be configured for executing a calendaring/scheduling app, a to-do list app, a service provider app, a software testing app, a storage repository app, and/or the like. As described herein, it should be understood that the term “application” may be used to refer to either an external application or an internal application (i.e., an application hosted within the group-based communication server(s)110). The processing action(s) may include any executable action performed by an application system112-116. The processing action may be embodied as a data generation process, a data manipulation process, and/or the like that is performed based at least in part on data included within a processing action execution data packet (e.g., data generated in response to user input received via a client device defining a configuration of one or more processing action parameters in order to execute the processing action) provided from the group-based communication system118to the application system112-116. In some examples, the processing action execution data packet may be associated with a processing action identifier, a user identifier, and/or a client device102,104, or106. As used herein, a data packet may include a collection of individual data elements that may be transmitted between a plurality of computing entities collectively, such that the included data remains associated therewith. The data packet may be configured to store data (e.g., routing data) therein with a standardized formatting, such that computing entities may be configured to automatically determine the type of data stored within the data packet. For example, a data packet may comprise substantive data to be passed between computing entities stored within a payload of the data packet, and the data packet may comprise metadata associated with the generation of the data packet that is stored within a routing data portion of the data packet.
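Purely as a non-limiting sketch of the packet structure just described (the dataclass, field names, and example identifiers below are assumptions for illustration, not a required wire format), a processing action execution data packet carrying a routing data portion and a payload portion may be represented as:

# Hypothetical sketch: a data packet with a routing portion (metadata identifying the
# application system and processing action) and a payload portion (substantive data).
from dataclasses import dataclass, field

@dataclass
class ProcessingActionExecutionDataPacket:
    routing: dict                                   # identifies where/how to deliver the packet
    payload: dict = field(default_factory=dict)     # substantive data; may be minimal or empty

packet = ProcessingActionExecutionDataPacket(
    routing={
        "application_identifier": "ID_app_1",
        "processing_action_identifier": "ID_action_7",
        "user_identifier": "ID_user_1",
        "client_device": "ID_device_102",
    },
    payload={
        "processing_action_parameters": {"due_date": "2020-12-31"},
        "message_identifier": "ID_message_10",
    },
)
print(packet.routing["processing_action_identifier"], bool(packet.payload))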
In some examples, the processing action data packet provided for execution of a processing action may comprise routing data (e.g., which identifies the application system and/or the processing action to be executed) and payload data, which encompasses substantive data for which the processing action is executed. In some examples, the payload data comprises a message, processing action execution data, or other object and/or encompasses environmental contextual data (e.g., data indicative of a user's interaction with a group-based communication interface and/or an application system at a time of or within a threshold time prior to the request of a processing action), as described herein, to be utilized in executing the processing action. However, it should be understood that in some examples, the payload data may be minimal (or empty), for processing actions not requiring an input for execution. As various examples, a processing action may be the creation of a calendar object (e.g., via a scheduling app), the creation of a “to-do” item (e.g., via a productivity app), the creation of a service ticket (e.g., via a service app), the creation of a bookmark (e.g., via a link compilation app), the creation of a file (e.g., via a document editing app), the initiation of a call (e.g., via a video conferencing app), and/or the like. In some examples, processing actions associated with an application may be configured at least in part based on processing action data provided by a developer associated with the application. In some examples, the processing actions may be embodied as one of a plurality of processing action types. The processing action type of a processing action may be one of the processing action characteristics by which the processing action may be defined. For example, processing actions may comprise global processing actions, channel processing actions, message processing actions, object processing actions, and/or the like. As used herein, global processing actions are defined by a developer user associated with the global processing action as not being dependent on a particular channel, message, object, and/or the like. Global processing actions may be requested, for example, via an application home interface or a group-based communication interface menu (e.g., a file menu associated with the group-based communication interface displayed via a client device). As a non-limiting example, a global processing action may be the generation of a task item that is personal to a user. In various examples, the processing actions may include channel processing actions, message processing actions, and/or object processing actions. A channel processing action may include a processing action that is dependent on the content of a particular group-based communication channel. For example, channel processing actions may be utilized for disseminating the result of a processing action to all (or some portion) of the members of a particular group-based communication channel. As another example, channel processing actions may utilize multiple group-based messages exchanged within a group-based communication channel as input (e.g., as payload data of a request for initiation of the processing action). Channel processing actions may be requested, for example, via a channel-specific menu (e.g., a menu adjacent to and/or associated with a user input portion for sharing messages and/or other objects within the channel). 
A message processing action may include a processing action that is requested for initiation with respect to a particular group-based message. The group-based message associated with particular message processing actions may be provided as payload data with the request. As an example, message processing actions may encompass generating a task item for a particular user to address some content of a particular message. Message processing actions may be requested, for example, via a message-specific menu (e.g., a menu accessible via a graphical user interface element located adjacent a particular message). An object processing action may include a processing action that is requested for initiation with respect to a particular group-based communication object. The group-based communication object associated with the particular object processing action may be provided as payload data with the request. As an example, object processing actions may encompass generating a task item for a particular user to address some attribute of a particular object. Object processing actions may be requested, for example, via an object specific menu (e.g., a menu accessible via a graphical user interface element located adjacent a particular object). In some examples, each processing action type may be requested via discrete user interface elements presented to a user as a part of a group-based communication interface. In some examples, user interface elements associated with each of the processing action types may be presented simultaneously to the user as a part of the group-based communication interface. For example, a first user interface element corresponding to global processing actions may be presented to a user as a part of a global menu bar; a second user interface element corresponding to channel processing actions may be presented to a user as a part of a channel menu bar; a third user interface element corresponding to message processing actions may be accessible at and/or adjacent to each displayed message; a fourth user interface element corresponding to object processing actions may be presented at and/or adjacent to each displayed object. In various examples, the processing actions may include processing action data, which may include a collection of data associated with a processing action that is capable of being transmitted, received, and/or stored. Processing action data may include data which defines the functionality of the processing action. For example, processing action data comprises data corresponding to each of the plurality of processing action characteristics (e.g., application identifier, processing action identifier, processing action description, one or more processing action parameters, processing action type, and/or the like). In some examples, processing action data may be configured by a developer associated with an application and/or a group-based communication server. In some examples, processing action data may be associated with an application identifier, a processing action identifier, one or more processing action characteristic identifiers, an executable processing action element, and/or an application home interface. In some examples, the processing action data may include at least a portion of application data associated with an application. The application data may include a collection of data associated with the application that is capable of being transmitted, received, and/or stored. 
In various examples, the application data may include data associated with an application system which defines the implementation and/or functionality of the application within a group-based communication system. For example, application data may comprise processing action data, application informational data, application settings data, application home interface configuration data, application contextual data (e.g., application home interface contextual data), and/or the like. In some examples, application data may be configured by a developer associated with an application and/or a group-based communication server. As discussed above, the processing action data may include one or more processing action characteristics. The processing action characteristic(s) may include data which describes and/or defines, at least in part, one or more aspects of a processing action. For example, a processing action characteristic may comprise an application identifier, a processing action identifier, a processing action description, one or more processing action parameters, a processing action type, and/or the like. In some examples, a processing action parameter may define at least a part of the executable instructions associated with the processing action that may enable the execution of the processing action. In some examples, a processing action parameter may be represented as an input variable associated with the processing action that is required to facilitate the execution of the processing action based at least in part on corresponding user input. In some examples, a processing action parameter may be designated as either required or optional in relation to the execution of the processing action. In various examples, a processing action identifier may include one or more items of data by which a processing action may be uniquely identified. For example, a processing action identifier may comprise ASCII text, a pointer, a memory address, and the like. Further, as described herein, a processing action characteristic identifier may include one or more items of data by which a processing action characteristic may be uniquely identified. For example, a group-based processing action characteristic identifier may comprise ASCII text, a pointer, a memory address, and the like. In some examples, processing actions may be made available to client devices102-106on a group-basis (e.g., such that individual processing actions are available to every member of a particular group), on a communication channel basis (e.g., such that individual processing actions are available to every member of a particular communication channel), on an individual basis (e.g., such that individual processing actions are available to certain individual client devices102-106), on a sending user identifier basis (e.g., such that individual processing actions are available only for certain messages transmitted by particular users, such that the message is associated with a particular sending user identifier), and/or the like. As an added limitation, certain processing actions may only be executable via client devices102-106that are directly authenticated with a particular application system configured to execute the processing action (as indicated by the dashed lines between the individual client devices102-106and example application system112-116). In various examples, the processing actions may be made available to client devices102-106via an executable processing action element. 
The executable processing action element may include one or more discrete user interface elements (e.g., a selectable button) corresponding to a processing action that is presented to a user as a part of a group-based communication interface. In some examples, an executable processing action element may be configured to initialize a processing action associated therewith upon (e.g., responsive to) being selected via user input from a client device102-106. In some examples, an executable processing action element may be selectively presented throughout a group-based communication interface based at least in part on the processing action type of the processing action with which the element is associated. For example, an executable processing action element corresponding to global processing actions may be presented to a user as a part of a global processing action menu, an application home interface, and/or a quick-access global processing action element displayed within the group-based communication interface. In some examples, an executable processing action element corresponding to a channel processing action may be presented to a user as a part of a channel processing action menu displayed within a group-based communication channel interface. In some examples, an executable processing action element corresponding to a message processing action may be accessible at and/or adjacent to each displayed message. The processing actions may be made available by the application system112-116based on developer interaction with the group-based communication system118setting access limitations for the processing actions. Those processing actions may comprise one or more discrete functions provided by the application system. For example, a single function of the application system may be called via the processing actions, or a plurality of processing actions, collectively considered a workflow characterized by passing input and/or output between each of the plurality of functions, may define a processing action. In some examples, a workflow may rely on one or more functions performed by the group-based communication system to begin a workflow, to end a workflow, and/or between other functions of a workflow. For example, a workflow may comprise functions performed by the application system112-116to generate an output passed back to the group-based communication system118, that output causing the group-based communication system118to execute one or more additional functions, which may be utilized by one or more additional functions of the application system112-116. In some examples, a developer associated with the application system112-116may provide user input to the group-based communication system identifying the availability of one or more processing actions and/or identifying the processing action type to thereby enable the group-based communication system to determine how the processing action is to be made available to users. For example, user input may specify that a particular processing action is a message processing action, to be made available to users via message-specific menus.
Example Apparatuses Utilized with Various Embodiments
Each of the group-based communication server(s)110may be embodied by one or more computing systems, such as apparatus200shown inFIG.2.
The apparatus200may include processor202, memory204, input/output circuitry206, communications circuitry208, group-based communication circuitry210, application home interface circuitry212, and application processing action circuitry214. The apparatus200may be configured to execute the operations described herein with respect toFIGS.9A-11and14A-14C. Although these components202-214are described with respect to functional limitations, it should be understood that the particular implementations necessarily include the use of particular hardware. It should also be understood that certain of these components202-214may include similar or common hardware. For example, two sets of circuitries may both leverage use of the same processor, network interface, storage medium, or the like to perform their associated functions, such that duplicate hardware is not required for each set of circuitries. In some examples, the processor202(and/or co-processor or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory204via a bus for passing information among components of the apparatus. The memory204is non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory204may be an electronic storage device (e.g., a computer-readable storage medium). The memory204may be configured to store information, data, content, applications, instructions, or the like for enabling the apparatus200to carry out various functions in accordance with example embodiments of the present invention. The processor202may be embodied in a number of different ways and may, for example, include one or more processing devices configured to perform independently. In some preferred and non-limiting examples, the processor202may include one or more processors configured in tandem via a bus to enable independent execution of instructions, pipelining, and/or multithreading. The use of the term “processing circuitry” may be understood to include a single core processor, a multi-core processor, multiple processors internal to the apparatus, and/or remote or “cloud” processors. In some preferred and non-limiting examples, the processor202may be configured to execute instructions stored in the memory204or otherwise accessible to the processor202. In some preferred and non-limiting examples, the processor202may be configured to execute hard-coded functionalities. As such, whether configured by hardware or software methods, or by a combination thereof, the processor202may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present invention while configured accordingly. Alternatively, as another example, when the processor202is embodied as an executor of software instructions, the instructions may specifically configure the processor202to perform the algorithms and/or operations described herein when the instructions are executed. As just one example, the processor202may be configured to maintain one or more communication channels connecting a plurality of client devices102-106to enable message sharing therebetween. The processor202ensures that messages intended for exchange between the client devices102-106within the particular communication channel are properly disseminated to those client devices102-106for display within respective display windows provided via the client devices102-106.
Moreover, the processor202may be configured to synchronize messages exchanged on a particular communication channel with a database for storage and/or indexing of messages therein. In some examples, the processor202may provide stored and/or indexed messages for dissemination to client devices102-106. In some examples, the apparatus200may include input/output circuitry206that may, in turn, be in communication with processor202to provide output to the user and, in some examples, to receive an indication of a user input. The input/output circuitry206may comprise a user interface and may include a display, and may comprise a web user interface, a mobile application, a client device, a kiosk, or the like. In some examples, the input/output circuitry206may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. The processor and/or user interface circuitry comprising the processor may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor (e.g., memory204, and/or the like). The communication circuitry208may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device, circuitry, or module in communication with the apparatus200. In this regard, the communication circuitry208may include, for example, a network interface for enabling communications with a wired or wireless communication network. For example, the communication circuitry208may include one or more network interface cards, antennae, buses, switches, routers, modems, and supporting hardware and/or software, or any other device suitable for enabling communications via a network. Additionally or alternatively, the communication circuitry208may include the circuitry for interacting with the antenna/antennae to cause transmission of signals via the antenna/antennae or to handle receipt of signals received via the antenna/antennae. Group-based communication circuitry210includes hardware configured to support a group-based communication system118. The group-based communication circuitry210may utilize processing circuitry, such as the processor202, to perform these actions. The group-based communication circuitry210may send and/or receive data from group-based communication repository120. In some implementations, the sent and/or received data may be of digital content objects organized among a plurality of group-based communication channels. It should also be appreciated that, in some examples, the group-based communication circuitry210may include a separate processor, specially configured field programmable gate array (FPGA), or application specific interface circuit (ASIC). The group-based communication circuitry210may be implemented using hardware components of the apparatus200configured by either hardware or software for implementing these planned functions. The application home interface circuitry212may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to generate an application home interface based at least in part on application data and/or application home interface configuration data. 
In various examples, the application home interface configuration data may include a collection of data generated based at least in part on user input received from a developer client device associated with an application. In various examples, application home interface configuration data may be generated based on user input received from the developer via an interactive developer interface. In some examples, application home interface configuration data may comprise one or more executable instructions configured to facilitate the generation of an application home interface and/or the display of developer-provided information therein. In some examples, application home interface configuration data may comprise application informational data and/or application settings data. In various examples, application informational data may include data providing various information about an application. For example, application informational data may comprise a detailed description of the application, the developer, various application functionalities, application reviews, and/or application history. In some examples, application informational data may include at least a portion of application contextual data associated with the application such that the application informational data may detail various application contextual data (e.g., usage data associated with engagement of the application within the group-based communication system (e.g., the number and/or name of group-based communication channels in which the application is associated), and/or the like). In some examples, at least a portion of the application informational data may be generated at least in part by user input from a developer associated with the application. In some examples, at least a portion of the application settings data may be rendered in the group-based communication interface (e.g., within an application home interface). In some examples, application settings data may include data defining at least a portion of a settings framework associated with an application, which may be defined at least in part by a developer associated with the application. For example, the application settings data may define which application settings are available to be configured by a user. The application settings data may be selectable and/or configurable by a developer of the application at the time of application integration into a group-based communication system or any time thereafter. In some examples, the application settings data may define one or more of the interactive settings pane inputs rendered at the interactive settings pane. In some examples, application settings data may be generated by a group-based communication server. In some examples, at least a portion of the application settings data may be rendered in an interactive settings pane within a group-based communication interface (e.g., within an application home interface). In various examples, the applications settings data may include application settings preferences data including one or more preferences associated with a user identifier and generated based at least in part on user input at a client device corresponding to at least one of the application settings. In some examples, the application settings preference data may represent a user-preferred method by which one or more functionalities of an application are to be executed. 
In certain circumstances, application settings preference data may comprise a default setting configuration as set by a developer or the group-based communication server. In various examples, the application home interface circuitry212may be configured to process one or more executable instructions generated based at least in part on user engagement by a client device with an element within an application home interface. The application home interface circuitry212may utilize processing circuitry, such as the processor202, to perform these actions. The application home interface circuitry212may send and/or receive data from group-based communication repository120. In some implementations, the sent and/or received data may be application data, processing action data, application home interface configuration data, and/or other data of a group-based communication data corpus (e.g., collection of data (e.g., group-based communication data work objects, group-based communication messages, group-based communication channels, and user profiles associated with the group-based communication system, etc.) that is received by a group-based communication system through the group-based communication interfaces). It should also be appreciated that, in some examples, the application home interface circuitry212may include a separate processor, specially configured field programmable gate array (FPGA), or application specific interface circuit (ASIC). The application processing action circuitry214may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to, upon detecting user engagement by a client device with an executable processing action element associated with a processing action of an application, initialize the processing action and facilitate the execution of the processing action by the application. The application processing action circuitry214may utilize processing circuitry, such as the processor202, to perform these actions. The application processing action circuitry214may send and/or receive data from group-based communication repository120. In some implementations, the sent and/or received data may be processing action data, environmental contextual data, or other data of a group-based communication data corpus. It should also be appreciated that, in some examples, the application processing action circuitry214may include a separate processor, specially configured field programmable gate array (FPGA), or application specific interface circuit (ASIC). In various examples, the environmental contextual data may include contextual data (e.g., data indicative of a user's interaction with group-based communication interface and/or an application system) that is indicative of a user's interaction with a group-based communication interface at the time of, or immediately prior to the request of a processing action. 
In some examples, the environmental contextual data may refer to a graphical interface from which the user requests execution of a processing action (e.g., a graphical interface associated with a particular communication channel, a graphical interface associated with a particular group, a graphical interface associated with a particular application system, and/or the like); a history of graphical interfaces visited (e.g., during a current connection session or spanning multiple connection sessions) prior to arriving at the current graphical interface from which the user requests execution of the processing action, and the route taken between those graphical interfaces before the user requests execution of a processing action; the identity and/or characteristics of users, messages, and/or other objects visible to the user in a graphical user interface from which execution of a processing action is requested (e.g., the content of the graphical interface displayed to the user), and/or the like. In various examples, the environmental contextual data may include a listing, table, or other data structure comprising one or more user identifiers, session identifiers, active group-based communication channel identifiers (indicative of a channel currently being viewed by a user (via a client device)), active group identifiers (indicative of a group currently being viewed by a user (via a client device)), one or more prior group-based communication channel identifiers (indicative of one or more channels viewed during the current connection session and/or prior connection sessions), one or more application identifiers, one or more processing action search identifiers, and/or the like. The environmental contextual data may comprise additional identifying data as well, such as time stamps, dates, user identifiers of users with whom the user has corresponded (e.g., a most-recently contacted user), and/or the like. Environmental contextual data may be generated and/or collected by the group-based communication system and/or a client device. The environmental contextual data may be stored in a cache memory storage area in some examples, which may be cleared upon the occurrence of certain events (e.g., the elapsing of a defined period of time from the generation of the environmental contextual data, the generation of a defined amount of newer-generated environmental contextual data, closing an application on the client device associated with the group-based communication system, and/or the like). It is also noted that all or some of the information discussed herein can be based on data that is received, generated and/or maintained by one or more components of apparatus200. In some examples, one or more application systems (such as a remote cloud computing and/or data storage system) may also be leveraged to provide at least some of the functionality discussed herein. The term “circuitry” should be understood broadly to include hardware and, in some examples, software for configuring the hardware. With respect to components of each apparatus200, the term “circuitry” as used herein should therefore be understood to include particular hardware configured to perform the functions associated with the particular circuitry as described herein. For example, in some examples, “circuitry” may include processing circuitry, storage media, network interfaces, input/output devices, and the like. In some examples, other elements of the apparatus200may provide or supplement the functionality of particular circuitry.
For example, the processor202may provide processing functionality, the memory204may provide storage functionality, the communication circuitry208may provide network interface functionality, and the like. As will be appreciated, any such computer program instructions and/or other types of code may be loaded onto a computer, processor or other programmable apparatus's circuitry to produce a machine, such that the computer, processor or other programmable circuitry that executes the code on the machine creates the means for implementing various functions, including those described herein. As described above and as will be appreciated based on this disclosure, examples of the present invention may be configured as methods, mobile devices, backend network devices, and the like. Accordingly, examples may comprise various means including entirely of hardware or any combination of software and hardware. Furthermore, examples may take the form of a computer program product on at least one non-transitory computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. Any suitable computer-readable storage medium may be utilized including non-transitory hard disks, CD-ROMs, flash memory, optical storage devices, or magnetic storage devices. Moreover, although not shown, various examples of a group-based communication system118may comprise one or more databases configured for storing and/or indexing messages exchanged within various group-based communication channels.
Application Home Interface
Configuring an Exemplary Application Home Interface
The group-based communication server110may be configured to generate an application home interface for rendering within a group-based communication interface. An example application home interface300configuration is presented inFIG.3. In various examples, the application home interface300may represent a centralized interactive virtual environment within a group-based communication interface configured to provide a user with application data associated with an application so as to facilitate user interaction with the application within the group-based communications system. The application home interface300may be configured to render application data associated with an application within a group-based communication interface so as to facilitate user interaction with the application within the group-based communications system. The application home interface300may be associated with an application identifier (e.g., items of data by which an application may be uniquely identified (e.g., ASCII text, a pointer, memory address, etc.)). The application home interface300may represent a centralized surface within a group-based communication interface configured to provide a user with information regarding each processing action associated with an application, the basic functionality of the application, a high-level description about the application, one or more notifications and/or messages received from an application system associated with the application, application usage data (e.g., application contextual data), application settings data, and/or the like. In some examples, an application home interface300may comprise one or more application home interface pages, one or more executable processing action elements, a welcome pane, and/or a user engagement pane.
In various examples, each application home interface associated with each of the plurality of applications may comprise at least substantially similar structural layouts (e.g., with respect to the organization of the one or more application home interface elements within the interface) as defined by a group-based communication server. Conversely, the content of each application home interface (e.g., the information displayed within each of the application home interface elements) may be configured based on user input provided by a developer associated with the application. In some examples, an application home interface300may be configured at least in part based on application home interface configuration data provided by a developer via an interactive developer interface. For example, an application home interface may be configured to provide information regarding each processing action associated with an application, the basic functionality of the application, a high-level description about the application, one or more notifications and/or messages received from an application system associated with the application, application contextual data, application settings data, and/or the like. The application contextual data may include data indicative of a user's engagement with an application within a group-based communication system. In some examples, the application contextual data may include usage data (e.g., historical data, usage rate data, a favorite application identifier, and/or the like) associated with one or more user identifiers of the group-based communication system. For example, application contextual data may comprise the number and/or name of each group-based communication channel with which the application is associated. In some examples, the application contextual data may include application home interface contextual data. In such examples, the application home interface contextual data may include data indicative of user engagement with an application home interface within a group-based communication interface. In some examples, the application home interface contextual data may be associated with an application identifier, an application home interface identifier, a client device identifier, and/or a user identifier. In various examples, the application home interface contextual data may refer to usage data (e.g., historical data, usage rate data, and/or the like) associated with one or more user identifiers. For example, application home interface contextual data may comprise a previously visited indicator, a previously unvisited indicator, an unread message indicator, and/or an abandoned page indicator associated with an application home interface identifier and a user identifier. As used herein, an indicator may include one or more items of data associated with one or more elements of a group-based communication system that indicates that a particular condition corresponding to the one or more elements associated therewith is present. In various examples, an indicator may comprise a textual or graphical statement generated as a representation that a given condition is present. For example, an indicator may be a data structure comprising a flag, or a record of a data structure whereby a logical “1” indicates that a given condition is present and a logical “0” indicates that the given condition is not present.
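As one purely illustrative, non-limiting sketch of such indicators (the flag names and record layout below are hypothetical examples, not a required schema), application home interface contextual data may carry indicators as simple logical flags associated with an application home interface identifier and a user identifier:

# Hypothetical sketch: indicator flags recorded for a given user and application home
# interface; a logical 1 marks the corresponding condition as present.
home_interface_contextual_data = {
    "application_identifier": "ID_app_1",
    "application_home_interface_identifier": "ID_home_1",
    "user_identifier": "ID_user_1",
    "previously_visited_indicator": 1,   # the user has visited this home interface before
    "unread_message_indicator": 0,       # no unread application messages for this user
    "abandoned_page_indicator": 0,       # the user did not abandon a partially viewed page
}

def condition_present(contextual_data, indicator_name):
    return contextual_data.get(indicator_name, 0) == 1

print(condition_present(home_interface_contextual_data, "previously_visited_indicator"))  # True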
As depicted inFIG.3, the application home interface300may comprise various elements such as, for example, an application title element302, an application home interface search bar303, an application home interface welcome pane310, one or more executable processing action elements320, a user engagement pane330, and one or more application home interface pages340. In various examples, the group-based communication server110may be configured to generate the application home interface300responsive to receiving a selection of an application identifier element301associated with the application displayed within the group-based communication interface. As shown inFIG.3, the application title element302may be rendered within the application home interface300and may be associated with an application identifier associated with the application and configured so as to display the title of the application. Further, in various examples, the application title element302may be further configured to display an image associated with the application (e.g., an application logo). In various examples, the application home interface search bar303is configured to receive user input and facilitate searching for otherwise unlisted processing actions not associated with the one or more executable processing action elements rendered within the application home interface. In various examples, the group-based communication server110may be configured to render a contextual list, as described herein, within a graphical interface component at the application home interface in response to user input at the application home interface search bar303received from the client device102. In various examples, based at least in part on environmental contextual data associated with the client device102, the contextual list may include one or more of the processing actions associated with the application112associated with the application home interface. In various examples, the application home interface welcome pane310may be configured to display various informational data configured to represent a high-level description about the application and one or more functionalities thereof within the group-based communication system118. In various examples, the application home interface welcome pane310may be further configured to display an image associated with the application (e.g., an application logo). The application home interface welcome pane310may be configured based at least in part on user input received from a developer client device associated with a developer user identifier associated with the application. For example, the application home interface welcome pane310may be configured based at least in part on application home interface configuration data. Further, the application home interface300may comprise one or more executable processing action elements320, each corresponding to a respective processing action of the application. An executable processing action element of the one or more executable processing action elements320may comprise a discrete user interface element (e.g., a selectable button) configured, for example, to initialize the processing action associated therewith responsive to being selected via user input from a client device. In various examples, each executable processing action element may be configured to display at least one of the processing action characteristics (e.g., processing action title and/or processing action description) of the processing action associated therewith. 
In some examples, the one or more executable processing action elements320may be selectively presented throughout a group-based communication interface (e.g., within an application home interface300) based at least in part on the respective processing action types of each of the one or more processing actions with which the one or more elements320are associated. For example, the application home interface300may be configured to display executable processing action elements320associated with global processing actions (i.e. those processing actions wherein a message identifier and/or channel identifier are not required as processing action parameters in order for the processing action to be executed), as described herein. In various examples, the one or more executable processing action elements320may be selectively displayed and/or spatially arranged relative to one another based at least in part on application home interface configuration data comprising an executable processing action element priority order, wherein the executable processing action element priority order defines the organization of each of the one or more executable processing action elements relative to the other executable processing action elements. In various examples, the executable processing action element priority order may be based at least in part on user input received from the developer client device associated with the developer user identifier such that the one or more executable processing action elements320are organized according to a priority rank (e.g., the executable processing action element associated with the highest priority processing action being arranged first) as established by the developer user. Further, in various examples, the executable processing action element priority order may be based at least in part on environmental contextual data and/or application contextual data associated with the one or more processing actions and/or a user identifier. For example, the executable processing action element priority order may be configured such that the processing action that is most frequently executed by application (i.e. the most popular) or the processing action which is most frequently initialized by a user associated with the user identifier (i.e. the user's favorite processing action) may be associated with the highest relative priority, and thus arranged in a corresponding first position within the application home interface300. In various examples, the one or more executable processing action elements320may correspond to only a subset of a plurality of processing actions associated with an application. In such a circumstance, an application home interface may be configured to render an interactive element corresponding to an option for searching for other processing actions associated with the application (for example, the application home interface300comprises an interactive interface element with an option for selecting “More . . . ”). As various examples, a processing action may be the creation of a calendar object (e.g., via a scheduling app), the creation of a “to-do” item (e.g., via a productivity app), the creation of a service ticket (e.g., via a service app), the creation of a bookmark (e.g., via a link compilation app), the creation of a file (e.g., via a document editing app), the initiation of a call (e.g., via a video conferencing app), and/or the like. 
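By way of a non-limiting sketch of the executable processing action element priority order described above (the field names, tie-breaking scheme, and example identifiers are illustrative assumptions only), elements may be arranged by a developer-assigned rank and then by observed usage:

# Hypothetical sketch: order executable processing action elements by a developer-assigned
# priority rank, breaking ties by overall execution frequency and then by how often the
# requesting user has initialized each processing action.
def order_elements(elements, usage_counts, user_counts):
    return sorted(
        elements,
        key=lambda e: (
            e.get("developer_priority_rank", float("inf")),            # developer order first
            -usage_counts.get(e["processing_action_identifier"], 0),   # then overall popularity
            -user_counts.get(e["processing_action_identifier"], 0),    # then the user's favorites
        ),
    )

elements = [
    {"processing_action_identifier": "create_ticket", "developer_priority_rank": 2},
    {"processing_action_identifier": "create_task", "developer_priority_rank": 1},
    {"processing_action_identifier": "create_event", "developer_priority_rank": 2},
]
ordered = order_elements(elements, {"create_event": 40, "create_ticket": 10}, {})
print([e["processing_action_identifier"] for e in ordered])
# ['create_task', 'create_event', 'create_ticket']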
In various examples, the application home interface300may comprise a user engagement pane330comprising a user engagement interface configured to display user engagement pane data so as to reflect the execution of one or more user engagement pane instructions associated with the application. As used herein, a pane may include a defined area within a group-based communication interface configured for rendering various data as determined and described herein. The pane may be embodied as a container, which may be populated with data received from an external data source. In various examples, the user engagement pane330may be configured to render user engagement pane data and receive user input associated with the user engagement pane data. In various examples, a developer associated with the application may configure the user engagement interface with block kits, as described herein, provided by the group-based communication server110via transmitting block data to the server. The block data may include any data, data set, or data packet that is sent from an application and may be used by a group-based communication server of a group-based communication system for rendering a user engagement interface within a group-based communication interface associated with a client device. For example, the block data may include multiple "block arrays," each block array being associated with a respective block to be rendered for display within the user engagement interface. In various examples, the user engagement interface may comprise one or more user engagement blocks331and may be configured to display block data including one or more block arrays, each block array being respectively associated with a user engagement block of the one or more user engagement blocks331. A block array may comprise a plurality of block element values—electronically generated values associated with a respective attribute of a block element that may be used to define how the block element may be displayed within a user engagement block so as to reflect the block element configuration defined by a developer—associated with a user engagement block type. In various examples, the user engagement blocks331may include one or more designated sections or areas within a group-based communication interface that are used for displaying at least a portion of user engagement pane data. In some examples, a user engagement block331may be configured to reflect execution of one or more user engagement pane instructions corresponding to one or more functionalities of an application. As described herein, a block array may comprise a plurality of block element values associated with a particular user engagement block type. In various examples, a block element value may be an electronically generated value associated with an attribute of a block element, such as block elements331A-B, that may be used to define how the block elements331A-B may be displayed within a user engagement block331so as to reflect the block element configuration defined by a developer.
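To make the relationship between block data, block arrays, block elements, and block element values more concrete, the following is a minimal TypeScript sketch of how such structures might be modeled. The type names, field names, and the particular block and element types shown are assumptions made for illustration only; they are not drawn from any actual block kit schema of the group-based communication system.

```typescript
// Illustrative sketch only; names, fields, and type unions are hypothetical.

// A block element value describes one attribute of a block element
// (e.g., its text content or display style), as configured by a developer.
interface BlockElementValue {
  attribute: string;                     // e.g., "text", "style", "altText"
  value: string | number | boolean;
}

// A block element is rendered inside a user engagement block
// (e.g., a text element or an overflow menu button such as 331A or 331B).
interface BlockElement {
  elementId: string;
  elementType: "text" | "button" | "overflow_menu" | "image";
  values: BlockElementValue[];
}

// A block array corresponds to one user engagement block and carries the
// element values that control how that block is displayed.
interface BlockArray {
  blockType: "text" | "divider" | "image" | "action" | "meta"; // illustrative subset
  elements: BlockElement[];
}

// Block data transmitted from an application system to the server and used
// to render the user engagement interface for a given user.
interface BlockData {
  applicationId: string;
  userId: string;
  blocks: BlockArray[];                  // one block array per user engagement block
}
```

In this sketch, each BlockArray plays the role of a block array described above, and a server rendering the user engagement pane would iterate over blocks to produce one user engagement block per array.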
In various examples, each user engagement block331may comprise a user engagement block type of a plurality of user engagement block types available for rendering within the group-based communication interface, the plurality of user engagement block types comprising one or more of a text block type, a thumbnail block type, a divider block type, an image block type, a video block type, a meta block type, an action block type, a poll block type, a file block type, a call block type, a combination thereof, and/or the like. In various examples, the user engagement interface may be defined by a customizable block configuration comprising the one or more user engagement blocks331—as well as the one or more block elements331A-B defined therein. In various examples, the customizable block configuration may be based at least in part on user input generated from a developer client device associated with the developer user identifier. In various examples, each user engagement block331may be associated with one or more user engagement pane instructions and configured to reflect the execution of the one or more user engagement pane instructions associated therewith by the application system112-116. In various examples, the one or more user engagement pane instructions may correspond to one or more functionalities of the application. In various examples, as shown inFIG.3, while the customizable block configuration (e.g., the various block types associated with each of the user engagement blocks and their respective functionalities) may be defined based at least in part on user input generated by the developer associated with the application, at least a portion of the block data displayed within the one or more user engagement blocks may be generated based at least in part on the user identifier and/or user input generated by the client device102associated with the user identifier. For example, the block type of user engagement block331, the display of a textual block element331A corresponding to a user engagement pane instruction, and the display of an overflow menu button element331B within the user engagement block331may be based on developer input, while the particular block data to be displayed at block element331A may be based at least in part on user input generated by the client device102and/or application data associated with the user identifier. The group-based communication server110may be configured to generate a user engagement pane within a group-based communication interface (e.g., an application home interface) at least partially in accordance with the systems and methods described in U.S. patent application Ser. No. 15/978,013, filed on May 11, 2018 the contents of which are incorporated herein by reference in their entirety. In various examples, the application home interface300may comprise one or more application home interface pages340configured to display at least a portion of application data associated with an application. In certain circumstances, an application home interface may comprise a plurality of application home interface pages, which may each be configured for alternative display within the application home interface. The application home interface page340may include an interface element renderable to display an area by which at least a portion of application data associated with an application may be displayed. 
For example, an application home interface page340may be configured to display application message data associated with an application system, application informational data, application settings data, and/or the like. In some examples, an application home interface may comprise a plurality of application home interface pages, which may each be configured for alternative display within the application home interface. In examples wherein the application home interface300comprises a plurality of application home interface pages340, each page may be configured to display different application data. The group-based communication server110may be configured to display an application home interface page responsive to receiving a selection of an application home interface page identifier element associated therewith. In some examples wherein the application home interface comprises a plurality of application home interface pages, each page may be configured to display different application data. For example, the one or more application home interface pages340may be configured to display various processing action data, application message data associated with an application system, application informational data, application settings data, and/or the like. In some examples, the one or more application home interface pages340may comprise an application home interface home page, an application home interface message page, an application home interface about page, and/or an application home interface settings page. For example,FIG.3shows an exemplary embodiment of an application home interface300rendering an application home interface home page341. An operational example of an application home interface400is presented inFIG.4. As shown inFIG.4, application home interface400is associated with application, wherein the application title element402is rendered within the application home interface400and associated with an application identifier associated with the application so as to display the title of the application, “Reminders.” Further, the application title element402is configured to display the application logo associated with the exemplary application “Reminders.” As shown, the group-based communication interface comprises an application identifier element401associated with an application identifier associated with the exemplary “Reminders” application such that the group-based communication server110may be configured to generate the application home interface400associated with “Reminders” responsive to receiving a selection thereof. As illustrated inFIG.4, the exemplary application home interface400comprises an application home interface welcome pane410, one or more executable processing action elements420, and a user engagement pane430. The application home interface welcome pane410is configured to display various informational data associated with the “Reminders” application. As shown, in various examples, the application home interface welcome pane410may comprise a hyperlink configured to direct a user to a portion of the group-based communication interface (e.g., within the application home interface) configured to display at least a portion of the application information data associated with the application. 
The one or more executable processing action elements420of the exemplary application home interface400comprise a first executable processing action element421and a second executable processing action element422, each corresponding to a respective processing action of the "Reminders" application. For example, the first executable processing action element421is associated with a processing action of the "Reminders" application titled "Create a Reminder," and the second executable processing action element422is associated with a processing action of the "Reminders" application titled "Delete All Completed Reminders." Each of the one or more executable processing action elements420is configured to display, within the rendered element, a plurality of processing action characteristics associated with the respective processing actions. For example, the first executable processing action element421is configured to display both the processing action title and an associated processing action description, indicating that the first executable processing action element421may function to create a new reminder upon execution. As a further example, the second executable processing action element422is configured to display both the processing action title and the associated processing action description, indicating that the second executable processing action element422may function to delete all completed reminders upon execution. The exemplary application home interface400further comprises a user engagement pane430configured to render a user engagement interface. As shown, the one or more user engagement blocks comprise a first user engagement block431, a second user engagement block432, and a third user engagement block433, each corresponding to a respective functionality of the "Reminders" application. For example, the first, second, and third user engagement blocks431,432,433are configured to display today's reminders, upcoming reminders, and reminders assigned by the user associated with the user identifier, respectively, each user engagement block corresponding to one or more user engagement pane instructions. As shown, the one or more block elements of the first user engagement block431may include block elements431A-431E. Block elements431A and431C, for example, comprise text elements generated based at least in part on application data associated with the user identifier and/or user input received from the client device102associated with the user identifier. The block elements431A,431C may be configured to display application data corresponding to the application functionality and the user engagement pane instructions associated with the user engagement block431. As shown, for example, block elements431A and431C are each configured to display a reminder (e.g., "Pick up dry cleaning from It's Dry Cleaning Time," "Present findings on dashboards") associated with the user identifier. In various examples, the application data displayed at each block element may be associated with a block element identifier. Further, in various examples, the block elements431B and431D may comprise meta elements corresponding respectively to the block elements431A and431C, each being configured to display data (e.g., metadata) associated with a respective block element identifier of each block element431A,431C.
For example, where block elements431A,431C are configured to display respective reminders associated with the user identifier, block elements431B and431D may be configured to display data associated with a reminder, such as a reminder deadline and/or reminder time (e.g., “Due at 9 AM,” “Due at 3 PM”). FIG.5shows another example group-based communication interface providing an application home interface. Specifically,FIG.5shows an exemplary embodiment of an application home interface500rendering an application home interface message page542. In various examples, an application home interface500may be configured to display application message data associated with an application and a user identifier received from an application system112-116and/or a client device102associated with the user identifier. In various examples, the application home interface500may be configured to receive and/or display the application message data551within an application messaging pane550. In various examples, the application messaging pane550may comprise a message bar552configured to receive message content generated at the client device102associated with the user identifier. In various examples, application message data551may comprise message content generated by the application system112-116corresponding to one or more functionalities of the application within the group-based communication system118. Further, in various examples, application message data551may comprise message content generated by the client device102at the message bar552corresponding to one or more functionalities of the application within the group-based communication system118. Message content displayed within the application home interface500may be representative of a correspondence between a user associated with the user identifier and the application system112-116. In various examples, as described herein with respect to group-based communication messages, application message data551may further include data such as a message identifier, sending user identifier, application identifier, an application home interface identifier, attachments (e.g., files), message hierarchy data (e.g., the message may be a reply to another message), third-party metadata, an unread message identifier, and/or the like. In various examples, the group-based communication server110may be configured to receive, generate, and display application message data551associated with an application home interface500such that the application home interface500may function as a private group-based communication channel interface, as described herein, wherein both the user identifier associated with the client device and a user identifier associated with a user profile associated with the application system are associated with access rights to the group-based communication channel (i.e. the application home interface). In various examples, as described herein, the group-based communication server110may be configured to display an unread message indicator within the application home interface500. For example, an unread message indicator may be rendered within an application home interface500proximate an application home interface page identifier542and/or the application message data comprising the unread message content (e.g., application message data551). FIG.6shows another example group-based communication interface providing an application home interface. 
Specifically,FIG.6shows an exemplary embodiment of an application home interface600rendering an application home interface about page643. In various examples, an application home interface600may be configured to display at least a portion of application data comprising application informational data and/or application contextual data. As described herein, application informational data may comprise identifying information about the application (e.g., the title and/or subtitle of the application) and a detailed description of the application, the developer, various application functionalities, application reviews, and/or application history (e.g., application update history, currently installed application version identifier, application installation date, and/or the like). In some examples, at least a portion of the application informational data may be generated at least in part by user input from a developer associated with the application. Further, as described herein, application contextual data may comprise data indicative of user engagement with the application within a group-based communication system. For example, application contextual data may comprise usage data (e.g., historical data, usage rate data, one or more favorite application identifiers, and/or the like) associated with one or more user identifiers of the group-based communication system118. In various examples, at least a portion of the application data configured for display within the application home interface600may be stored within a group-based communication repository120, from which the group-based communication server110may be configured to retrieve the data. As illustrated inFIG.6, the exemplary application home interface600is configured to render both application informational data and application contextual data within the application home interface600. For example, the application home interface600displays at least a portion of the application contextual data associated with the application at both a first interface element661and a third interface element663. As shown, the application contextual data displayed at the first interface element661provides the number of group-based communication workspaces and the number of group-based communication channels associated with the application. In various examples, a group-based communication workspace identifier and a group-based communication channel identifier associated with one or more of the workspaces and/or channels with which the application is associated may be displayed at the first interface element661within the application home interface600. Further, as shown inFIG.6, the application contextual data displayed at the third interface element663may provide one or more group-based communication channel identifiers associated with respective group-based communication channels that each utilize the application (i.e. group-based communication channels that are associated with the application). In various examples, the rendered application contextual data may further comprise data such as, for example, the user identifier associated with a user who added the application to the group-based communication channel.
In various examples, the application contextual data may be further associated with a user identifier associated with the client device102, such that the application contextual data displayed within the application home interface600may comprise personalized usage data associated with the user identifier corresponding to the user's interaction with the application throughout the group-based communication system118. As a further example, the exemplary application home interface600may be configured to display at least a portion of the application informational data associated with the application at a second interface element662. As shown, the application informational data displayed at the second interface element662comprises a detailed description of the application as provided by a developer and a developer user identifier associated with a developer (e.g., a developing company). FIG.7shows another example group-based communication interface providing an application home interface. Specifically,FIG.7shows an exemplary embodiment of an application home interface700rendering an application home interface settings page744. In various examples, an application home interface700may be configured to display at least a portion of application data comprising application settings data. As described herein, application settings data may comprise data associated with a user identifier, an application home interface, and an application which defines at least a portion of a settings framework associated with the application. In various examples, the application settings data may be generated at least in part based on user input at a client device associated with a developer associated with the application. For example, the application settings data may define which application settings are available to be configured by a user within the group-based communication system118. The application settings data may be selectable and/or configurable by a developer of the application at the time of application integration into a group-based communication system or any time thereafter. Further, in various examples, at least a portion of the application settings data may be generated by the group-based communication server110. In various examples, the group-based communication server110may be configured to generate application settings data defining one or more default settings for each application of the plurality of applications implemented within the group-based communication system. As described herein, application settings preference data may represent a user-preferred method by which one or more functionalities of an application are to be executed. As shown inFIG.7, the application home interface700may comprise an interactive settings pane770within which the group-based communication server110may be configured to render at least a portion of the application settings data. In various examples, the interactive settings pane770may be configured for rendering application settings data. In certain examples, the interactive settings pane770may be configured to accept user input at one or more interactive settings pane inputs (e.g., as a selection of one or more of a plurality of available settings options, as freeform input, and/or the like), the user input defining application settings preference data. The one or more interactive settings pane inputs may be defined at least in part by application settings data.
In some examples, the interactive settings pane may be further configured for rendering application settings preference data generated in response to the user input. In some examples, the interactive settings pane770may be a defined pane within an application home interface. In various examples, the interactive settings pane770may be determined based on application settings data, which may further define one or more of the interactive settings pane inputs rendered at the interactive settings pane770. In various examples, the application settings data associated with the user identifier and the application may be modified based on user input at the application home interface700from the client device102associated with the user identifier. In various examples, each of the one or more interactive settings pane inputs may correspond to a respective application setting. The group-based communication server110may be configured to generate application settings preference data associated with the user identifier based on the user input. For example, application settings preference data may be generated based on user input at the one or more interactive settings pane inputs, such that the application settings preference data corresponds to at least a portion of the application settings data configured by the group-based communication server110and/or the developer user (e.g., corresponding to at least one of the application settings). As illustrated inFIG.7, the interactive settings pane770may be configured at least in part by a developer associated with the application. As shown, the interactive settings pane may comprise a block kit configuration, as described herein, comprising blocks771-775that are collectively configured according to a customizable block configuration defined by the developer. Each block771-775is configured to display block data associated with one or more instructions associated with a respective function of one of the group-based communication system118or the application112. As shown, each of the interactive settings pane inputs is embodied as a block element, defined by a plurality of block element values that define one or more physical attributes of the block element, and is configured to display block data corresponding to a particular application setting and/or functionality. For example, the first block771comprises a first text element771A configured to display application settings data (e.g., the connected calendars associated with the application112). Conversely, block elements771B,771C, and771D each correspond to a respective functionality of the application112(e.g., "Add a Calendar," "Remove"). In various examples, one or more of the blocks defined by the interactive settings pane may be based at least in part on one or more group-based communication server default settings generated for each application of the plurality of applications implemented within the group-based communication system118. For example, as shown inFIG.7, block775is configured to display block data associated with an "Uninstall App" functionality. In such a circumstance, each application home interface generated by the group-based communication server110would be configured to display application settings data corresponding to the "Uninstall App" functionality. As described herein with respect toFIGS.3-7, the application home interface may comprise one or more application home interface pages.
In various examples wherein the application home interface comprises a plurality of application home interface pages, each of the plurality of pages may be configured for alternative display. FIG.8provides an example group-based communication interface providing an interactive dialog associated with a processing action in accordance with one exemplary embodiment. As used herein, the interactive dialog may include a user interface element configured to accept user input (e.g., as a selection of one or more of a plurality of available options, as freeform input, and/or the like). The interactive dialog may be presented as a pop-up or an overlaid display element displayed visually over another portion of a user interface, or the dialog may be presented as a portion of a larger user interface element. In some examples, an interactive dialog may comprise one or more interactive dialog inputs. As illustrated inFIG.8, interactive dialog800is embodied as a fillable form associated with a processing action. The group-based communication server110may be configured to generate interactive dialog800for rendering at a client device102responsive to receiving a selection of an executable processing action element associated with the processing action from the client device102. As shown, the interactive dialog is associated with the "Give Praise" processing action. In various examples, the interactive dialog800may comprise a processing action title element801, which may be configured so as to display the processing action title of the processing action. Further, in various examples, the processing action title element801may be further configured to display an image associated with the application (e.g., an application logo) associated with the processing action. An interactive dialog may comprise one or more dialogs based at least in part on processing action data, each corresponding to a respective processing action parameter associated with the processing action, wherein each of the one or more processing action parameters comprises information needed by the application system112-116for executing the particular processing action. In various examples, each of the one or more dialogs may comprise a fillable field, drop-down, checkbox, and/or the like usable by a user to input a processing action parameter. As described herein, each of the one or more processing action parameters associated with a processing action may be designated as either an optional parameter or a required parameter. Optional parameters are not required in order for an application system to execute the processing action associated therewith, but input corresponding thereto may facilitate additional and/or more particularized functionality of the associated processing action. The group-based communication server110may be configured so as to initialize a processing action only responsive to receiving information corresponding to each required parameter associated with the processing action. As shown, the exemplary interactive dialog800associated with the "Give Praise" processing action comprises a first dialog802for selecting the group-based communication channel in which the particular message input (i.e. the exemplary praise message) is to be posted.
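Relating to the required and optional parameter designations described above, the following is a minimal TypeScript sketch of the kind of check a server might perform before initializing a processing action. The names and structures used here are illustrative assumptions, not an actual interface of the group-based communication server.

```typescript
// Illustrative sketch only; names and structures are hypothetical.

interface ProcessingActionParameter {
  name: string;            // e.g., "channel_id", "message_text"
  required: boolean;       // required vs. optional parameter designation
}

// Values collected from the interactive dialog's inputs.
type DialogInput = Record<string, string | undefined>;

// The processing action is initialized only when every required parameter has
// a value; otherwise the missing parameters are reported so the dialog can
// prompt the user again.
function canInitializeProcessingAction(
  parameters: ProcessingActionParameter[],
  input: DialogInput
): { ok: boolean; missing: string[] } {
  const missing = parameters
    .filter((p) => p.required && !input[p.name])
    .map((p) => p.name);
  return { ok: missing.length === 0, missing };
}
```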
In various examples, where the first interactive dialog802identifies a group-based communication channel identifier as a processing action parameter of the "Give Praise" processing action, and where the channel identifier is designated as a required parameter, the "Give Praise" processing action should be understood to comprise a channel processing action, as described herein. In various examples, where the group-based communication channel identifier is designated as an optional parameter, the "Give Praise" processing action should be understood to comprise a global processing action, as described herein. Further, the interactive dialog800comprises a second dialog803and a third dialog804for selecting a user (e.g., a user identifier associated with a user) associated with the group-based communication system118to praise and for entering message input (i.e. the exemplary praise message) to be posted within the group-based communication channel associated with the channel identifier input at the first dialog802. In various examples, the data input at each of the first, second, and third dialogs802,803,804may be respectively generated based at least in part on user input received at a client device102and/or contextual data retrieved from a group-based communication repository120, as described herein. Responsive to detecting input data corresponding to each required processing action parameter associated with the processing action (e.g., at each of the corresponding dialogs within the interactive dialog800), the group-based communication server110may be configured so as to initialize the processing action responsive to receiving a selection of a processing action input confirmation element805. The group-based communication server110may be configured to generate an interactive dialog embodied as a fillable form partially filled with environmental contextual data at least partially in accordance with the systems and methods described in U.S. patent application Ser. No. 16/399,730, filed Apr. 30, 2019, the contents of which are incorporated herein by reference in their entirety.
Example Data Flows
FIGS.9A-9Dillustrate a lane diagram showing functionality of various components associated with an exemplary application home interface in accordance with various examples. As noted herein, application data is data associated with an application system112-116which defines the implementation and/or functionality of an application within a group-based communication system118. For example, application data may comprise processing action data, application informational data, application settings data, application home interface configuration data, application contextual data (e.g., application home interface contextual data), user engagement pane data, and/or the like. In some examples, application home interface configuration data may comprise one or more executable instructions configured to facilitate the generation of an application home interface and/or the display of developer-provided information therein. In some examples, application data may be generated by a developer associated with an application and the application system112-116, such that the group-based communication server110may receive the application data from a client device associated with the developer and/or the application system112-116, as reflected at Block901ofFIG.9A.
In various examples, the application received by the group-based communication server110from the application system112-116may comprise a request URL relating to the application system112-116. In various examples, the application data may comprise a plurality of request URLs relating to the application system112-116, each request URL of the plurality of URLs being associated with a respective processing action of the application system112-116. In various examples, each URL enables communication between the group-based communication server110and the application system112-116by identifying a location to which data (e.g., routing data and/or payload data included within a data packet, additional data provided from a client device102-106in response to an interactive dialog, and/or the like) may be provided by the group-based communication server110(e.g., from the group-based communication repository120) to the application system112-116. As reflected at Block902, the application data received from the application system112-116by the group-based communication server110may be associated with an application identifier comprising one or more items of data by which an application112-116may be uniquely identified. Once the application data is associated with a corresponding application identifier, the group-based communication server110transmits the application data to the group-based communication repository120for storage as indicated at Blocks903and904. In some examples, the group-based communication server may be configured to generate an application identifier element associated with the application based at least in part on the application data as shown at Block905. As shown at Block906, the group-based communication server110may then transmit the application identifier element to one or more client devices for rendering within a group-based communication interface displayed via a respective display device of at least one of the one or more client devices102-106. In some examples, the application identifier element may comprise an executable element (i.e. a selectable button) configured to be displayed within the group-based communication interface. As shown at Block907, a user selection of the application identifier element may be received at the client device102and transmitted to the group-based communication server110, as shown at Blocks908and909. In some examples, the selection of the application identifier element may represent a user request to generate an application home interface associated with the application associated with the selected application identifier element. Responsive to receiving of the selection of the application identifier element, the process proceeds to Blocks910and911, at which the group-based communication server110may retrieve, from the group-based communication repository120, the application data relevant to the application home interface (e.g., application home configuration data, application home contextual data, processing action data, user engagement pane data, and/or the like). In various examples, the application data retrieved from the group-based communication repository120by the group-based communication server110may comprise the same application data received by the server110from the application112-116at Block901. In various examples, the application data retrieved from the group-based communication repository120by the group-based communication server110may comprise application data that has been modified (i.e. 
updated) by the developer associated with the application112-116via an interactive developer interface, as described herein subsequent to the transmission of the application data received by the group-based communication server110at Block901. In such circumstance, the group-based communication server110may provide the modified application data associated with the application112-116to the group-based communication repository120to replace and/or supplement the preexisting application data, such that the application data retrieved from the group-based communication repository120by the group-based communication server110, as shown at Blocks910and911, may comprise the modified application data. Responsive to retrieving the application data relevant to the application home interface, the group-based communication server110may be configured to generate the application home interface associated with the application, as shown at Block912, based at least in part on the application data associated therewith. As shown at Blocks913and914, the application home interface may be transmitted for rendering within the group-based communication interface displayed at the client device102. As described herein, the generated application home interface may comprise an interactive virtual environment configured to render at least a portion of the application data within the group-based communication interface so as to facilitate user interaction with the application within the group-based communications system118. The application home interface generated by the group-based communication server110may be configured according to one or more examples described herein. For example, the application home interface may comprise one or more executable processing action elements each corresponding to a respective processing action of the application and configured to initiate the process of executing the respective processing action at an application112-116associated with the processing action. As described herein, the processing action associated with each of the executable processing action elements may comprise one or more steps for providing data to an application112-116with which the respective processing action is associated. Responsive to receiving a user selection of an executable processing action element rendered within the application home interface, the client device102may transmit the executable processing action element selection to the group-based communication server110, as shown at Blocks915and916. In some examples, the selection of executable processing action element may represent a user request to initialize the processing action associated therewith. Responsive to receipt of the selection of the executable processing action element, the process proceeds to Blocks917and918, at which the group-based communication server110may retrieve at least a portion of the processing action data associated with the selected processing action from the group-based communication repository120. For example, the processing action data may comprise data corresponding to each of a plurality of processing action characteristics (e.g., one or more processing action parameters, and/or the like) that define the processing action. As shown at Block919, the group-based communication server may generate an interactive dialog based at least in part on the processing action data. 
In various examples, the interactive dialog may comprise one or more dialog inputs configured to receive user input from the user corresponding to one or more processing action parameters associated with the processing action, as described herein. In various examples, the interactive dialog may comprise one or more elements similar to the exemplary interactive dialog shown inFIG.8. Further, in various examples, as shown at Blocks920and921, the group-based communication server110may be configured to retrieve environmental contextual data generated for the user (and/or client device102) from a cache memory storage area associated with the group-based communication repository120(and/or embodied as a memory storage area of the client device102). As described herein, environmental contextual data is indicative of the user's interaction with a group-based communication system118at and/or before the time the user selects the executable processing action element. The environmental contextual data may comprise, for example, the user identifier of the user associated with the client device102from which the selection of the executable processing action element was received, historical data indicative of prior activities of a plurality of users when presented with circumstances similar to current circumstances under which a processing action is requested, data indicative of the initialization of the processing action via the executable processing action element within the application home interface, and/or the like. In various examples, as shown inFIG.9Bat Block922, the group-based communication server110may be configured to prefill at least a portion of the interactive dialog associated with the processing action based at least in part on the environmental contextual data. In various examples, as described herein, at least a portion of the environmental contextual data retrieved by the group-based communication server110may be provided to the application system112with which the selected processing action is associated. As shown at Blocks923and924, the group-based communication server110may transmit the interactive dialog associated with the processing action to the client device102(e.g., in various examples, the group-based communication server110may transmit a processing action request to the client device102corresponding to the processing action data, rather than transmitting a generated interactive dialog, in which case the client device102may receive the processing action request and initialize the interactive dialog based on the processing action data). The client device102may display the interactive dialog associated with the processing action, so as to facilitate the receipt of user input corresponding to the one or more processing action parameters, as shown in Block925. The client device102may generate processing action execution data based at least in part on the user input received corresponding to each of the dialog inputs at the interactive dialog, which may be transmitted to the group-based communication server110, as shown at Blocks926and927. In various examples, the processing action execution data received from the client device102may comprise each of the processing action parameters associated with the processing action (e.g., each of the required parameters).
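As a way of illustrating the prefill operation described above with respect to Block922, the following TypeScript sketch fills dialog fields from environmental contextual data so that the user only has to supply values the system could not infer. The field names, the "channel_id" parameter, and the shape of the contextual data are assumptions made for illustration and are not drawn from an actual implementation.

```typescript
// Illustrative sketch only; all names are hypothetical.

interface DialogField {
  parameterName: string;      // processing action parameter the field maps to
  value?: string;             // prefilled or user-supplied value
}

interface EnvironmentalContextualData {
  userId: string;
  channelId?: string;                      // channel the user was viewing, if any
  recentValues: Record<string, string>;    // e.g., last-used values keyed by parameter name
}

// Prefill dialog fields from environmental contextual data; fields the user
// has already filled are left untouched.
function prefillDialog(
  fields: DialogField[],
  context: EnvironmentalContextualData
): DialogField[] {
  return fields.map((field) => {
    if (field.value) return field;
    if (field.parameterName === "channel_id" && context.channelId) {
      return { ...field, value: context.channelId };
    }
    const remembered = context.recentValues[field.parameterName];
    return remembered ? { ...field, value: remembered } : field;
  });
}
```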
As discussed herein, the processing action execution data comprising the one or more processing action parameters may be provided to the one or more dialog inputs of the interactive dialog based at least in part on either user input received at the interactive dialog displayed at the client device or environmental contextual data retrieved by the group-based communication server110from the group-based communication repository120(and/or the client device102). Responsive to receiving the processing action execution data, the group-based communication server110generates a processing action execution data packet comprising processing action routing data and payload data for the requested processing action, as indicated at Block928. As described herein, processing action routing data may comprise data identifying data usable by the application system112-116to identify the requested processing action, to identify the client device requesting the processing action, and/or identifying the message, channel, and/or interface on which the processing action is to be performed. The processing action routing data may be utilized by the group-based communication system118to appropriately route a data packet to an appropriate proxy endpoint to trigger an application system to perform a particular processing action. The proxy endpoint may include a data transfer interface (e.g., API) between unconnected computing systems via a network. In various examples, the proxy endpoint may be accessible over the network via a URL or other type of link. For example, a proxy endpoint may enable data transfer of a data packet (comprising routing data and/or payload data) from a group-based communication system118to an application system112-116associated with an application published and usable via the group-based communication system118. In various examples, the proxy endpoint is defined at least in part by a URL accessible to the application system, wherein the URL may be utilized to direct the application system to a particular dataset (e.g., one or more data packets). As discussed herein, data packets provided to the application system via the proxy endpoint may comprise data formatted to enable usage by the application system to perform a desired processing action. The proxy endpoint enables transfer of the data packet to the application system while maintaining the necessary formatting of the data packet to enable the application system to use the included data. Moreover, in some examples the proxy endpoint may enable real-time transmission of data to the application system (e.g., via push-based message transmission). In other examples, the proxy endpoint may be configured to enable the application system to pull data from the group-based communication system118(e.g., upon the occurrence of a trigger event acting to inform the application system of the presence of data that is ready for transmission). The processing action routing data may be further utilized by the application system112-116to identify the requested processing action to be performed and/or to identify any additional data that should be requested from the client device102-106(e.g., via one or more interactive dialogs presented via a group-based user interface). 
Moreover, the routing data may identify various characteristics of a message object (e.g., a message, a file, a plurality of messages (e.g., all messages within a communication channel), and/or the like), such as a timestamp indicating when a particular message object was shared via the group-based communication system, a sending-user identifier indicating a user (and/or client device) that initially shared the message object, a client token identifying the client device102-106requesting the processing action, and/or the like. The client token may include an identifier that uniquely identifies a particular client device102,104, or106. The client token may be static (e.g., such that a client device is permanently associated with a particular client token until an affirmative action is taken to change the associated client token) or dynamic (e.g., such that a client token is assigned to a particular client device for a short duration, such as the period of time associated with performing a particular task, the period of time associated with a unique connection session between the client device and a group-based communication system, and/or the like). In some examples, the client token may be encrypted, utilizing any of a variety of encryption methodologies for added security against unauthorized usage of the client token. The payload data may include one or more messages (e.g., message text, files attached to an exchanged message, a plurality of discrete exchanged messages, and/or the like). In some examples, the payload data may comprise processing action execution data generated in response to user input defining a configuration of one or more processing action parameters in order to execute the processing action. In some examples, the payload data may comprise environmental contextual data, and/or other data automatically selected for inclusion within the payload data for use by an application system in executing a processing action. In various examples, the group-based communication server110may configure the processing action execution data packet in accordance with one or more formatting and/or content requirements of the application system112-116, as indicated in the application data associated with the application112-116that is stored at the group-based communication repository120, For example, in various examples, the group-based communication server110may assemble routing data for the data packet to comprise (1) one or more verification tokens (e.g., a group-based communication server110verification token), (2) a group-identifier, (3) a channel identifier, (4) a user identifier (e.g., a client device102-106specific client token identifying the client device102-106that requests the particular processing action), (5) a processing action identifier (e.g., a processing action name, and/or other identifying string, as described herein), (6) an action type defining a processing action type, (7) a trigger defining an interactive dialog to be presented to the client device102-106in response to initialization of the processing action, (8) a response URL enabling the application system112-116to transmit a response (e.g., a confirmation response) back to the requesting client device102-106, (9) a timestamp indicating when the processing action is requested, (10) one or more processing action characteristics associated with the processing action (e.g., application identifier), and/or the like. 
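To illustrate the structure of the processing action execution data packet described above, the following TypeScript sketch models routing data fields mirroring categories (1) through (10), payload data, and delivery of the packet to a proxy endpoint over a URL. The interface names, field names, and the use of an HTTP POST are assumptions made for illustration; they are not an actual schema or transport of the group-based communication system or of any application system.

```typescript
// Illustrative sketch only; field names are hypothetical and merely mirror the
// categories of routing data enumerated above.

interface ProcessingActionRoutingData {
  verificationToken: string;                       // (1) server verification token
  groupId: string;                                 // (2) group identifier
  channelId?: string;                              // (3) channel identifier
  clientToken: string;                             // (4) token identifying the requesting client device
  processingActionId: string;                      // (5) processing action name or identifying string
  actionType: "global" | "channel" | "message";    // (6) processing action type
  trigger?: string;                                // (7) interactive dialog presented on initialization
  responseUrl: string;                             // (8) URL for the application system's response
  timestamp: string;                               // (9) when the processing action was requested
  applicationId: string;                           // (10) processing action characteristic
}

interface ProcessingActionPayloadData {
  executionData?: Record<string, unknown>;   // parameter values from the interactive dialog
  contextualData?: Record<string, unknown>;  // environmental contextual data
  messages?: unknown[];                       // message objects, attached files, etc.
}

interface ProcessingActionExecutionDataPacket {
  routing: ProcessingActionRoutingData;
  payload: ProcessingActionPayloadData;
}

// A proxy endpoint is reachable via a URL; providing the packet to it triggers
// the application system to execute the requested processing action.
async function sendToProxyEndpoint(
  proxyEndpointUrl: string,
  packet: ProcessingActionExecutionDataPacket
): Promise<Response> {
  return fetch(proxyEndpointUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(packet),
  });
}
```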
In various examples, the payload data may comprise processing action execution data (e.g., environmental contextual data), message data, and/or other data selected for inclusion within the payload data for use by the application system112-116in executing the processing action. Specifically, with respect to the exemplary process shown at Block928, the processing action routing data is generated based at least in part on the processing action data and identifies (1) the processing action to be performed by the application system112-116and (2) a client token identifying the client device102that requested the execution of the processing action (i.e. the client device that received the selection of the executable processing action element associated with the processing action). Responsive to generating the processing action execution data packet, the group-based communication server110may provide the data packet via a proxy endpoint to the application system112-116identified with the routing data as shown at Blocks929and930. In various examples a proxy endpoint may provide an API for passing the processing action execution data packet from the group-based communication server110to the application system112-116, thereby enabling the application system112-116to consume the routing data and/or the payload data within the data packet while executing the processing action. The data included within the data packet is passed to the application system112-116, for example, using the API to provide the data to the application system112-116in the appropriate format to execute the requested action. Providing the processing action execution data packet to the application system112-116causes the application system112-116to execute the requested processing action as identified in the processing action execution routing data based at least in part on the payload data (e.g., the processing action execution data), as shown at Block931. Once the application system112-116completes execution of the requested processing action, the application system112-116provides a confirmation response to the group-based communication server110, as indicated at Blocks932and933, and the group-based communication server110provides a confirmation message to the client device102, as indicated at Blocks934and935. In some examples, the confirmation message may be displayed via a dialog rendered within the group-based communication interface displayed at the requesting client device102or, specifically, within the application home interface displayed at the client device102. As shown at Blocks936and937, the client device102may receive a selection of an application home interface page (e.g., an element rendered within the application home interface associated with the selected application home interface page) and may transmit the selection to the group-based communication server110. Responsive to receipt of the selection of the application home interface page, the group-based communication server110may be configured to transmit at least a portion of the application data associated with the selected application home interface page to the client device102for rendering within the application home interface, as shown at Blocks938and939. The client device102may display the application data associated with the application home interface page within the application home interface, as shown inFIG.9Cat Block940. 
In various examples, the application home interface may comprise a plurality of application home interface pages, each configured for alternative display (i.e. the group-based communication server110may display application data associated with a single application home interface page of a plurality of application home interface pages at a given time). In various examples, the one or more application home interface pages may comprise one or more of an application home interface home page, an application home interface message page, an application home interface about page, an application home interface settings page, and/or the like. As shown at Block941, a client device102may receive, as user input to the application home interface rendered at a display device of the client device102, application message data associated with the application home interface and/or the application identifier associated with the application112-116. In various examples, the application message data may be received via user input at an application messaging pane within the application home interface (e.g., at the application home interface message page). The client device102may transmit the application message data associated with the application identifier and/or the application home interface to the group-based communication server110, as shown at Block942. The group-based communication server110may render the application message data for display within the application home interface associated therewith and/or associated with the application identifier associated therewith, as shown at Block943. For example, the application message data may comprise message content and/or various metadata as described herein. In various examples, at least a portion of the application message data may be rendered within the application messaging pane. As shown at Blocks944and945, the application system112-116may generate and transmit message data to the group-based communication server110. As shown at Block946, responsive to receiving the message data from the application system112-116, the group-based communication server110may generate an unread message indicator associated with the application message data. In various examples, an unread message indicator may comprise data associated with an application home interface, application message data, a user identifier, and/or a client device identifier that indicates that a message has been received from an application system112-116via an application home interface in order to call a user's attention to the particular message content. In various examples, the unread message indicator may comprise a textual or graphical statement generated as a representation that a message has been received from an application system, but that the message has not yet been engaged by a user (i.e. the message has not been rendered for display at a client device). For example, a notification indicator may be a data structure comprising a flag, or a record of a data structure whereby a logic "1" indicates that a message received from an application system has been rendered for display within a group-based communication interface and a logic "0" indicates that the message has been received from the application system, but not yet rendered for display within the group-based communication interface.
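The following TypeScript sketch illustrates one way the unread message indicator and its flag semantics described above might be represented. The type and function names are hypothetical and chosen only for illustration.

```typescript
// Illustrative sketch only; names are hypothetical.

// An unread message indicator associates a flag with an application home
// interface, application message data, and a user identifier.
interface UnreadMessageIndicator {
  userId: string;
  applicationHomeInterfaceId: string;
  messageId: string;
  rendered: 0 | 1;   // 1: message has been rendered for display; 0: received but not yet rendered
}

// Mark the indicator once the message has been rendered (i.e., the user has
// engaged with the application message data), after which the indicator can
// be disassociated from the application home interface.
function markMessageRendered(indicator: UnreadMessageIndicator): UnreadMessageIndicator {
  return { ...indicator, rendered: 1 };
}

// The indicator should be displayed only while the message remains unread.
function shouldDisplayIndicator(indicator: UnreadMessageIndicator): boolean {
  return indicator.rendered === 0;
}
```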
As shown at Block947, the group-based communication server110may be configured to render the application message data received from the application system112-116within the application home interface (e.g., within the application messaging pane). As described herein, in various examples, the group-based communication server110may operate according to various processes such that the application home interface (e.g., the application messaging pane) may function as a private group-based communication channel wherein the client device102and the application system112-116(e.g., one or more client devices associated with the application system112-116) are associated with user identifiers with access rights to the group-based communication channel (i.e. the application home interface). In various examples, the unread message indicator may be rendered within the group-based communication interface, for example, proximate the application identifier element, within the application home interface (e.g., proximate the rendered application message data, proximate an element associated with the application home interface message page) and/or the like. Responsive to rendering the unread message indicator within the group-based communication interface, the group-based communication server may detect user engagement with the application message data rendered within the application home interface (e.g., at the application messaging pane), and may selectively disassociate the unread message indicator with the application message data and/or the application home interface, as shown at Blocks948and949. As shown at Block950, the group-based communication server110may retrieve application data associated with the application112from the group-based communication repository120and transmit at least a portion of the application data comprising the application settings data to the client device102for rendering. As described herein, application settings data may comprise data associated with a user identifier, an application home interface, and an application which defines at least a portion of a settings framework associated with the application system112-116. In various examples, the application settings data may be generated at least in part based on user input by a developer associated with the application. For example, the application settings data may define which application settings are available to be configured by a user within the group-based communication system118. The application settings data may be selectable and/or configurable by a developer of the application at the time of application integration into a group-based communication system or any time thereafter. For example, in one embodiment, responsive to receiving a selection of an element associated with an application home interface settings page, the group-based communication server110may transmit the application settings data to the client device102for rendering. As shown in Block951, the client device102may display the application settings data within the application home interface (e.g., at an interactive settings pane). In various examples, as described herein, the group-based communication server110may generate at least a portion of the application home interface (e.g., the interactive settings pane) and display, at a client device102, the application settings data associated with the application112-116using one or more block kits, as described herein with respect toFIG.7.
In such a circumstance, the group-based communication server110may operate to display the application settings data within the application home interface at the client device102according to various processes similar to those described with respect to Blocks958-969shown inFIG.9D(e.g., analogous operations for displaying application settings data at a block-kit-configured interactive settings pane and displaying user engagement data at a block-kit-configured user engagement pane). In various examples, the client device102may receive user input at the application home interface (e.g., the interactive settings pane) corresponding to one or more interactive settings pane inputs that are rendered within the application home interface. Each of the one or more interactive settings pane inputs may correspond to a respective application setting associated with the application, such that the user input may represent updated application settings data (i.e. application settings preference data) with respect to one or more application settings. In various examples, the updated application settings data (i.e. application settings preference data) may be associated with the application and the user identifier associated with the client device102. As shown at Blocks952and953, the updated application settings data may be transmitted to the group-based communication server110, where it may be rendered for display within the application home interface (e.g., the interactive settings pane), as shown at Block954. As shown in Blocks955and956, the group-based communication server110may transmit the updated application settings data associated with the user identifier, the application home interface, and/or the application to the group-based communication repository120for storage. In various examples, the group-based communication server110may operate to provide the updated application settings data (i.e. application settings preference data) received at the client device102via the application home interface to the application system112-116by providing an application settings data packet to the application system112. In such an exemplary embodiment, the application settings data packet may comprise application settings routing data and payload data, wherein the application settings routing data identifies configurable application functionalities corresponding to one or more settings inputs defined by application settings data stored by the application system, and the payload data comprises the updated application settings data received at the client device. The application settings routing data may be utilized by the group-based communication system to appropriately route a data packet to an appropriate proxy endpoint to trigger an application system to store the corresponding payload data. The application settings routing data may be further utilized by the application system to identify the requested application setting to be configured and/or to identify any additional data that should be requested from the client device (e.g., via one or more interactive dialogs presented via a group-based user interface). In some examples, application settings routing data may be based at least in part on application settings data received from a developer client device associated with an application. As shown at Block957inFIG.9D, the group-based communication server110may detect a notification trigger event.
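To make the application settings data packet described above concrete, the following sketch pairs settings routing data with a payload of updated settings values. The names (ApplicationSettingsPacket, build_settings_packet) and the routing-map shape are hypothetical assumptions for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ApplicationSettingsPacket:
    """Illustrative settings packet: routing data plus the updated settings as payload."""
    application_id: str
    # Routing data: which configurable application functionalities the settings inputs map to.
    settings_routing: Dict[str, str]   # e.g. {"notify_frequency": "settings/notifications"}
    # Payload data: the updated application settings data received at the client device.
    payload: Dict[str, Any]            # e.g. {"notify_frequency": "daily"}
    user_id: str

def build_settings_packet(user_id: str, application_id: str,
                          updated_settings: Dict[str, Any],
                          routing_map: Dict[str, str]) -> ApplicationSettingsPacket:
    """Hypothetical helper pairing user-provided settings values with their routing entries."""
    routing = {key: routing_map[key] for key in updated_settings if key in routing_map}
    return ApplicationSettingsPacket(application_id, routing, updated_settings, user_id)
```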
As described herein, a notification trigger event may comprise an action, incident, collection of steps, or processes executed by one or more of the group-based communication server110, the client device102, and the application system112-116that initializes an updating and/or refreshing of a user engagement interface (e.g., user engagement pane data, block data) rendered within an application home interface. A notification triggering event may be detectable by the group-based communication server110and may be associated with one or more client devices102. In various examples, triggering events may be pre-defined (e.g., button clicks, slash commands, etc.) or may be learned by the group-based communication system over time using machine learning models or other similar techniques. As described herein, the notification triggering event may be based at least in part on user engagement with the application home interface at the client device102. For example, the block data displayed within the one or more user engagement blocks may be updated and/or refreshed at each distinct instance in which the user engagement interface is rendered within the application home interface. Thus, a notification triggering event may comprise the receipt of a selection of an application identifier element from a client device102such that the application home interface comprising the user engagement interface is to be rendered for display. Similarly, a selection of an application home interface page configured to display the user engagement pane may comprise a notification triggering event. Responsive to detecting a notification triggering event, the group-based communication server110may retrieve the user engagement pane data associated with the application home interface and the application from the group-based communication repository120, as shown with respect to Blocks958and959. Based at least in part on the user engagement pane data (e.g., the customizable block configuration as defined by the developer, the one or more user engagement pane instructions associated with the one or more user engagement blocks), the group-based communication server110may generate a customizable block request and transmit the request to the application system112-116, as shown at Blocks960and961. In various examples, as described herein, the customizable block request may comprise one or more tokens, identifiers, or other authentication credentials that may be used to facilitate the communication between the group-based communication server110and the application system112-116. In various examples, the customizable block request may comprise one or more block types to identify what type of block is to be rendered to the client device102associated with the group-based communication interface. Further, a customizable block request may include one or more user engagement pane instructions and/or the inputs corresponding to various block data to be displayed at the user engagement interface. Responsive to receiving the customizable block request, the application system112-116may generate block data corresponding to the user engagement pane data. In various examples, the block data may be associated with the user identifier, the application identifier, a group-based communication channel identifier, and the application home interface, and may be configured for display within the one or more user engagement blocks according to the customizable block configuration, as shown at Block962.
As shown at Block963, responsive to determining that at least a portion of the block data to be displayed at the one or more user engagement blocks has changed compared to the block data generated at the last connection session, the application system may be configured to generate a notification signal associated with at least one of the one or more user engagement blocks. In various examples, the notification signal may include content that is "pushed" from an application system112-116to a user interface of a client device102-106. For example, a notification signal (i.e., push notification) can be received from an application system112-116by a client device102-106in order to call a user's attention to particular content. By way of further example, the notification signal may be rendered, by the client device102-106, in a user interface within a display of the client device102-106to call the user's attention to particular content. The notification signal may also be in the form of a sound or vibration of a mobile device (with or without rendering within the interface). Examples of notification signals include messages, badges, icons, sounds, vibrations, custom text alerts, and the like. In various examples, the notification signal generated by the application system112-116may be associated with at least a part of the application data associated with the application system112-116. The notification signal may correspond at least in part to the one or more user engagement pane instructions associated with the at least one of the user engagement blocks. The application system112-116may transmit the block data and the notification signal generated in response to the customizable block request to the group-based communication server110, as shown at Blocks964and965. Responsive to receiving the block data from the application system112-116, the group-based communication server110may generate a user engagement interface based at least in part on the user engagement pane data retrieved by the group-based communication server110at Block959, as shown at Block966. In various examples, as described herein, the user engagement interface may comprise the block data (e.g., the notification signal) received from the application system112-116. For example, the group-based communication server110may render within the user engagement pane (e.g., within the user engagement interface) the notification signal associated with at least a portion of the application data and the at least one of the user engagement blocks. In various examples, as described herein, the user engagement interface may be defined by the customizable block configuration comprising the one or more user engagement blocks as well as the block data and the one or more block elements defined therein. The customizable block configuration may be based at least in part on user engagement pane data (i.e. input generated from a developer client device associated with the developer user identifier). As shown at Blocks967,968, and969, the group-based communication server110may be configured to transmit the user engagement interface to the client device102for rendering within a user engagement pane displayed within the application home interface. In various examples, the client device102may be associated with the notification triggering event. As discussed herein, in various examples, the block data displayed within the one or more user engagement blocks may be customizable by a client device associated with the application home interface.
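The change check described at Block963 can be illustrated with a minimal sketch: block data from the last connection session is compared against the newly generated block data, and a notification signal is produced only for blocks whose data differs. The function name and the dictionary-shaped signal are assumptions made for illustration.

```python
from typing import Any, Dict, List, Optional

def generate_notification_signal(previous_blocks: Dict[str, Any],
                                 current_blocks: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Hypothetical change detection: return a notification signal describing the
    user engagement blocks whose data changed since the last connection session."""
    changed: List[str] = [
        block_id for block_id, data in current_blocks.items()
        if previous_blocks.get(block_id) != data
    ]
    if not changed:
        return None  # nothing changed, no signal is generated
    return {
        "type": "notification_signal",  # could surface as a badge, sound, or text alert
        "changed_block_ids": changed,
        "preview": {block_id: current_blocks[block_id] for block_id in changed},
    }
```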
As shown at Block970, the client device may receive block data associated with the user identifier and the application identifier as user input at the user engagement interface displayed at the client device102and transmit the block data to the group-based communication server110. In various examples, the block data may correspond to at least a portion of the block data displayed within one of the one or more user engagement blocks. The block data transmitted from the client device102to the group-based communication server110may comprise updated user engagement pane data. As shown at Blocks971,972, and973, the group-based communication server110may be configured to receive the updated block data from the client device102and subsequently transmit the updated block data to the group-based communication repository for storage. FIG.10illustrates an exemplary flow diagram for determining an initial page of the one or more application home interface pages for display within an application home interface according to one embodiment of the present disclosure. The method1000begins at Block1001with detecting a trigger event associated with an application via a group-based communication interface. In various examples, a trigger event may comprise an action, incident, collection of steps, or processes executed within the group-based communication system118and/or the application system112-116which may cause the group-based communication server110to generate an application home interface associated with the application112-116and the user identifier. For example, a trigger event may be the receipt by a client device102of a selection of an application identifier element associated with the application, as described herein. Responsive to detecting the trigger event associated with the application via the group-based communication interface, method1000continues, at Block1002, with parsing application home interface contextual data associated with the application home interface to determine an initial page of the one or more application home interface pages for display. In various examples, application home interface contextual data may comprise data indicative of user engagement with an application home interface within a group-based communication interface. In some examples, application home interface contextual data may be associated with an application identifier, an application home interface identifier, a client device identifier, and/or a user identifier. As discussed in greater detail herein, such application home interface contextual data may refer to usage data (e.g., historical data, usage rate data, and/or the like) associated with one or more user identifiers. For example, application home interface contextual data may comprise a previously visited indicator, a previously unvisited indicator, unread message indicator, and/or an abandoned page indicator associated with an application home interface identifier and a user identifier. Responsive to parsing the application home interface contextual data, method1000continues, at Block1003, with determining whether a previously unvisited indicator associated with the application home interface and the user identifier (e.g., the client device identifier) was detected. 
In various examples, a previously unvisited indicator may comprise one or more items of data associated with an application home interface and one or both of a user identifier and a client device identifier that indicates that the application home interface has not previously been generated and rendered for display at the client device102associated with the user identifier. In various examples, the previously unvisited indicator may comprise a textual or graphical statement that may be generated and associated with an application home interface as a representation that a client device associated with the user identifier has not previously displayed the application home interface. For example, a previously unvisited indicator may be a data structure comprising a flag, or a record of a data structure whereby a logic “1” indicates that an application home interface has not previously been displayed at a client device associated with the user identifier and a logic “0” indicates that the application home interface has previously been generated and accessed by the user associated with the user identifier via a client device. In various examples, a previously unvisited indicator may be generated by a group-based communication server110upon the initial receipt of application data associated with an application (e.g., application home interface configuration data). In various exemplary circumstances wherein a previously unvisited indicator associated with the application home interface and the user identifier (e.g., the client device identifier) was detected, the method1000continues, at Block1004, with associating an initial page identifier with a default page identifier, such that the initial page of the application home interface comprises the application home interface page associated with the default page identifier. For example, an application home interface home page of the one or more application home interface pages may be associated with the default page identifier. Responsive to associating an initial page identifier with a default page identifier, method1000continues, at Block1005, with disassociating the previously unvisited indicator with the application home interface and the user identifier. Disassociating the previously unvisited indicator with the application home interface and the user identifier may represent that the user has previously visited the application home interface associated with the application. In various examples, disassociating the previously unvisited indicator with the application home interface and the user identifier may further comprise associating the application home interface and the user identifier with a previously visited identifier. In various examples, the group-based communication server110may disassociate the previously unvisited indicator with the application home interface and the user identifier responsive to rendering the initial page of the application home interface (i.e. the application home interface page associated with the default page identifier) for display at the client device102. Referring back to Block1003, responsive to parsing the application home interface contextual data in various exemplary circumstances wherein a previously unvisited indicator associated with the application home interface and the user identifier was not detected, the method1000continues, at Block1006, with determining whether an unread message indicator associated with the application home interface and the user identifier was detected. 
As described herein in greater detail, an unread message indicator may comprise data associated with an application home interface, application message data, the user identifier, and/or a client device identifier that indicates that a message has been received from an application system112-116. In various examples, an unread message indicator may be generated by a group-based communication server110upon the receipt of application message data from the application112-116. In various exemplary circumstances wherein an unread message indicator associated with the application home interface and the user identifier was detected, and where a previously unvisited indicator was not detected, the method1000continues, at Block1007, with associating an initial page identifier with an application home interface message page so as to select the application home interface message page as the initial page of the application home interface. Responsive to associating an initial page identifier with the application home interface message page, method1000continues, at Block1008, with disassociating the unread message indicator with the application home interface and the user identifier. Disassociating the unread message indicator with the application home interface and the user identifier may represent that the group-based communication server110has rendered, or will render at a time at least substantially soon after the disassociation operation, the application message data within an application home interface at the client device102. In various examples, the group-based communication server110may disassociate the unread message indicator with the application home interface and the user identifier upon rendering the initial page of the application home interface (i.e. the application home interface message page) for display at the client device102. Referring back to Block1006, responsive to parsing the application home interface contextual data, in various exemplary circumstances wherein an unread message indicator associated with the application home interface and the user identifier was not detected, and where a previously unvisited indicator was not detected, the method1000continues, at Block1009, with associating an initial page identifier with an abandoned page indicator, such that the initial page of the application home interface comprises the application home interface page associated with the abandoned page indicator. For example, an application home interface home page of the one or more application home interface pages may be associated with the abandoned page indicator. In various examples, an abandoned page indicator may comprise one or more items of data associated with an application home interface, one or both of a user identifier and a client device identifier, and an application home interface page identifier that indicates that the application home interface page with which it is associated was the application home interface page being displayed at the end of the previous application home interface session (i.e. a subset of a connection session that is defined by the duration that the application home interface is rendered at the client device102). In various examples, an application home interface session may be terminated by the group-based communication server110responsive to detecting user engagement at the client device102with an element of the group-based communication interface external to the application home interface.
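The ordering of the page-selection checks described above at Blocks1003-1009 can be summarized as a short sketch: a previously unvisited indicator takes precedence and yields the default page, an unread message indicator yields the message page, and otherwise the page carrying the abandoned page indicator is selected. The function and page identifiers below are hypothetical placeholders, not terms defined by the disclosure.

```python
from typing import Optional

def select_initial_page(previously_unvisited: bool,
                        unread_message: bool,
                        abandoned_page_id: Optional[str],
                        default_page_id: str = "home",
                        message_page_id: str = "messages") -> str:
    """Illustrative ordering of the initial-page determination described above."""
    if previously_unvisited:
        # First visit: use the default page; the indicator is disassociated upstream.
        return default_page_id
    if unread_message:
        # Unread application message: surface the message page; indicator cleared upstream.
        return message_page_id
    # Otherwise resume the page displayed at the end of the previous application home
    # interface session, falling back to the default page if none was recorded.
    return abandoned_page_id or default_page_id
```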
In various examples, the group-based communication server110may associate the abandoned page indicator with the application home interface page identifier of the application home interface page being rendered at the client device102responsive to detecting the user engagement, as described above. In various examples, responsive to associating an application home interface home page of the one or more application home interface pages with the abandoned page indicator, the group-based communication server110may be configured to parse the application home interface contextual data and disassociate each of the other application home interface pages from an abandoned page indicator that may be determined to be associated therewith. Responsive to the selection of the initial page of the application home interface, as described herein, the method1000continues, at Block1010, with transmitting the application home interface associated with the initial page identifier to the client device102for rendering within the group-based communication interface, such that the initial page (i.e. the application home interface page associated with the initial page identifier) is initially displayed at the client device102within the application home interface. Providing an Interactive Developer Interface of a Group-Based Communication System As noted above, systems and methods for providing an interactive developer interface of a group-based communication system according to various examples are discussed herein. The interactive developer interface provided by the group-based communication server110may comprise elements renderable to display an area by which a developer associated with an application112-116may input data to implement and/or configure various aspects of an application (e.g., a processing action, an application home interface) within a group-based communication system118. An exemplary interactive developer interface may be rendered within a group-based communication interface at a client device107associated with the developer. The interactive developer interface may be configured to accept user input from the developer client device (e.g., as a selection of one or more of a plurality of available settings options, as freeform input, and/or the like), the user input defining, at least in part, the application data (e.g., the processing action data, application home interface configuration data, and/or the like). As described herein, the interactive developer interface transmitted to the developer client device107may comprise a plurality of elements and/or fillable forms configured to facilitate the receipt of a plurality of application data associated with a plurality of applications, wherein each of the application data is configured in a consistent manner such that the group-based communication server may operably and efficiently implement various functionalities of the respective applications throughout the group-based communication system118. The systems and methods described herein enable the group-based communication server110to present and/or execute application data that is customizable by a developer associated therewith, in a manner configured by the group-based communication server110that is consistent throughout the group-based communication system118with respect to each of the plurality of applications. FIG.11illustrates a lane diagram showing functionality of various components associated with an exemplary interactive developer interface in accordance with various examples.
As shown at Blocks1101and1102, in various examples, an application implementation request may be transmitted from the developer client device107and received by the group-based communication server110. In various examples, the application implementation request may comprise a collection of data transmitted by a developer client device107to the group-based communication server110as a result of a developer associated with an application indicating a desire to implement the application within the group-based communication system118. An application implementation request may be associated with a user identifier associated with the developer and/or a client device associated therewith, and an application identifier. For example, an application implementation request may be transmitted from the client device107responsive to receiving a selection of an element associated with the interactive developer interface at the client device107via the group-based communication interface. Responsive to receiving the application implementation request from the client device107, as shown at Blocks1103and1104, the group-based communication server110may transmit an interactive developer interface to the client device107for rendering at a display device associated therewith. As described herein, the interactive developer interface transmitted to the client device in response to receiving an application implementation request may be embodied as a universal template with one or more pre-defined input parameters (i.e. input fields corresponding to a functionality of the application) configured according to one or more executable instructions defined by the group-based communication server110. For example, the interactive developer interface may comprise a plurality of input elements, at least two of which are configured to receive user input from the developer client device107corresponding to processing action data and user engagement pane data, as described herein. As shown at Block1105, the interactive developer interface may be displayed at the client device107within the group-based communication interface. As shown at Blocks1106and1107, the client device107may receive a selection of a processing action creation element rendered within the interactive developer interface at the client device107and may transmit the selection to the group-based communication server110. For example, the processing action creation element may be an interactive "Create New Action" element that may be selected to initiate the process for providing relevant data to the group-based communication server110. Responsive to receiving the selection of the processing action creation element, the group-based communication server110may initiate said process by generating and transferring to the client device107a processing action creation interface, as shown at Blocks1108and1109. As described herein, the processing action creation interface may comprise a secondary interactive developer interface configured to receive developer user input regarding the functionality of a processing action to be made available to users of the group-based communication system118. In various examples, as shown at Block1110, the processing action creation interface may be rendered for display at the client device107within the interactive developer interface.
Further, in various examples, the processing action creation interface may be an interactive dialog comprising one or more input dialogs configured to receive user input from the developer that may define one or more functionalities and/or characteristics of the processing action (i.e. processing action data). As shown at Block1111, the client device107may receive processing action data defined by the developer user input at the processing action creation interface. In various examples, each input dialog may correspond to a respective processing action characteristic, such that the processing action data is defined based at least in part on the user input received from the client device107at each of the input dialogs. For example, the processing action creation interface may receive user input at the client device107via the one or more input dialogs that may define one or more processing action parameters (e.g., what the one or more parameters are and whether each one is a required parameter or an optional parameter), a processing action title, a processing action description, a processing action type, and/or the like. In various examples, the group-based communication server110may generate one or more additional and/or different interactive dialogs depending on the processing action type designated by the developer user input, each of the interactive dialogs generated being configured to further curate the input dialogs contained therein based on one or more requirements of the particular processing action type. As shown at Block1112, the client device107may receive application data defined by the developer user input at the interactive developer interface (i.e. at a portion of the interactive developer interface not defined by the processing action creation interface). In various examples, the application data received by the client device107may comprise application home interface configuration data. In various examples, application home interface configuration data may comprise one or more executable instructions configured to facilitate the generation of an application home interface and/or the display of developer-provided information (e.g., application data) therein. In some examples, application home interface configuration data may comprise application informational data, application settings data, application home interface welcome pane data, and/or the like, as described herein. In various examples, the application home interface configuration data may comprise an executable processing action element priority order, which may be configured based on user input from the developer client device107, and which defines the organization of each of the one or more executable processing action elements rendered within the application home interface relative to the other executable processing action elements. In various examples, the application home interface configuration data may comprise various data corresponding to one or more application home interface pages, such as, for example, an indicator generated based on a determination (e.g., by either the group-based communication server110or the developer) that a particular application home interface page is not applicable based at least in part on one or more functionalities of the application, and thus that the application home interface page should not be displayed within the interface. Further, as discussed above, the client device107may receive application data comprising user engagement pane data.
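The processing action data collected via the creation interface at Block1111 (parameters with a required/optional designation, a title, a description, a type, and a request URL) can be sketched as a simple record. The class names below are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingActionParameter:
    """Illustrative parameter descriptor captured via an input dialog."""
    name: str
    required: bool = False  # required vs. optional parameter

@dataclass
class ProcessingActionData:
    """Illustrative shape of the processing action data a developer supplies."""
    title: str
    description: str
    action_type: str   # e.g. a message-type processing action
    request_url: str   # location to which the execution data packet is delivered
    parameters: List[ProcessingActionParameter] = field(default_factory=list)
```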
User engagement pane data, as described herein, may define a customizable block configuration so as to reflect the developer's desired configuration of the one or more user engagement blocks, and the block elements displayed therein, within the user engagement interface. In various examples, at least a portion of the user engagement pane data may comprise block data associated with one or more user engagement blocks configured to reflect execution of one or more user engagement pane instructions corresponding to one or more functionalities of an application. As shown at Blocks1113and1114, the client device107may transmit the application data received via the interactive developer interface to the group-based communication server110. The application data received by the group-based communication server110may comprise processing action data and user engagement pane data. Further, in various examples, the application data received by the group-based communication server110may comprise application home interface configuration data. The group-based communication server may selectively associate the application data received from the client device107with an application identifier, as shown at Block1115. In various examples, the group-based communication server110may associate the application data with the application identifier associated with the client device107and/or the user identifier associated with the developer. As shown at Block1116, responsive to receiving processing action data associated with a processing action from a developer client device, the group-based communication server110may verify the processing action type associated with the processing action identifier so as to ensure that the particular requirements associated with the processing action type selected by the developer are satisfied by the one or more processing action parameters (e.g., required parameters) associated with the processing action. For example, where a developer selects at the interactive developer interface that she would like to create a message processing action, the group-based communication server may be configured to verify that at least one of the one or more processing action parameters associated with the message processing action is a "message ID." As shown at Block1117, the group-based communication server may generate a required parameter indicator associated with a processing action (e.g., a processing action parameter) based at least in part on the processing action data received from the developer client device107. The group-based communication server110may prevent a processing action from being initialized and/or transmitted to an application system112-116for execution until each of the required parameters comprises a corresponding input value. As shown in Blocks1118and1119, the group-based communication server110may transmit the application data associated with the application (e.g., the application identifier) to the group-based communication repository120for storage. FIG.12illustrates a wireframe1200of an exemplary interactive developer interface presented to a developer associated with an application112-116(e.g., via a developer client device107associated with a user identifier associated with the developer and the application112-116) enabling the usage of one or more particular processing actions and the generation of an application home interface within a group-based communication system118.
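The type verification and required-parameter checks described above at Blocks1116 and1117 can be illustrated with a minimal sketch. Only the "message ID" requirement for a message-type action is taken from the description; the mapping, function names, and everything else are hypothetical assumptions.

```python
from typing import Dict, List, Set

# Hypothetical per-type requirements; only "message ID" is named in the description above.
REQUIRED_PARAMETERS_BY_TYPE: Dict[str, Set[str]] = {"message": {"message ID"}}

def verify_processing_action_type(action_type: str, declared_parameters: List[str]) -> List[str]:
    """Return any parameters the declared type requires but the developer did not provide."""
    needed = REQUIRED_PARAMETERS_BY_TYPE.get(action_type, set())
    return sorted(needed - set(declared_parameters))

def missing_required_values(required_parameters: List[str],
                            provided_values: Dict[str, str]) -> List[str]:
    """Illustrative required-parameter indicator check: execution is withheld until every
    required parameter has a corresponding input value."""
    return [name for name in required_parameters if not provided_values.get(name)]
```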
As shown inFIG.12, the interactive developer interface is configured to receive developer input specifying a request URL relating to the application system112-116for a particular processing action. The URL identifies the location to which any data (e.g., routing data and/or payload data included within a data packet, additional data provided from a client device102-106in response to an interactive dialog, and/or the like) is provided to the application system112-116to enable the application system112-116to identify the requested processing action and to execute the requested processing action. Moreover, the interactive developer interface is additionally configured to receive user input (e.g., via a client device107) initiating a process for making a particular processing action available to users of the group-based communication system118, as discussed herein. Specifically, as shown in the example wireframe1200ofFIG.12, the user interface includes an interactive "Create New Action" interface element that may be selected to initiate the process for providing relevant data to the group-based communication system118, for example, via the exemplary processing action creation interfaces illustrated as wireframes1300A and1300B inFIGS.13A and13B, respectively. Moreover, as shown inFIG.12, the group-based communication system118may be configured to present within the interactive developer interface a list of all of the processing actions associated with the application; the list may be organized by one or more processing action characteristics (e.g., processing action name, processing action type, processing action description, and/or the like). In various examples, the interactive developer interface, as described herein, may be accessible to a developer and/or one or more user profiles associated with the application, at any point subsequent to the implementation of the application in the group-based communication system118. As mentioned above in reference toFIG.12, the exemplary wireframes1300A and1300B shown inFIGS.13A and13B, respectively, each provide at least a portion of a processing action creation interface embodied as a secondary interactive developer interface configured to receive additional data regarding the functionality of a processing action to be made available to users of the group-based communication system118(e.g., client devices102-106associated with a particular group). As discussed herein, the processing action characteristics associated with a processing action of some examples comprise a processing action name and description to be presented to client devices102-106(e.g., via appropriate user interfaces), an icon or other image to be associated with the processing action, and a callback ID that may be included with processing action data packets to identify the relevant processing action to be utilized with the data included in the data packet. In some examples, the executable portions of the processing action are stored locally at the relevant application system, such that the group-based communication system118provides relevant data in an appropriate format (e.g., via an API providing data via the URL specified during setup of the processing action) to the application system, and provides various interactive dialogs and/or other messages relevant to the processing action to a requesting client device102-106.
As shown, the exemplary wireframes1300A and1300B shown inFIGS.13A and13B, respectively, highlight two different exemplary types of processing action creation interfaces which the group-based communication server110may be configured to generate based on various user input received from the developer client device107via the interactive developer interface. Indexing Processing Actions Associated with a Plurality of Applications Implemented in a Group-Based Communication System As noted above, systems and methods for indexing processing actions associated with a plurality of applications within a group-based communication system according to various examples are discussed herein. The group-based communication server110may be configured to receive processing action data associated with a plurality of processing actions of a plurality of applications implemented within the group-based communication system118and facilitate the execution of each processing action at the application system112-116respectively associated therewith. As described herein, the group-based communication server110may be configured to index each of the processing actions based on one or more processing action characteristics associated therewith so as to effectively characterize each of the processing actions and optimize action availability and operability throughout a group-based communication interface. As shown at Blocks1401and1402inFIG.14A, a developer client device associated with a developer user associated with an application system112-116may receive application data generated as user input at the client device107via an interactive developer interface, as described herein, and transmit the application data to the group-based communication server. The application data may comprise processing action data corresponding to one or more processing actions executable by the corresponding application112-116of a plurality of applications implemented within the group-based communication system118. In various examples, each processing action is defined by a plurality of processing action characteristics. As shown at Block1403, the group-based communication server110may generate an application identifier associated with the application112-116. In various examples, an application identifier may comprise one or more items of data by which an application may be uniquely identified within the group-based communication system118. Further, the group-based communication server110may generate one or more processing action identifiers, each respectively associated with a processing action of the one or more processing actions and the application identifier associated with the application112-116, as shown at Block1404. In various examples, a processing action identifier may comprise one or more items of data by which a processing action may be uniquely identified within the group-based communication system118(e.g., a processing action name, and/or other identifying string, as described herein). Further, the group-based communication server110may generate one or more processing action characteristic identifiers, each associated with a processing action characteristic and a processing action identifier associated with a corresponding processing action, as shown at Block1405. In various examples, a corresponding processing action comprises a processing action of the one or more processing actions that is defined at least in part by the processing action characteristic.
For example, each processing action may be defined by a plurality of processing action characteristics comprising an application identifier and a processing action identifier. As shown at Blocks1406and1407, the group-based communication server110may associate the application data received from the client device107with the application identifier and, further, associate the application identifier with each of the processing action identifiers generated at Block1404such that the application data comprises each of the processing action identifiers. Further, as shown at Block1408, the group-based communication server110may associate the application identifier with each of the one or more processing action characteristic identifiers generated at Block1405, such that the application data comprises each of the processing action characteristic identifiers. As shown at Blocks1409and1410, the group-based communication server110may transmit the application data associated with the application identifier and comprising each of the one or more processing action identifiers and each of the one or more processing action characteristic identifiers to an application table for a group within the group-based communication repository120for storage. As described herein, an application table may identify each of the applications available for a user within the group-based communication system118. In some examples, the application table may identify one or more applications available for a user within a group-based communication system118. In some examples, the application table may include a processing action table and/or one or more processing action characteristics (e.g., included in a processing action characteristic table). In various examples, the processing action table may identify one or more processing actions available for a user. The identified processing actions within processing action tables may be updated under various circumstances, such as when new processing actions are installed/uninstalled and/or otherwise made available/unavailable to a user. As noted herein, contextual processing action lists of processing actions recommended to a particular user may encompass a subset of all those processing actions listed within a processing action table for a particular user. In some examples, the processing action characteristic table may identify one or more processing action characteristics associated with a processing action that is available to a user. In some examples, each processing action may be defined by a plurality of processing action characteristics. In some examples, the group-based communication system118may be configured to recommend one or more processing action characteristics to a user, such as in a contextual list (e.g., contextual processing action characteristic list). In such examples, the recommended processing action characteristic(s) may include a subset of the processing action characteristics associated with the processing action characteristic table. In some examples, the group-based communication system118may be configured to recommend one or more applications and/or processing actions to a user, such as in a contextual list (e.g., contextual applications list, contextual processing action list). In such examples, the recommended application and/or processing action may include a subset of the applications and/or processing actions available to a user via the group-based communication system118.
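As a hedged illustration of the application table, processing action table, and processing action characteristic table relationships described above, the sketch below registers processing actions under an application and builds a characteristic-to-action index to support searching. The class and method names are assumptions, not part of the disclosed schema.

```python
from collections import defaultdict
from typing import Dict, List

class ApplicationTable:
    """Illustrative per-group table linking applications, processing actions, and characteristics."""
    def __init__(self) -> None:
        self.actions_by_app: Dict[str, List[str]] = defaultdict(list)
        self.characteristics_by_action: Dict[str, List[str]] = defaultdict(list)

    def register(self, application_id: str, processing_action_id: str,
                 characteristic_ids: List[str]) -> None:
        """Store a processing action identifier and its characteristic identifiers under an application."""
        self.actions_by_app[application_id].append(processing_action_id)
        self.characteristics_by_action[processing_action_id].extend(characteristic_ids)

    def index_by_characteristic(self) -> Dict[str, List[str]]:
        """Invert the table so a search on a characteristic returns matching processing actions."""
        index: Dict[str, List[str]] = defaultdict(list)
        for action_id, characteristics in self.characteristics_by_action.items():
            for characteristic_id in characteristics:
                index[characteristic_id].append(action_id)
        return index
```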
In various examples, application data may be stored individually for various groups, and accordingly the application data may be stored in an application table associated with the particular group, such that client devices102-106associated with the particular group have access to the stored application data (and accordingly the application data associated with the one or more processing action identifiers). For example, when providing updates to application functionality provided by the application system112-116, updates are disseminated and stored via each application table such that the updated application data is available to individual groups. In some examples, updates may be disseminated to individual application tables only upon approval from an administrator associated with the particular group and application table. Similarly, when introducing a new processing action associated with the application identifier, the application data associated with the new processing action is disseminated to all application tables (e.g., processing action tables) having application data associated with the particular application system112-116. As shown at Block1411, the group-based communication server110may index at least one of the one or more processing actions within the application table for the group based at least in part on processing action characteristics. In various examples, the at least one of the one or more processing actions may be indexed in order to facilitate various facets of searching (i.e. search queries that return results from the group-based communication repository120(e.g., the application table)). As shown at Block1412, environmental contextual data may be generated and/or collected at least in part by the group-based communication system118(e.g., the group-based communication server110) and/or the client device102-106. As described herein, environmental contextual data is indicative of a user's interaction with a group-based communication system118at and/or before the time the user requests execution of a processing action. In various examples, environmental contextual data is generated and/or collected based at least in part on data utilized to generate a group-based communication interface to be presented to a user via a client device102-106. The environmental contextual data encompasses current environmental contextual data indicative of a current display provided to a user via a client device (e.g., embodied as an active channel identifier stored as at least a portion of the environmental contextual data for a particular user and/or client device), and/or prior environmental contextual data indicative of immediately prior displays provided to the user (e.g., that a user navigated through to reach the current display, which may be embodied as one or more prior channel identifiers (and/or associated time stamps) stored as at least a portion of the environmental contextual data for a particular user and/or client device). Moreover, as noted, the environmental contextual data may be stored in a cache memory storage area (associated with a group-based communication repository120in communication with the group-based communication servers110as reflected at Block1413ofFIG.14A; or embodied as a memory storage area of the client device102-106), such that the environmental contextual data may reflect both current environmental contextual data and prior environmental contextual data. 
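One way to picture the cached environmental contextual data described above (an active channel identifier plus prior channel identifiers with timestamps) is the small record below; the names and the navigation helper are assumptions introduced only to make the cached structure concrete.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class EnvironmentalContext:
    """Illustrative cache entry of environmental contextual data for one user/client device."""
    active_channel_id: Optional[str] = None
    prior_channels: List[Tuple[str, float]] = field(default_factory=list)  # (channel_id, timestamp)

    def navigate(self, channel_id: str) -> None:
        """Record a navigation: the current channel becomes prior context with a timestamp."""
        if self.active_channel_id is not None:
            self.prior_channels.append((self.active_channel_id, time.time()))
        self.active_channel_id = channel_id
```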
As shown at Block1414, the client device102-106may receive a user input comprising a search request. The user input may be provided as a user selecting a particular user interface element utilized to initialize a processing action menu. As noted above, some examples may comprise a plurality of user interface elements within the group-based communication interface, each corresponding to a different type of processing action. Alternatively, in various examples, the user input received at the client device102-106may comprise an at least partial search query entered at a search element rendered within the group-based communication interface. Responsive to receiving the search request, the process proceeds to Block1415, at which time the environmental contextual data collected and stored for a client device (e.g., the user) at the time the search request is received is passed to the group-based communication server110, wherein at least a portion of the environmental contextual data received from the client device102-106may be generated based at least in part on interactions of the client device102-106with the group-based communication system118during a current connection session. For example, environmental contextual data may comprise user input embodied as an at least partial search query entered into a search element rendered within the group-based communication interface. As shown at Block1415, the user input indicative of the search request may be provided to the group-based communication server110, which may retrieve cached environmental data relevant to the search request (e.g., from the group-based communication repository120or from a storage area associated with the client device102-106). As shown at Block1416, the group-based communication server110may generate relevance scores for each of a plurality of applications identified within the application table based at least in part on the environmental contextual data generated for the client device. In various examples, the group-based communication server may utilize one or more models and/or algorithms generated via machine learning and/or artificial intelligence based at least in part on training data to determine (and/or generate) one or more recommended applications (e.g., application identifiers) for presentation to the user. The training data in some examples comprises sets of training data, wherein each set of training data comprises environmental contextual data (e.g., active channel identifiers; prior channel identifiers (and their respective order of presentation to the user); active group identifiers; time stamps; and/or the like) presented when a processing action was requested, and the processing action(s) ultimately selected by the user (and the order of processing actions selected, if multiple processing actions are selected). Training data may be group-specific (or other subset of user-specific) and may be utilized to generate models and/or algorithms specific to a particular group. Moreover, the training data as well as the resulting models and/or algorithms may be stored in a memory storage area accessible to the group-based communication server110, such that the group-based communication server may quickly access and apply the stored models and/or algorithms based on the environmental contextual data retrieved for either the at least partial search query or the particular requested processing action. 
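The relevance-scoring step at Block1416 can be sketched as applying a stored model to context-derived features for each application in the table. How the model is trained is outside this sketch; the function signature, feature dictionary, and model callable are all assumptions for illustration.

```python
from typing import Callable, Dict, List, Tuple

def score_applications(application_ids: List[str],
                       context_features: Dict[str, float],
                       model: Callable[[str, Dict[str, float]], float]) -> List[Tuple[str, float]]:
    """Illustrative scoring pass: the stored model maps (application identifier, environmental
    context features) to a relevance score; results are ordered most relevant first."""
    scored = [(app_id, model(app_id, context_features)) for app_id in application_ids]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```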
In examples in which one or more applications are determined to be recommended, the group-based communication server110may utilize training data to determine one or more user interactions with the group-based communication system118(as well as their respective order of interaction) initiated by a user, as well as the contextual data associated therewith. As indicated at Block1417, once recommended applications are determined, the group-based communication server110generates a contextual application list of one or more of the plurality of applications to be presented to the user. The contextual application list may include at least a subset (or all) of the applications listed within an application table identifying applications that are available to a user under particular circumstances. In some examples, the contextual application list may include a number of recommended applications (e.g., the most-highly recommended applications) and may end with an option for searching for other applications that may not be listed within the contextual application listing. The number of recommended applications may be pre-defined (e.g., 5 processing actions, 5 processing action characteristics, 5 applications) or may be determined based on contextual data associated with the user interaction with the group-based communication system. In some examples, the number of applications may be determined based on one or more display criteria. In some examples, the display criteria may be determined based at least in part on environmental contextual data. In some examples, a contextual list may be generated based at least in part on data stored at an individual client device and/or data stored in a group-based communication repository. Moreover, the applications (and/or processing actions, processing action characteristics associated therewith) presented to a user within a contextual list may be organized in accordance with one or more suggestion algorithms such that a most-suggested/recommended item (e.g., processing action, processing action characteristic, application) is presented at a top of the contextual list, and other alternative items may be presented lower (e.g., reflecting a lower priority) within the contextual list. For example, the items displayed within the contextual list may include items determined to have the highest relevance scores based on a predictive textual analysis executed by a stored model and/or algorithm (e.g., such that the environmental contextual data comprises user input at a search interface of the group-based communication interface). With respect to Blocks1418and1419inFIG.14B, the group-based communication server110may retrieve processing action data associated with each of the processing action identifiers associated with each of the one or more of the plurality of applications. As shown at Block1420, the group-based communication server110may generate one or more contextual processing action lists based at least in part on environmental contextual data, each of the one or more contextual processing action lists corresponding to a respective application of the contextual application list generated at Block1417. As shown at Blocks1421,1422, and1423, the group-based communication server110may transmit the contextual application list of the one or more of the plurality of applications to the client device102-106for presentation via a group-based communication interface, wherein the contextual application list further comprises each of the one or more contextual processing action lists.
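The assembly of the contextual application list described above (truncating to a display count such as five, nesting a contextual processing action list under each recommended application, and ending with a search option) might look like the sketch below. The display count default, dictionary keys, and function name are assumptions for illustration.

```python
from typing import Dict, List, Tuple

def build_contextual_application_list(scored_apps: List[Tuple[str, float]],
                                      actions_by_app: Dict[str, List[str]],
                                      display_count: int = 5) -> List[Dict[str, object]]:
    """Illustrative contextual application list with nested contextual processing action lists."""
    entries: List[Dict[str, object]] = []
    for application_id, score in scored_apps[:display_count]:
        entries.append({
            "application_id": application_id,
            "relevance_score": score,
            "contextual_processing_actions": actions_by_app.get(application_id, []),
        })
    # Trailing option for searching for applications not listed in the contextual listing.
    entries.append({"search_option": True})
    return entries
```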
In various examples, each of the one or more contextual processing action lists comprises one or more processing actions associated with one or more processing action identifiers, each of the one or more processing action identifiers being associated with an application of one or more of the plurality of applications. For example, the one or more contextual processing action lists may comprise a single recommended application (e.g., application identifier) rendered within an interface element, with the corresponding contextual processing action list displayed proximate the application identifier. In such a circumstance, the group-based communication server110may provide a user searching for an application with one or more recommended processing actions associated therewith based at least in part on environmental contextual data. In various examples, the application table may comprise a processing action characteristic table identifying each of the plurality of processing action characteristics defining each of the one or more processing actions. In such a circumstance, similar to the process discussed above at Block1416, the group-based communication server110may generate relevance scores for each of one or more processing action characteristics identified within the processing action characteristic table based at least in part on the environmental contextual data generated for the client device102-106, as shown at Block1424. As indicated at Block1425, once recommended processing action characteristics are determined, the group-based communication server110may generate a contextual processing action characteristic list of one or more of the plurality of processing action characteristics to be presented to the user. The contextual processing action characteristic list may comprise a defined number of recommended processing action characteristics (e.g., the most-highly recommended processing action characteristics) and may end with an option for searching for other processing action characteristics that may not be listed within the contextual processing action characteristic listing. The contextual processing action characteristic list may include a subset (or all) of the processing action characteristics listed within a processing action characteristic table identifying each of the processing action characteristics associated with the processing actions available to a user under particular circumstances. The contextual processing action characteristic list may comprise a listing of processing action characteristics associated with either one or more processing actions performed by a single application system or a plurality of processing actions respectively performed by a plurality of application systems. In some examples, the one or more processing action characteristics listed in a contextual processing action characteristic list may each be associated with a common processing action identifier. In some examples, the contextual processing action characteristic list of processing action characteristics may exclude certain processing action characteristics associated with one or more processing actions from presentation to the user. With respect to Blocks1426and1427, the group-based communication server110may retrieve processing action characteristic data associated with each of the processing action characteristic identifiers associated with each of the one or more processing action characteristics. 
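The parallel flow for processing action characteristics (Blocks1424and1425) can be sketched in the same manner. The field names and the fixed list length are, again, assumptions for illustration.

from typing import Dict, List

def build_characteristic_list(characteristic_table: List[dict],
                              relevance_scores: Dict[str, float],
                              max_items: int = 5) -> List[dict]:
    """Block 1425: most highly recommended processing action characteristics, plus a search option."""
    ranked = sorted(characteristic_table,
                    key=lambda c: relevance_scores.get(c["characteristic_id"], 0.0),
                    reverse=True)
    listing = ranked[:max_items]
    # The listing may end with an option for searching for characteristics not shown above.
    listing.append({"characteristic_id": None, "label": "Search for other processing action characteristics"})
    return listing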
As shown at Block1428, the group-based communication server110may, based at least on the environmental contextual data, generate one or more contextual processing action lists of one or more processing actions associated with a processing action characteristic identifier associated with the one or more processing action characteristics of the contextual processing action characteristic list generated at Block1420. A contextual processing action list may include a listing of processing actions performed by one or more application systems (e.g., a single application system, multiple application systems, etc.). In some examples, the one or more processing actions listed in a contextual processing action list may each be associated with a common processing action characteristic identifier and/or a common application identifier. In some examples, the contextual processing action list of processing actions may exclude certain processing actions from presentation to the user, such that the contextual processing action list of processing actions includes a subset of all of the processing actions available to a user. In various examples, each of the one or more contextual processing action lists may respectively correspond to at least one of the one or more processing action characteristics of the contextual processing action characteristic list. As shown at Blocks1429,1430, and1431inFIG.14C, the group-based communication server110may transmit the contextual processing action characteristic list and the corresponding one or more contextual processing action lists to the client device102-106for presentation via a group-based communication interface. In various examples, each processing action of the one or more contextual processing action lists is associated with an application of the plurality of applications. For example, the contextual processing action characteristic list may comprise a single recommended processing action characteristic (e.g., processing action characteristic identifier) rendered within an interface element, with the contextual processing action list corresponding thereto being displayed proximate the processing action characteristic identifier. In such a circumstance, the group-based communication server110may provide a user searching for a processing action characteristic with one or more recommended processing actions associated therewith based at least in part on environmental contextual data. As shown at Blocks1432and1433, the client device102-106may receive user input at a group-based communication interface embodied as a processing action pin request and transfer the processing action pin request to the group-based communication server110. In various examples, a processing action pin request may comprise a collection of data transmitted by a client device102-106to the group-based communication server110that is representative of a user's request to pin an executable processing action element to a group-based communication channel interface, such that the executable processing action element may be accessible within the group-based communication channel interface to each user with access rights to the group-based communication channel. A processing action pin request may be associated with a client device, a user identifier of a user with access rights to the group-based communication channel, a group-based communication channel identifier, and/or a processing action. 
In various examples, a processing action pin request may be represented via a temporary code that notifies at least one entity of the group-based communication system118that a user has made the request. In various examples, a processing action pin request may be generated in response to a user interaction with a group-based communication interface presented on a display screen of a client device102-106associated with the user identifier. A user causes the client device to generate a processing action pin request by interacting with, for example, a specific pin-processing-action actuator button that forms part of the group-based communication interface. The processing action pin request may be associated with a user identifier associated with the client device102-106, a group-based communication channel identifier, and a processing action associated with an application112-116. The user identifier may be associated with access rights to a group-based communication channel associated with the group-based communication channel identifier. In various embodiment, the group-based communication server110may receive the processing action pin request from client devices associated with a channel admin identifier associated with the group-based communication channel identifier. Further, in various examples, the processing action pin request may be further associated with a processing action identifier and an application identifier associated with the application112-116. As shown at Block1434, the group-based communication server110may associate the group-based communication channel identifier and the processing action identifier associated with the processing action pin request. Responsive to associating the processing action pin request to the group-based communication channel identifier, the group-based communication server110may generate an executable processing action element corresponding to the processing action identifier associated with the processing action pin request and render the executable processing action element for display within the group-based communication channel interface associated with the group-based communication channel identifier, as shown at Blocks1435and1436. In various examples, wherein an executable processing action element is pinned at a group-based communication channel interface, the executable processing action element may be accessible to each of the users associated with access rights to the group-based communication channel. Similarly, where an executable processing action element is pinned at a group-based communication channel interface, a corresponding executable processing action element associated with the same processing action identifier may be pinned to one or more action lists accessible via the group-based communication channel interface (e.g., global action list, channel action list, message action lists). As shown at Blocks1437and1438, the client device102-106associated with the processing action pin request may generate secondary processing action accessibility data associated with the group-based communication channel identifier and the processing action identifier based on user input received via the group-based communication interface and transmit the secondary processing action accessibility data to the group-based communication server110. 
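The pin-request handling of Blocks1432through1436can be illustrated with a brief sketch. The in-memory mapping, the render callback, and the element structure are assumptions for illustration; the description specifies the association and rendering steps without prescribing data structures.

from typing import Dict, Set

# Illustrative in-memory association of a group-based communication channel identifier
# with the processing action identifiers pinned to that channel.
pinned_actions_by_channel: Dict[str, Set[str]] = {}

def handle_processing_action_pin_request(pin_request: dict, render) -> None:
    """Blocks 1434-1436: associate the identifiers, then generate and render the executable element."""
    channel_id = pin_request["group_based_communication_channel_id"]
    action_id = pin_request["processing_action_id"]
    # Block 1434: associate the channel identifier with the processing action identifier.
    pinned_actions_by_channel.setdefault(channel_id, set()).add(action_id)
    # Blocks 1435-1436: generate an executable processing action element and render it within the
    # channel interface, where each user with access rights to the channel can invoke it.
    element = {
        "type": "executable_processing_action",
        "processing_action_id": action_id,
        "application_id": pin_request.get("application_id"),
    }
    render(channel_id, element)

Any restrictions expressed through the secondary processing action accessibility data described next would then be checked before a pinned action is initialized from a given channel.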
The secondary processing action accessibility data may include one or more executable instructions associated with a processing action configured to at least partially restrict access to the processing action within a designated part of the group-based communication system (e.g., a group-based communication channel). For example, secondary processing action accessibility data may be generated based at least in part on user input received from a user who pins a processing application, such that the user input defines a request to selectively limit one of the processing action parameters associated with the processing action to be the group-based communication channel identifier associated with the group-based communication channel. In some examples, for example, secondary processing action accessibility data may define instructions received from an owner of a group-based communication channel which prevent a user from accessing a particular processing action from within the group-based communication channel. As indicated at Block1439, the group-based communication server110may restrict access to a particular processing action at one or more locations within the group-based communication platform119based on secondary processing action accessibility data. Secondary processing action accessibility data may comprise one or more executable instructions configured to at least partially restrict access to a particular processing action when initialized from a designated part of the group-based communication system (e.g., within a group-based communication channel). For example, secondary processing action accessibility data may be generated based at least in part on user input received from a client device associated with a processing action pin request associated with a group-based communication channel. The secondary processing action accessibility data may define a request by the user to selectively limit a processing action parameters associated with the processing action to be the group-based communication channel identifier associated with the group-based communication channel. Further, for example, secondary processing action accessibility data may define instructions received from an owner (e.g., channel admin) of a group-based communication channel that may prevent those users associated with the group-based communication channel from initializing a particular processing action via the group-based communication channel. In various examples, the group-based communication server110may be configured to transmit at least a portion of the secondary action accessibility data associated with an application and/or a processing action to a group-based communication repository120for storage. CONCLUSION Many modifications and other embodiments will come to mind to one skilled in the art to which this disclosure pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the disclosure is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
278,386
11861381
DETAILED DESCRIPTION OF THE EMBODIMENTS The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. Although the drawings show some embodiments of the present disclosure, it should be understood that the present disclosure can be implemented in various forms and is not limited to the embodiments. The embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments in the present disclosure are only illustrative of the disclosure, and are not intended to limit the protection scope of the present disclosure. It should be understood that the steps of the method according to the embodiments of the present disclosure may be performed in different orders, and/or be performed in parallel. In addition, the method embodiments may include additional steps and/or omit to perform the illustrated steps, not limiting the scope of the present disclosure. The term “including” and its variants as used herein are open-ended includes, that is, “including but not limited to”. The term “based on” means “based at least in part on”. The term “one embodiment” means “at least one embodiment”. The term “another embodiment” means “at least one additional embodiment”. The term “some embodiments” means “at least some embodiments”. Definitions of other terms are provided in the following description. It should be noted that concepts such as “first” and “second” mentioned in the present disclosure are only used to distinguish different devices, modules or units, and are not intended to limit the order of functions performed by the devices, modules or units or the interdependence of the devices, modules and units. It should be noted that the modifications of “one” and “a plurality of” mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that unless the context clearly indicates otherwise, “one” and “a plurality of” should be understood as “one or a plurality of”. Names of messages or information interacted between multiple apparatuses in the embodiments of the present disclosure are illustrative rather than limit the scope of the messages or information. Reference is made toFIG.1, which shows a flow chart of an icon updating method according to some embodiments of the present disclosure. As shown inFIG.1, the icon updating method includes the following steps101to104. In step101, it is determined whether to prompt a user to open a preset sub-page. In some embodiments, the execution body (such as, the terminal device501and the terminal device502shown inFIG.5) of the icon updating method may determine whether to prompt a user to open a preset sub-page. A page displayed in an application may contain multiple sub-pages. The preset sub-page may be a sub-page preset from the multiple sub-pages contained in the page. The preset sub-page has an initial page icon. The initial page icon may be a page icon that the preset sub-page has before the page icon is updated. In some scenarios, the execution subject may determine a time length since the user last opened the preset sub-page. In a case that the time length since the user last opened the preset sub-page is greater than or equal to a preset time length, the execution body may determine to prompt the user to open the preset sub-page. 
In a case that the time length since the user last opened the preset sub-page is less than a preset time length, the execution body may determine not to prompt the user to open the preset sub-page. In step102, in response to determining to prompt the user to open the preset sub-page, a reference image for replacing the initial page icon is obtained. In some embodiments, in response to determining to prompt the user to open the preset sub-page, the execution body may obtain a reference image for replacing the initial page icon. The reference image may be used for replacing the initial page icon. In some scenarios, the execution body may obtain the reference image for replacing the initial page icon from a local server or a communicatively connected server (such as, the server504shown inFIG.5). In step103, a first page icon is generated based on the reference image. In some embodiments, after obtaining the reference image, the execution body may generate a first page icon. The first page icon may be used for replacing the initial page icon. In some scenarios, the execution body may compress the reference image to obtain a thumbnail, and then generate the first page icon from the thumbnail. That is, the first page icon may be a thumbnail of the reference image. In step104, the initial page icon is replaced with the first page icon. In some embodiments, after generating the first page icon, the execution body may replace the initial page icon of the preset sub-page with the first page icon. Reference is made toFIG.2, which shows a schematic diagram of an application scenario of an icon updating method according to some embodiments of the present disclosure. As shown inFIG.2, a terminal device201may determine whether to prompt the user to open a preset sub-page202. In response to determining to prompt the user to open the preset sub-page202, the terminal device201may obtain a reference image204for replacing an initial page icon203of the preset sub-page202. The terminal device201may generate a first page icon205based on the reference image204. Finally, the terminal device201may replace the initial page icon203with the first page icon205. According to the conventional technology, in a case that a sub-page contains information to be viewed by the user, a prompt sign is displayed near the page icon of the sub-page to prompt the user to open the sub-page. It should be understood that in a case that the displayed prompt sign is small in size, the user tends to ignore the prompt sign, resulting in a poor effect of prompting the user to open the sub-page. In the embodiments, in response to determining to prompt the user to open the preset sub-page, the reference image for replacing the initial page icon of the preset sub-page is obtained. Based on the obtained reference image, the first page icon is generated. Then, the initial page icon of the preset sub-page is replaced with the generated first page icon. Therefore, the user is prompted to open the preset sub-page by replacing the initial page icon of the preset sub-page with the generated first page icon. It should be understood that replacing the initial page icon of the preset sub-page makes it easy for the user to notice the change of the page icon of the preset sub-page, thereby improving the effect of prompting the user to open the preset sub-page. 
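A minimal sketch of the basic flow of steps 101 to 104 follows. The should_prompt and make_thumbnail helpers, the use of Pillow for the thumbnail, and the preset time length value are assumptions for illustration; the embodiments do not prescribe a particular implementation.

import time
from typing import Optional

PRESET_TIME_LENGTH = 7 * 24 * 3600  # assumed threshold in seconds; the disclosure leaves the value open
ICON_SIZE = (96, 96)                # assumed icon dimensions

def should_prompt(last_opened_at: float, now: Optional[float] = None) -> bool:
    """Step 101: prompt if the preset sub-page has not been opened for at least the preset time length."""
    now = time.time() if now is None else now
    return (now - last_opened_at) >= PRESET_TIME_LENGTH

def make_thumbnail(reference_image, size=ICON_SIZE):
    """Step 103: compress the reference image into a thumbnail used as the first page icon (assumes a Pillow Image)."""
    icon = reference_image.copy()
    icon.thumbnail(size)
    return icon

def update_icon(sub_page, fetch_reference_image, last_opened_at: float) -> None:
    """Steps 101 to 104: decide whether to prompt, obtain the reference image, and apply the first page icon."""
    if not should_prompt(last_opened_at):
        return                                          # step 101: do not prompt
    reference_image = fetch_reference_image()           # step 102: e.g., from a communicatively connected server
    first_page_icon = make_thumbnail(reference_image)   # step 103
    sub_page.set_icon(first_page_icon)                  # step 104: replace the initial page icon

In some optional implementations, the execution body may perform the following steps. 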
Specifically, in response to a time length, in which an operation of opening the preset sub-page is not detected after replacing the initial page icon, not less than a second preset time length, the first page icon is replaced with the initial page icon. In the implementations, the execution body may detect an operation of opening the preset sub-page performed by the user through a preset detection program. In the implementations, in response to a time length, in which an operation of opening the preset sub-page is not detected after replacing the initial page icon, not less than a second preset time length, the execution body may replace the first page icon with the initial page icon. Therefore, after the initial page icon of the preset sub-page is replaced with the first page icon, it is stopped to prompt the user to open the preset sub-page if the user does not open the preset sub-page for a long time period. In some optional implementations, the execution body may generate the first page icon in the following manner. Specifically, a page icon matching a style of the initial page icon is generated as the first page icon. The style of the initial page icon may include, but is not limited to, at least one of: a size of the initial page icon, a shape of the initial page icon, and a pixel value of the initial page icon. The page icon matching the style of the initial page icon indicates that a difference between the page icon and the initial page icon in style is within a preset difference range. In some scenarios, the execution body may adjust the size, the shape, the pixel value and the like of the initial page icon, and generate a page icon within the preset difference range from the initial page icon in size, shape, pixel value and the like, as the first page icon. Reference is made toFIG.3, which shows a flow chart of an icon updating method according to some embodiments of the present disclosure. As shown inFIG.3, the icon updating method includes the following steps301to307. In step S301, it is determined whether to prompt a user to open a preset sub-page. In step302, in response to determining to prompt the user to open the preset sub-page, a reference image for replacing the initial page icon is obtained. In step303, a first page icon is generated based on the reference image. The steps301,302, and303may be respectively performed similar to the steps101,102, and103in the embodiments shown inFIG.1. The descriptions of the steps101,102and103are applicable to the steps301,302and303, which are not repeated herein. In step304, a second page icon is generated based on the reference image. In some embodiments, after obtaining the reference image for replacing the initial page icon, the execution subject (such as, the terminal device501and the terminal device502shown inFIG.5) of the icon updating method may generate a second page icon based on the obtained reference image. The second page icon may be used for replacing the initial page icon. The second page icon is different from the first page icon. In some scenarios, the second page icon is different from the first page icon in size. The execution body may compress the reference image to obtain a thumbnail having a size different from the first page icon, and then to generate the second page icon. In some scenarios, the pixel value of the second page icon is different from the pixel value of the first page icon. 
The execution body may compress the reference image into a thumbnail having the same size as the first page icon, and adjust the pixel value of the obtained thumbnail to a preset pixel value, thereby generating the second page icon. It should be noted that the execution body may perform step303and step304in parallel, or may perform step303and step304sequentially, which is not limited herein. In step305, the initial page icon is replaced with the first page icon. Step305may be performed in a manner similar to step104in the embodiments shown inFIG.1. The descriptions of step104are applicable to step305, and are not repeated herein. In step306, in response to detecting an operation of opening the preset sub-page after replacing the initial page icon, a currently displayed sub-page is switched to the preset sub-page. In some embodiments, the execution body may detect the operation of opening the preset sub-page performed by the user through a preset detection program. In practice, the operation performed by the user for opening the preset sub-page may be any of various operations, for example, an operation of sliding from the currently displayed page to the preset sub-page, or an operation, such as a single click, a double click or a long press, performed by the user on a page icon of the preset sub-page. In some embodiments, in response to detecting an operation of opening the preset sub-page after replacing the initial page icon, the execution body may switch from a currently displayed sub-page to the preset sub-page. In step307, the first page icon is replaced with the second page icon. In some embodiments, the execution body may replace the first page icon with the second page icon. In some scenarios, on detecting the operation of opening the preset sub-page performed by the user, the execution body may replace the first page icon with the second page icon. In other scenarios, when switching from the currently displayed sub-page to the preset sub-page, the execution body may replace the first page icon with the second page icon. In some optional implementations, the execution body may perform the following steps. Specifically, in response to a time length, in which the first page icon is replaced with the second page icon, not less than a first preset time length, the second page icon is replaced with the initial page icon. It should be understood that, in a case that the time length, in which the first page icon is replaced with the second page icon, is not less than the first preset time length, it indicates that a long time period has elapsed since the user opened the preset sub-page. Thus, after the user is successfully prompted to open the preset sub-page, the prompting of the user to open the preset sub-page is stopped. In some embodiments, in response to detecting the operation of opening the preset sub-page, the currently displayed sub-page is switched to the preset sub-page, and the first page icon is replaced with the second page icon. Therefore, by replacing the first page icon of the preset sub-page with the second page icon, the user is prompted that the preset sub-page has been switched to. 
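The interplay between the first page icon, the second page icon, and the two preset time lengths in steps 301 to 307 can be summarized in a short sketch. The IconPrompt class, its method names, and the timer values are illustrative assumptions rather than structure required by the embodiments.

import time

FIRST_PRESET_TIME_LENGTH = 24 * 3600   # assumed: how long the second page icon remains after the sub-page is opened
SECOND_PRESET_TIME_LENGTH = 48 * 3600  # assumed: how long to keep prompting if the sub-page is never opened

class IconPrompt:
    """Illustrative state holder for the icon replacement flow of steps 301 to 307."""

    def __init__(self, sub_page, initial_icon, first_icon, second_icon):
        self.sub_page = sub_page
        self.initial_icon = initial_icon
        self.first_icon = first_icon      # step 303: thumbnail of the reference image
        self.second_icon = second_icon    # step 304: differs from the first page icon in size or pixel value
        self.replaced_at = time.time()    # step 305: the initial page icon is replaced with the first page icon
        self.opened_at = None
        sub_page.set_icon(first_icon)

    def on_open(self, now=None):
        """Steps 306 and 307: switch to the preset sub-page and show the second page icon."""
        now = time.time() if now is None else now
        self.opened_at = now
        self.sub_page.switch_to()
        self.sub_page.set_icon(self.second_icon)

    def check(self, now=None):
        """Restore the initial page icon once the relevant preset time length has elapsed."""
        now = time.time() if now is None else now
        if self.opened_at is None and now - self.replaced_at >= SECOND_PRESET_TIME_LENGTH:
            self.sub_page.set_icon(self.initial_icon)   # the sub-page was never opened: stop prompting
        elif self.opened_at is not None and now - self.opened_at >= FIRST_PRESET_TIME_LENGTH:
            self.sub_page.set_icon(self.initial_icon)   # the prompt succeeded long ago: restore the initial icon

In some optional implementations of an icon updating method according to the embodiments of the present disclosure, the execution body may perform the following operations to determine whether to prompt the user to open the preset sub-page. 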
In a first step, in response to detecting an operation of following a target user performed by the user, a request for following the target user is transmitted to a communicatively connected server. The target user may be a user whom the user requests to follow. The operation of following a target user may be an operation of the user requesting to follow a target user. In some scenarios, the operation of following a target user may be an operation performed by the user on a control prompting following a target user. In a second step, in response to following result information returned by the server indicating that the user has successfully followed the target user, it is determined to prompt the user to open the preset sub-page. The following result information may indicate a result of the user requesting to follow the target user. After the user successfully follows the target user, it is determined to prompt the user to open the preset sub-page. In some optional implementations of an icon updating method according to the embodiments of the present disclosure, the execution subject may obtain an avatar of the target user as the reference image. Therefore, after the user successfully follows the target user, the avatar of the target user is used as the reference image for generating the first page icon. In some optional implementations of an icon updating method according to the embodiments of the present disclosure, information published by the target user followed by the user is displayed in the preset sub-page. Information published by a user may include, but is not limited to, at least one of: a text, an image, a video, and an audio. It should be understood that after the user successfully follows the target user, the user is further prompted to view the information published by the followed target user by prompting the user to open the preset sub-page. Referring toFIG.4, as an implementation of the method shown in the above Figures, an icon updating apparatus is provided according to some embodiments of the present disclosure. The apparatus embodiments correspond to the above method embodiments shown inFIG.1. Specifically, the apparatus may be applied to various electronic devices. As shown inFIG.4, the icon updating apparatus according to some embodiments of the present disclosure includes: a determination unit401, an obtaining unit402, a first generation unit403, and a first updating unit404. The determination unit401is configured to determine whether to prompt a user to open a preset sub-page, where the preset sub-page has an initial page icon. The obtaining unit402is configured to obtain a reference image for replacing the initial page icon in response to determining to prompt the user to open the preset sub-page. The first generation unit403is configured to generate a first page icon based on the reference image. The first updating unit404is configured to replace the initial page icon with the first page icon. In some embodiments, the processing of the determination unit401, the obtaining unit402, the first generation unit403, and the first updating unit404of the icon updating apparatus and the technical effects obtained by performing the processing may refer to the descriptions of the steps101to104in the embodiments corresponding toFIG.1, and are not repeated herein. In some optional implementations, the icon updating apparatus may further include: a second generation unit (not shown in the Figures) and a second updating unit (not shown in the Figures). 
The second generation unit is configured to generate a second page icon based on the reference image. The second updating unit is configured to, in response to detecting an operation of opening the preset sub-page after replacing the initial page icon, switch from a currently displayed sub-page to the preset sub-page; and replace the first page icon with the second page icon. In some optional implementations, the icon updating apparatus may further include: a first restoring unit (not shown in the Figures). The first restoring unit is configured to, in response to a time length, in which the first page icon is replaced with the second page icon, not less than a first preset time length, replace the second page icon with the initial page icon. In some optional implementations, the icon updating apparatus may further include: a second restoring unit (not shown in the Figures). The second restoring unit is configured to, in response to a time length, in which an operation of opening the preset sub-page is not detected after replacing the initial page icon, not less than a second preset time length, replace the first page icon with the initial page icon. In some optional implementations, the first generation unit403is further configured to generate a page icon matching a style of the initial page icon as the first page icon. In some optional implementations, the determination unit401is further configured to: in response to detecting an operation of following a target user performed by the user, transmit a request for following the target user to a communicatively connected server; and in response to following result information returned by the server indicating that the user has successfully followed the target user, determine to prompt the user to open the preset sub-page. In some optional implementations, the obtaining unit402is further configured to: obtain an avatar of the target user as the reference image. In some optional implementations, information published by the target user followed by the user is displayed in the preset sub-page. Reference is made toFIG.5, which shows an exemplary system architecture to which an icon updating method according to some embodiments of the present disclosure may be applied. As shown inFIG.5, the system architecture may include a terminal device501, a terminal device502, a network503, and a server504. The network503is configured to provide a medium of communication links between the terminal device501, the terminal device502and the server504. The network503may include various connections, such as wired connections, wireless communication link connections, fiber optic cable connections, or the like. The terminal devices501and502may interact with the server504through the network503. Various client applications may be installed on the terminal devices501and502. For example, shopping applications, search applications, social networking applications and the like may be installed on the terminal devices501and502. In some scenarios, in response to determining to prompt the user to open the preset sub-page, each of the terminal devices501and502may generate a first page icon. Further, each of the terminal devices501and502may replace the initial page icon of the preset sub-page with the generated first page icon. The terminal devices501and502may be hardware or software. 
In a case that the terminal devices501and502are hardware, the terminal devices501and502may be various electronic devices having a display screen and supporting information interaction, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like. In a case that the terminal devices501and502are software, the terminal devices501and502may be installed in the electronic devices listed above, and may be implemented as multiple software or software modules, and may be implemented as a single software or software module. There is no specific limitation herein. The server504may provide various services. In some scenarios, the server504may provide the terminal devices501and502with the reference image for replacing the initial page icon of the preset sub-page. The server504may be hardware or software. In a case that the server504is hardware, the server504may be implemented as a distributed server cluster including multiple servers, or may be implemented as a single server. In a case that the server504is software, the server504may be implemented as multiple software or software modules (such as, multiple software or software modules for providing distributed services), or may be implemented as a single software or software module. There is no limitation herein. It should be noted that, the icon updating method according to the embodiments of the present disclosure may be performed by each of the terminal devices501and502. Correspondingly, the icon updating apparatus may be arranged in each of the terminal devices501and502. It should be understood that the numbers of the terminal devices, the network and the server inFIG.5are only illustrative. There can be any number of terminal devices, networks and servers according to implementation requirements. Hereinafter, reference is made toFIG.6, which shows a schematic structural diagram of an electronic device (such as a terminal device shown inFIG.5) suitable for implementing the embodiments of the present disclosure. The terminal devices according to the embodiments of the present disclosure may include, but are not limited to, mobile terminals, such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet PCs), PMPs (portable multimedia players) and vehicle-mounted terminals (such as in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown inFIG.6is only exemplary, and should not indicate any limitation to the function and application scope of the embodiments of the present disclosure. As shown inFIG.6, the electronic device may include a processing device601(such as a central processing unit and a graphics processor) which may execute various operations and processing through a program stored in a Read Only Memory (ROM)602or a program loaded from the storage device608into a Random Access Memory (RAM)603. The RAM603is further configured to store various programs and data required by the electronic device. The processing device601, the ROM602and the RAM603are connected to each other through a bus604. An Input/output (I/O) interface605is also connected to the bus604. 
Generally, the I/O interface605may be connected to: an input device606, such as a touch screen, a touch panel, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device607, such as a liquid crystal display (LCD), a speaker, and a vibrator; a storage device608, such as a magnetic tape and a hard disk; and a communication device609. The communication device609enables the electronic device to perform wireless or wired communication with other devices for data exchanging. AlthoughFIG.6shows an electronic device having various components, it should be understood that the illustrated components are not necessarily required to all be implemented or included. Alternatively, more or fewer components may be implemented or included. Each of the blocks shown inFIG.6may represent one device, or may represent multiple devices as required. Particularly, according to some embodiments of the present disclosure, the process described above in conjunction with flow charts may be implemented as a computer program. For example, a computer program product is further provided according to some embodiments of the present disclosure, including a computer program carried on a non-transitory computer readable medium. The computer program includes program codes for performing the method shown in the flow charts. In the embodiments, the computer program may be downloaded and installed from the network via the communication device609, or installed from the storage device608, or installed from the ROM602. When the computer program is executed by the processing device601, the above-mentioned functions defined in the method according to the embodiments of the present disclosure are performed. It should be noted that, the computer readable medium according to some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination thereof. The computer readable storage medium may be, but is not limited to, a system, an apparatus, or a device in an electronic, magnetic, optical, electromagnetic, infrared, or semi-conductive form, or any combination thereof. The computer readable storage medium may be, but is not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), a light storage device, a magnetic storage device or any combination thereof. In some embodiments of the present disclosure, the computer readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, the computer readable signal medium may be a data signal transmitted in a baseband or transmitted as a part of a carrier wave and carrying computer readable program codes. The transmitted data signal may be in various forms, including but not limited to an electromagnetic signal, an optical signal or any proper combination thereof. The computer readable signal medium may be any computer readable medium other than the computer readable storage medium and can send, propagate or transmit programs to be used by or with an instruction execution system, apparatus or device. 
The program codes stored in the computer readable medium may be transmitted via any proper medium including but not limited to: a wire, an optical fiber cable, radio frequency (RF), or any suitable combination of the foregoing. In some embodiments, the client and the server may perform communication using any currently known or future developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (for example, a communication network). The communication network embodiments include local area networks (“LANs”), wide area networks (“WANs”), the Internet, peer-to-peer networks (for example, ad hoc peer-to-peer networks), and any networks currently known or developed in the future. The computer readable medium may be incorporated in the electronic device, or may exist alone without being assembled into the electronic device. The computer readable medium carries one or more programs. The one or more programs, when executed by the electronic device, cause the electronic device to: determine whether to prompt a user to open a preset sub-page, where the preset sub-page has an initial page icon; in response to determining to prompt the user to open the preset sub-page, obtain a reference image for replacing the initial page icon; generate a first page icon based on the reference image; and replace the initial page icon with the first page icon. Computer program code for performing operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include, but are not limited to, object oriented programming languages, such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the “C” language or similar programming languages. The program codes may be executed entirely on a user's computer, or be executed partly on the user's computer, or be executed as a stand-alone software package, or be executed partly on the user's computer and partly on a remote computer, or be executed entirely on the remote computer or server. In a case that the execution of the program code involves a remote computer, the remote computer may be connected to a user's computer via any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet provided by an Internet service provider). The flow charts and block diagrams in the Figures show the architecture, functionality and operation of possible implementations of systems, methods and computer program products provided according to the embodiments of the present disclosure. Each block in the flow charts or block diagrams can represent a module, a program segment, or a part of code, and the module, the program segment, or the part of code includes one or more executable instructions for implementing specified logical functions. It should be noted that in some alternative implementations, the functions noted in the blocks may be implemented in a different order than those illustrated in the Figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in a reverse order, depending upon the functionality involved. 
It also should be noted that each block in the schematic diagrams and/or flow charts, and combinations of blocks in the schematic diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system which is configured to implement specified functions or operations, or can be implemented by using a combination of dedicated hardware and computer instructions. The units mentioned in the description of the embodiments of the present disclosure may be implemented by means of software, or otherwise by means of hardware. The designation of these units does not in any case constitute a qualification of the unit itself. For example, the determination unit may also be described as a unit “for determining whether to prompt a user to open a preset sub-page”. The functions described above in this application may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), Application Specific Standard Product (ASSP), System on Chip (SOC), Complex Programmable Logical device (CPLD) and so on. In the present disclosure, a machine-readable medium may be a tangible medium, which may contain or store a program used by the instruction execution system, apparatus, or device or a program used in combination with the instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any suitable combination of thereof. The machine-readable storage media, for example, includes an electrical connection based on one or more wires, a portable computer disk, a hard drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of thereof. The above description includes merely preferred embodiments of the present disclosure and explanations of technical principles used. Those skilled in the art should understand that the scope of the present disclosure is not limited to technical solutions formed by a specific combination of the above technical features, but covers other technical solutions formed by any combination of the above technical features or equivalent features thereof without departing from the concept of the present disclosure. For example, a technical solution formed by interchanging the above features with technical features having similar functions as disclosed (but not limited thereto) is also covered in the scope of the present disclosure.
33,793
11861382
DESCRIPTION OF EMBODIMENTS In embodiments of this application, “at least one” means one or more, and “a plurality of” means two or more. “And/or” describes an association relationship between associated objects, and represents that three relationships may exist. For example, A and/or B may represent the following cases: Only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. The character “I” generally indicates an “or” relationship between the associated objects. At least one of the following items (pieces) or a similar expression thereof indicates any combination of these items, including a single item (piece) or any combination of a plurality of items (pieces). For example, at least one of a, b, or c may represent: a, b, c, a combination of a and b, a combination of a and c, a combination of b and c, or a combination of a, b and c, where each of a, b, and c may be in a singular form or a plural form. In addition, terms “first” and “second” are merely used for a purpose of description, and shall not be understood as an indication or implication of relative importance. The terms “center”, “vertical”, “horizontal”, “up”, “down”, “left”, “right”, “front”, “back”, and the like indicate an orientation or a position relationship based on an orientation or a position relationship shown in the accompanying drawings, and are merely intended to facilitate description of the embodiments and simplify description of this application, but do not indicate or imply that a specified apparatus or element needs to have an orientation and be constructed and operated in an orientation, which therefore cannot be understood as a limitation on embodiments of this application. Embodiments of this application provide an application starting method and apparatus, and an electronic device. For any application that is installed in an electronic device and that supports multi-window display, based on configurations of an application layer and an application framework layer in a software system of the electronic device, the electronic device that supports a floating window and split-screen display may have a single-application multi-instance feature. In this way, the Dock triggers any application to start the multi-instance feature of the application, and the electronic device may simultaneously display a plurality of windows of the application on a display in a plurality of window combination forms, so that multi-instance coordination processing and operation may be performed on a same application, thereby improving speed and efficiency of running the same application by the electronic device, maximizing continuation of an operating habit of a user on a personal computer (PC), improving office efficiency and working efficiency of the user, and bringing experience closer to that on a desktop-level operating system to the user. The electronic device may be a device such as a tablet computer, a mobile phone (such as a foldable screen mobile phone or a large-screen mobile phone), a notebook computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart television, a smart screen, a high-definition television, a 4K television, a smart speaker, or a smart projector. A type of the electronic device is not limited in an embodiment of the application. 
The following uses an example in which the electronic device is a tablet computer to describe an electronic device according to an embodiment of this application with reference toFIG.1. FIG.1is a schematic diagram of a structure of an electronic device according to an embodiment of this application. As shown inFIG.1, the electronic device100may include a processor110, an external memory interface120, an internal memory121, a universal serial bus (USB) interface130, a charging management module140, a power management module141, a battery142, an antenna1, an antenna2, a mobile communication module150, a wireless communication module160, an audio module170, a speaker170A, a receiver170B, a microphone170C, a headset jack170D, a sensor module180, a key190, a motor191, an indicator192, a camera193, a display194, a subscriber identification module (SIM) card interface195, and the like. The sensor module180may include a pressure sensor180A, a gyro sensor180B, a barometric pressure sensor180C, a magnetic sensor180D, an acceleration sensor180E, a distance sensor180F, an optical proximity sensor180G, a fingerprint sensor180H, a temperature sensor180J, a touch sensor180K, an ambient light sensor180L, a bone conduction sensor180M, and the like. It can be understood that the structure illustrated in this application does not constitute a specific limitation on the electronic device100. In some other embodiments, the electronic device100may include more or fewer components than those shown in the figure, or some components may be combined, or some components may be split, or different component arrangements may be used. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. The processor110may include one or more processing units. For example, the processor110may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components, or may be integrated into one or more processors. The controller may be a nerve center and a command center of the electronic device100. The controller may generate an operation control signal based on an instruction operation code and a time sequence signal, to complete control of instruction reading and instruction execution. A memory may be further disposed in the processor110, and is configured to store instructions and data. In some embodiments, the memory in the processor110is a cache memory. The memory may store an instruction or data that has been used or cyclically used by the processor110. If the processor110needs to use the instructions or the data again, the processor may directly invoke the instructions or the data from the memory. This avoids repeated access, reduces waiting time of the processor110, and improves system efficiency. In some embodiments, the processor110may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or the like. 
The I2C interface is a two-way synchronization serial bus, and includes one serial data line (SDA) and one serial clock line (SCL). In some embodiments, the processor110may include a plurality of groups of I2C buses. The processor110may be separately coupled to the touch sensor180K, a charger, a flash, the camera193, and the like through different I2C bus interfaces. For example, the processor110may be coupled to the touch sensor180K through the I2C interface, so that the processor110communicates with the touch sensor180K through the I2C bus interface, to implement a touch function of the electronic device100. The I2S interface may be configured to perform audio communication. In some embodiments, the processor110may include a plurality of groups of I2S buses. The processor110may be coupled to the audio module170through the I2S bus, to implement communication between the processor110and the audio module170. In some embodiments, the audio module170may transmit an audio signal to the wireless communication module160through the I2S interface, to implement a function of answering a call through a Bluetooth headset. The PCM interface may also be used to perform audio communication, and sample, quantize, and code an analog signal. In some embodiments, the audio module170may be coupled to the wireless communication module160through a PCM bus interface. In some embodiments, the audio module170may also transmit an audio signal to the wireless communication module160through the PCM interface, to implement a function of answering a call through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication. The UART interface is a universal serial data bus, and is configured to perform asynchronous communication. The bus may be a two-way communication bus. The bus converts to-be-transmitted data between serial communication and parallel communication. In some embodiments, the UART interface is usually configured to connect the processor110to the wireless communication module160. For example, the processor110communicates with a Bluetooth module in the wireless communication module160through the UART interface, to implement a Bluetooth function. In some embodiments, the audio module170may transmit an audio signal to the wireless communication module160through the UART interface, to implement a function of playing music through a Bluetooth headset. The MIPI interface may be configured to connect the processor110to a peripheral component such as the display194or the camera193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor110communicates with the camera193by using the CSI interface, to implement a photographing function of the electronic device100. The processor110communicates with the display194by using the DSI interface, to implement a display function of the electronic device100. The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or a data signal. In some embodiments, the GPIO interface may be configured to connect the processor110to the camera193, the display194, the wireless communication module160, the audio module170, the sensor module180, or the like. The GPIO interface may alternatively be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, or the like. 
The USB interface130is an interface that conforms to a USB standard specification, and may be a mini USB interface, a micro USB interface, a USB type-C interface, or the like. The USB interface130may be configured to connect to a charger to charge the electronic device100, or may be configured to transmit data between the electronic device100and a peripheral device, or may be configured to connect to a headset for playing audio through the headset. The interface may be further configured to connect to another electronic device such as an AR device. It can be understood that an interface connection relationship between modules illustrated in this application is merely an example for description, and does not constitute a limitation on the structure of the electronic device100. In some other embodiments, the electronic device100may alternatively use an interface connection manner different from an interface connection manner in the foregoing embodiment, or use a combination of a plurality of interface connection manners. The charging management module140is configured to receive a charging input from the charger. The charger may be a wireless charger or a wired charger. In some embodiments of wired charging, the charging management module140may receive a charging input of a wired charger through the USB interface130. In some embodiments of wireless charging, the charging management module140may receive a wireless charging input through a wireless charging coil of the electronic device100. The charging management module140supplies power to the electronic device through the power management module141while charging the battery142. The power management module141is configured to connect to the battery142, the charging management module140, and the processor110. The power management module141receives input of the battery142and/or the charging management module140, to supply power to the processor110, the internal memory121, an external memory, the display194, the camera193, the wireless communication module160, and the like. The power management module141may further be configured to monitor parameters such as a battery capacity, a battery cycle count, and a battery health state (electric leakage or impedance). In some other embodiments, the power management module141may alternatively be disposed in the processor110. In some other embodiments, the power management module141and the charging management module140may be alternatively disposed in a same component. A wireless communication function of the electronic device100may be implemented by using the antenna1, the antenna2, the mobile communication module150, the wireless communication module160, the modem processor, the baseband processor, and the like. The antenna1and the antenna2are configured to transmit and receive an electromagnetic wave signal. Each antenna in the electronic device100may be configured to cover one or more communication frequency bands. Different antennas may be further multiplexed, to improve antenna utilization. For example, the antenna1may be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, the antenna may be used in combination with a tuning switch. The mobile communication module150may provide a solution applied to the electronic device100for wireless communication such as 2G/3G/4G/5G. The mobile communication module150may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. 
The mobile communication module150may receive an electromagnetic wave through the antenna1, perform processing such as filtering or amplification on the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module150may further amplify a signal modulated by the modem processor, and convert the signal into an electromagnetic wave for radiation through the antenna1. In some embodiments, at least some functional modules in the mobile communication module150may be disposed in the processor110. In some embodiments, at least some functional modules of the mobile communication module150may be disposed in a same device as at least some modules of the processor110. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium-high frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transmits the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor and then transmitted to the application processor. The application processor outputs a sound signal by using an audio device (which is not limited to the speaker170A, the receiver170B, or the like), or displays an image or a video by using the display194. In some embodiments, the modem processor may be an independent component. In some other embodiments, the modem processor may be independent of the processor110, and is disposed in a same device as the mobile communication module150or another functional module. The wireless communication module160may provide a wireless communication solution that includes a wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (bluetooth, BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near field communication (NFC) technology, an infrared (IR) technology, or the like and that is applied to the electronic device100. The wireless communication module160may be one or more components integrating at least one communication processor module. The wireless communication module160receives an electromagnetic wave by the antenna2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor110. The wireless communication module160may further receive a to-be-sent signal from the processor110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna2. In some embodiments, the antenna1and the mobile communication module150in the electronic device100are coupled, and the antenna2and the wireless communication module160in the electronic device100are coupled, so that the electronic device100can communicate with a network and another device by using a wireless communication technology. The wireless communication technology may include a global system for mobile communications GSM), a general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. 
The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS). The electronic device100may implement a display function through the GPU, the display194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display194and the application processor. The GPU is configured to: perform mathematical and geometric computation, and render an image. The processor110may include one or more GPUs, which execute program instructions to generate or change display information. The display194is configured to display an image, a video, and the like. The display194includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light emitting diode (QLED), or the like. In some embodiments, the electronic device100may include one or N displays194, where N is a positive integer greater than 1. The electronic device100may implement a photographing function through the camera193, the ISP, the video codec, the GPU, the display194, the application processor and the like. The ISP is configured to process data fed back by the camera193. For example, during photographing, a shutter is pressed, and light is transmitted to a photosensitive element of the camera through a lens. An optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, to convert the electrical signal into a visible image. The ISP may further perform algorithm optimization on noise, brightness, and complexion of the image. The ISP may further optimize parameters such as exposure and a color temperature of a photographing scenario. In some embodiments, the ISP may be disposed in the camera193. The camera193is configured to capture a static image or a video. An optical image of an object is generated through the lens, and is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light-sensitive element converts an optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert the electrical signal into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device100may include one or N cameras193, where N is a positive integer greater than 1. The digital signal processor is configured to process a digital signal, and may process another digital signal in addition to the digital image signal. For example, when the electronic device100selects a frequency, the digital signal processor is configured to perform Fourier transformation on frequency energy. The video codec is configured to compress or decompress a digital video. The electronic device100may support one or more video codecs. In this way, the electronic device100may play or record videos in a plurality of coding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4. 
The NPU is a neural-network (NN) computing processor. The NPU quickly processes input information based on a structure of a biological neural network, for example, based on a transfer mode between human brain neurons; and may further continuously perform self-learning. Applications such as intelligent cognition of the electronic device100may be implemented through the NPU, for example, image recognition, facial recognition, speech recognition, and text understanding. The external memory interface120may be used to connect to an external storage card, for example, a micro SD card, to extend a storage capability of the electronic device100. The external memory card communicates with the processor110through the external memory interface120, to implement a data storage function. For example, files such as music and videos are stored in the external storage card. The internal memory121may be configured to store computer-executable program code. The executable program code includes instructions. The processor110runs the instructions stored in the internal memory121, to perform various function applications of the electronic device100and data processing. The internal memory121may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (for example, a sound playing function or an image playing function), and the like. The data storage area may store data (for example, audio data or an address book) created during use of the electronic device100, and the like. In addition, the internal memory121may include a high-speed random access memory, or may include a nonvolatile memory such as at least one disk storage device, a flash memory, or a universal flash storage (UFS). The electronic device100may implement an audio function, for example, music playing and recording, through the audio module170, the speaker170A, the receiver170B, the microphone170C, the headset jack170D, the application processor, and the like. The audio module170is configured to convert digital audio information into an analog audio signal for output, and is also configured to convert analog audio input into a digital audio signal. The audio module170may be further configured to encode and decode an audio signal. In some embodiments, the audio module170may be disposed in the processor110, or some functional modules in the audio module170are disposed in the processor110. The speaker170A, also referred to as a “loudspeaker”, is configured to convert an audio electrical signal into a sound signal. The electronic device100may be used to listen to music or answer a call in a hands-free mode over the speaker170A. The receiver170B, also referred to as an “earpiece”, is configured to convert an electrical audio signal into a sound signal. When a call is answered or speech information is received through the electronic device100, the receiver170B may be put close to a human ear to listen to a voice. The microphone170C, also referred to as a “mike” or a “mic”, is configured to convert a sound signal into an electrical signal. When making a call or sending a voice message, a user may make a sound near the microphone170C through the mouth of the user, to input a sound signal to the microphone170C. At least one microphone170C may be disposed in the electronic device100. In some other embodiments, two microphones170C may be disposed in the electronic device100, to collect a sound signal and implement a noise reduction function. 
In some other embodiments, three, four, or more microphones170C may alternatively be disposed in the electronic device100, to collect a sound signal, implement noise reduction, and identify a sound source, so as to implement a directional recording function and the like. The headset jack170D is configured to connect to a wired headset. The headset jack170D may be the USB interface130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface. The pressure sensor180A is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor180A may be disposed on the display194. There are a plurality of types of pressure sensors180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may include at least two parallel plates made of conductive materials. When a force is applied to the pressure sensor180A, capacitance between electrodes changes. The electronic device100determines pressure intensity based on the change in the capacitance. When a touch operation is performed on the display194, the electronic device100detects intensity of the touch operation through the pressure sensor180A. The electronic device100may also calculate a touch position based on a detection signal of the pressure sensor180A. In some embodiments, touch operations that are performed in a same touch position but have different touch operation intensity may be corresponding to different operation instructions. For example, when a touch operation whose touch operation intensity is less than a first pressure threshold is performed on an SMS message application icon, an instruction for viewing an SMS message is performed. When a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold is performed on the SMS message application icon, an instruction for creating a new SMS message is performed. The gyro sensor180B may be configured to determine a moving posture of the electronic device100. In some embodiments, an angular velocity of the electronic device100around three axes (namely, x, y, and z axes) may be determined by using the gyro sensor180B. The gyro sensor180B may be configured to implement image stabilization during photographing. For example, when the shutter is pressed, the gyro sensor180B detects an angle at which the electronic device100jitters, calculates, based on the angle, a distance for which a lens module needs to compensate, and allows the lens to cancel the jitter of the electronic device100through reverse motion, to implement image stabilization. The gyro sensor180B may also be used in a navigation scenario and a somatic game scenario. The barometric pressure sensor180C is configured to measure barometric pressure. In some embodiments, the electronic device100calculates an altitude through the barometric pressure measured by the barometric pressure sensor180C, to assist in positioning and navigation. The magnetic sensor180D includes a Hall sensor. The electronic device100may detect opening and closing of a flip cover by using the magnetic sensor180D. In some embodiments, when the electronic device100is a clamshell phone, the electronic device100may detect opening and closing of a flip cover based on the magnetic sensor180D. 
Further, a feature such as automatic unlocking of the flip cover is set based on a detected opening or closing state of the leather case or a detected opening or closing state of the flip cover. The acceleration sensor180E may detect accelerations in various directions (usually on three axes) of the electronic device100, and When the electronic device100is still, a magnitude and a direction of gravity may be detected. The acceleration sensor180E may be further configured to identify a posture of the electronic device, and is used in an application such as switching between a landscape mode and a portrait mode or a pedometer. The distance sensor180F is configured to measure a distance. The electronic device100may measure the distance in an infrared manner or a laser manner. In some embodiments, in a photographing scenario, the electronic device100may measure a distance through the distance sensor180F to implement quick focusing. The optical proximity sensor180G may include, for example, a light-emitting diode (LED), and an optical detector, for example, a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device100emits infrared light by using the light-emitting diode. The electronic device100detects infrared reflected light from a nearby object through the photodiode. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device100. When insufficient reflected light is detected, the electronic device100may determine that there is no object near the electronic device100. The electronic device100may detect, by using the optical proximity sensor180G, that the user holds the electronic device100close to an ear for a call, to automatically turn off a screen for power saving. The optical proximity sensor180G may also be used in a smart cover mode or a pocket mode to automatically perform screen unlocking or locking. The ambient light sensor180L is configured to sense ambient light brightness. The electronic device100may adaptively adjust brightness of the display194based on the sensed ambient light brightness. The ambient light sensor180L may also be configured to automatically adjust white balance during photographing. The ambient light sensor180L may also cooperate with the optical proximity sensor180G to detect whether the electronic device100is in a pocket, to avoid an accidental touch. The fingerprint sensor180H is configured to collect a fingerprint. The electronic device100may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. The temperature sensor180J is configured to detect a temperature. In some embodiments, the electronic device100executes a temperature processing policy through the temperature detected by the temperature sensor180J. For example, when the temperature reported by the temperature sensor180J exceeds a threshold, the electronic device100lowers performance of a processor nearby the temperature sensor180J, to reduce power consumption for thermal protection. In some other embodiments, when the temperature is less than another threshold, the electronic device100heats the battery142to prevent the electronic device100from being shut down abnormally due to a low temperature. 
In some other embodiments, when the temperature is lower than still another threshold, the electronic device100boosts an output voltage of the battery142to avoid abnormal shutdown caused by a low temperature. The touch sensor180K is also referred to as a touch panel. The touch sensor180K may be disposed on the display194, and the touch sensor180K and the display194form a touchscreen, which is also referred to as a “touchscreen”. The touch sensor180K is configured to detect a touch operation performed on or near the touch sensor. The touch sensor may transfer the detected touch operation to the application processor to determine a type of the touch event. A visual output related to the touch operation may be provided through the display194. In some other embodiments, the touch sensor180K may also be disposed on a surface of the electronic device100in a position different from that of the display194. The bone conduction sensor180M may obtain a vibration signal. In some embodiments, the bone conduction sensor180M may obtain a vibration signal of a vibration bone of a human vocal-cord part. The bone conduction sensor180M may also be in contact with a body pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor180M may also be disposed in the headset, to obtain a bone conduction headset. The audio module170may obtain a speech signal through parsing based on the vibration signal that is of the vibration bone of the vocal-cord part and that is obtained by the bone conduction sensor180M, to implement a speech function. The application processor may parse heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor180M, to implement a heart rate detection function. The key190includes a power key, a volume key, and the like. The key190may be a mechanical key, or may be a touch key. The electronic device100may receive a key input, and generate a key signal input related to a user setting and function control of the electronic device100. The motor191may generate a vibration prompt. The motor191may be configured to provide an incoming call vibration prompt and a touch vibration feedback. For example, touch operations performed on different applications (for example, photographing and audio play) may be corresponding to different vibration feedback effects. The motor191may also be corresponding to different vibration feedback effects for touch operations performed on different areas of the display194. Different application scenarios (for example, a time reminder, information receiving, an alarm clock, and a game) may also be corresponding to different vibration feedback effects. A touch vibration feedback effect may be further customized. The indicator192may be an indicator light, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like. The SIM card interface195is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface195or removed from the SIM card interface195, to implement contact with or separation from the electronic device100. The electronic device100may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface195may support a nano-SIM card, a micro-SIM card, a SIM card, and the like. A plurality of cards may be inserted into a same SIM card interface195at the same time. 
The plurality of cards may be of a same type or different types. The SIM card interface195may be compatible with different types of SIM cards. The SIM card interface195is also compatible with an external storage card. The electronic device100interacts with a network through the SIM card, to implement functions such as conversation and data communication. In some embodiments, the electronic device100uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded into the electronic device100, and cannot be separated from the electronic device100.

A software system of the electronic device100may use a layered architecture, an event-driven architecture, a microkernel architecture, a micro service architecture, or a cloud architecture. A type of an operating system of the electronic device is not limited in an embodiment of the application, and may be, for example, an Android system, a Linux system, a Windows system, an iOS system, or a HarmonyOS (harmony operating system). In an embodiment of the application, a layered architecture of the Android system is used as an example to describe a software architecture of the electronic device100.

The following describes some terms of the Android system in embodiments of this application, to facilitate understanding by one of ordinary skill in the art.

1. An activity (Activity) is a display component of an Android APP. The activity provides a window for interacting with a user and displaying content of the APP. An activity usually corresponds to a separate window. The window may occupy the entire display area of the electronic device, or may be smaller than the display area of the electronic device and float above another window. An APP usually includes a plurality of loosely related activities. Generally, the APP may specify an activity as a main entry activity, which is the activity presented to the user when the user starts the APP for the first time. Correspondingly, a window presented by the main entry activity is a home page of the APP. In the APP, an entry activity other than the main entry activity is referred to as a sub-entry activity. The sub-entry activity and the main entry activity may be in a same stack (which can be understood as a same task (Task)), or may not be in a same stack as the main entry activity. This is not limited in an embodiment of the application, provided that both the sub-entry activity and the main entry activity correspond to a same APP. A child activity inherits a behavior attribute of a parent activity. The parent activity starts the child activity, and the child activity obtains data. When the child activity is closed, the child activity sends data back to the parent activity, to trigger the parent activity to invoke an event processing function to obtain the data from the child activity. In an embodiment of the application, activities having a same name are considered a same activity, and a same activity may correspond to a plurality of instances. In an embodiment of the application, an instance identifier (ID) of an activity is used to distinguish between different instances of the same activity, so as to accurately jump to any instance of the activity and delete any instance of the activity. An ID of an instance of the same activity may include but is not limited to at least one representation form such as a digit, a letter, or a character string.
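The parent/child (main entry / sub-entry) interaction described above can be sketched briefly. The following is a minimal, hypothetical Java example: the class names and the extra key are assumptions introduced only for illustration, while startActivityForResult, onActivityResult, setResult, and finish are standard Android framework calls that realize the described pattern of starting a child activity and obtaining data back when it is closed.

    // Hypothetical illustration of the parent/child (main entry / sub-entry)
    // activity interaction described above. Class names and the extra key are
    // assumptions, not part of this application; the API calls are standard
    // Android framework calls.
    import android.app.Activity;
    import android.content.Intent;

    public class MainEntryActivity extends Activity {
        private static final int REQUEST_EDIT = 1;  // arbitrary request code

        // The parent activity starts the child (sub-entry) activity.
        void openSubEntry() {
            Intent intent = new Intent(this, SubEntryActivity.class);
            startActivityForResult(intent, REQUEST_EDIT);
        }

        // When the child activity is closed, this event processing function is
        // invoked so that the parent activity can obtain the data sent back.
        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == REQUEST_EDIT && resultCode == RESULT_OK && data != null) {
                String result = data.getStringExtra("result");
                // ... use the data returned by the child activity ...
            }
        }
    }

    class SubEntryActivity extends Activity {
        // The child activity sends data back to the parent activity before closing.
        void closeWithResult(String value) {
            Intent data = new Intent();
            data.putExtra("result", value);
            setResult(RESULT_OK, data);
            finish();
        }
    }

In this sketch, closing the sub-entry activity triggers the parent activity's onActivityResult event processing function, which matches the behavior described in the preceding paragraph.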
In the following, a plurality of instances of a same activity of an application are illustrated based on an example in which running information of an activity of any application is queried by using a command.HWTET: / #dumpsys activity activitiesACTIVITY MANAGER ACTIVITIES (dumpsys activity activities)Display #0 (activities from top to bottom):Stack #31: type=standard mode=hwMultiwindow-freeformisSleeping=falsemBounds=Rect(1091, 286-2251, 2465)Task id #22mBounds=Rect(1091, 286-2251, 2465)mMinWidth=−1mMinHeight=−1mLastNonFullscreenBounds=null* TaskRecord{26acf7b #22 A=com.huawei.notepad U=0 StackId=31 sz=2}userId=0 effectiveUid=u0a142 mCallingUid=u0a82mUserSetupComplete=true mCallingPackage=com.huawei.android.launcheraffinity=com.huawei.notepadintent={act=android.intent.action.MAINcat=[android.intent.category.LAUNCHER] flg=0x10200000cmp=com.huawei.notepad/com.example.android.notepad.NotePadActivity}mActivityComponent=com.huawei.notepad/com.example.android.notepad.NotePadActivityautoRemoveRecents=false isPersistable=true numFullscreen=2 activityType=1rootWasReset=true mNeverRelinquishIdentity=true mReuseTask=falsemLockTaskAuth=LOCK_TASK_AUTH_PINNABLEActivities=[ActivityRecord{261b3c9 u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t22},ActivityRecord{277cdc9 u0 com.huawei.notepad/com.example.android.notepad.NoteEditor t22 f}]askedCompatMode=false inRecents=true isAvailable=truemRootProcess=ProcessRecord{26dc5b9 10196: com.huawei.notepad/u0a142}stackId=31hasBeenVisible=truemResizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSIONmSupportsPicturelnPicture=false isResizeable=true lastActiveTime=252482707 (inactive for 2s)* Hist #1: ActivityRecord{277cdc9 u0com.huawei.notepad/com.example.android.notepad.NoteEditor t22 f}packageName=com.huawei.notepad processName=com.huawei.notepadlaunchedFromUid=10142 launchedFromPackage=com.huawei.notepad userId=0app=ProcessRecord{26dc5b9 10196: com.huawei.notepad/u0a142}Intent { act=android.huawei.intent.action.note.editcmp=com.huawei.notepad/com.example.android.notepad.NoteEditor (has extras)}frontOfTask=false task=TaskRecord{26acf7b #22A=com.huawei.notepad U=0 StackId=31 sz=2}taskAffinity=com.huawei.notepadmActivityComponent=com.huawei.notepad/com.example.android.notepad.NoteEditorbaseDir=/hw_product/app/HwNotePad/HwNotePad.apkdataDirqdata/user/0/com.huawei.notepadstateNotNeeded=false componentSpecified=truemActivityType=standardcompat={520 dpi always-compat} labelRes=0x7f110002 icon=0x7f080178 theme=0x2060009mLastReportedConfigurations:mGlobalConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw676dp w676dp h724dp 520 dpi lrg hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(0, 0-2200, 2480) mAppBounds=Rect(0, 0-2200, 2480) mWindowingMode=fullscreen mDisplayWindowingMode=fullscreen mActivityType=undefined mAlwaysOnTop=undefined mRotation=ROTATION_0} suim:1 s.1536}mOverrideConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(1091, 286-2251, 2465) mAppBounds=Rect(1091, 286-2251, 2465) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.175}CurrentConfiguration={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(1091, 286-2251, 2465) mAppBounds=Rect(1091, 286-2251, 2465) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 
s.229}launchFailed=false launchCount=1 lastLaunchTime=−10m24s276 mshaveState=false icicle=nullstate=FINISHING stopped=false delayedResume=false finishing=truekeysPaused=false inHistory=true visible=true sleeping=false idle=truemStartingWindowState=STARTING_WINDOW_SHOWNfullscreen=true noDisplay=false immersive=false launchMode=0frozenBeforeDestroy=false forceNewConfig=falsemActivityType=standardnowVisible=false lastVisibleTime=−1m20s427 msresizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSIONmLastReportedMultiWindowMode=truemLastReportedPictureInPictureMode=false* Hist #0: ActivityRecord{261b3c9 u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t22}packageName=com.huawei.notepad processName=com.huawei.notepadlaunchedFromUid=10082launchedFromPackage=com.huawei.android.launcher userId=0app=ProcessRecord{26dc5b9 10196: com.huawei.notepad/u0a142}Intent { act=android.intent.action.MAINcat=[android.intent.category.LAUNCHER] flg=0x10200000cmp=com.huawei.notepad/com.example.android.notepad.NotePadActivitybnds=[908,167][1306,492]}frontOfTask=true task=TaskRecord{26acf7b #22 A=com.huawei.notepad U=0 StackId=31 sz=2}taskAffinity=com.huawei.notepadmActivityComponent=com.huawei.notepad/com.example.android.notepad.NotePadActivitybaseDir=/hw_product/app/HwNotePad/HwNotePad.apkdataDirqdata/user/0/com.huawei.notepadstateNotNeeded=false componentSpecified=truemActivityType=standardcompat={520 dpi always-compat} labelRes=0x7f110002 icon=0x7f080178 theme=0x2060009mLastReportedConfigurations:mGlobalConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw676dp w676dp h724dp 520 dpi lrg hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(0, 0-2200, 2480) mAppBounds=Rect(0, 0-2200, 2480) mWindowingMode=fullscreen mDisplayWindowingMode=fullscreen mActivityType=undefined mAlwaysOnTop=undefined mRotation=ROTATION_0} suim:1 s.1536}mOverrideConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(1091, 286-2251, 2465) mAppBounds=Rect(1091, 286-2251, 2465) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.143}CurrentConfiguration={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(1091, 286-2251, 2465) mAppBounds=Rect(1091, 286-2251, 2465) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreenmActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.145}launchFailed=false launchCount=0 lastLaunchTime=−21m6s251 mshaveState=false icicle=nullstate=RESUMED stopped=false delayedResume=false finishing=falsekeysPaused=false inHistory=true visible=true sleeping=false idle=falsemStartingWindowState=STARTING_WINDOW_REMOVEDfullscreen=true noDisplay=false immersive=false launchMode=1frozenBeforeDestroy=false forceNewConfig=falsemActivityType=standardnowVisible=true lastVisibleTime=−6s43 msresizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSIONmLastReportedMultiWindowMode=truemLastReportedPictureInPictureMode=falseRunning activities (most recent first):TaskRecord{26acf7b #22 A=com.huawei.notepad U=0 StackId=31 sz=2}Run #1: ActivityRecord{261b3c9 u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t22}Run #0: ActivityRecord{277cdc9 u0com.huawei.notepad/com.example.android.notepad.NoteEditor t22 f}mResumedActivity: ActivityRecord{261b3c9 u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t22}mLastPausedActivity: ActivityRecord{277cdc9 
u0com.huawei.notepad/com.example.android.notepad.NoteEditor t22 f}Stack #23: type=standard mode=hwMultiwindow-freeformisSleeping=falsemBounds=Rect(139, 216-1299, 2395)Task id #23mBounds=Rect(139, 216-1299, 2395)mMinWidth=−1mMinHeight=−1mLastNonFullscreenBounds=Rect(1139, 341-2299, 2520)* TaskRecord{26acec2 #23 A=com.huawei.notepad U=0 StackId=23 sz=2}userId=0 effectiveUid=u0a142 mCallingUid=u0a130mUserSetupComplete=true mCallingPackage=com.huawei.hwdockbaraffinity=com.huawei.notepadintent={act=android.intent.action.MAINcat=[android.intent.category.LAUNCHER] flg=0x18200000 hwFlg=0x20000cmp=com.huawei.notepad/com.example.android.notepad.NotePadActivity}mActivityComponent=com.huawei.notepad/com.example.android.notepad.NotePadActivityautoRemoveRecents=false isPersistable=true numFullscreen=2 activityType=1rootWasReset=true mNeverRelinquishIdentity=true mReuseTask=falsemLockTaskAuth=LOCK_TASK_AUTH_PINNABLEActivities=[ActivityRecord{26d7279 u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t23},ActivityRecord{277cd21 u0 com.huawei.notepad/com.example.android.notepad.NoteEditor t23}]askedCompatMode=false inRecents=true isAvailable=truemRootProcess=ProcessRecord{26dc5b9 10196: com.huawei.notepad/u0a142}stackId=23hasBeenVisible=truemResizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSIONmSupportsPicturelnPicture=false isResizeable=true lastActiveTime=252476699 (inactive for 8s)* Hist #1: ActivityRecord{277cd21 u0com.huawei.notepad/com.example.android.notepad.NoteEditor t23}packageName=com.huawei.notepad processName=com.huawei.notepadlaunchedFromUid=10142 launchedFromPackage=com.huawei.notepad userId=0app=ProcessRecord{26dc5b9 10196: com.huawei.notepad/u0a142}Intent { act=android.huawei.intent.action.note.editcmp=com.huawei.notepad/com.example.android.notepad.NoteEditor (has extras)}frontOfTask=false task=TaskRecord{26acec2 #23A=com.huawei.notepad U=0 StackId=23 sz=2}taskAffinity=com.huawei.notepadmActivityComponent=com.huawei.notepad/com.example.android.notepad.NoteEditorbaseDir=/hw_product/app/HwNotePad/HwNotePad.apkdataDir=/data/user/0/com.huawei.notepadstateNotNeeded=false componentSpecified=truemActivityType=standardcompat={520 dpi always-compat} labelRes=0x7f110002 icon=0x7f080178 theme=0x2060009mLastReportedConfigurations:mGlobalConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw676dp w676dp h724dp 520 dpi lrg hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(0, 0-2200, 2480) mAppBounds=Rect(0, 0-2200, 2480) mWindowingMode=fullscreen mDisplayWindowingMode=fullscreen mActivityType=undefined mAlwaysOnTop=undefined mRotation=ROTATION_0} suim:1 s.1536}mOverrideConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(139, 216-1299, 2395) mAppBounds=Rect(139, 216-1299, 2395) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.683}CurrentConfiguration={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(139, 216-1299, 2395) mAppBounds=Rect(139, 216-1299, 2395) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.751}launchFailed=false launchCount=0 lastLaunchTime=−10m21s667 mshaveState=false icicle=nullstate=RESUMED stopped=false delayedResume=false finishing=falsekeysPaused=false inHistory=true visible=true sleeping=false 
idle=truemStartingWindowState=STARTING_WINDOW_SHOWNfullscreen=true noDisplay=false immersive=false launchMode=0frozenBeforeDestroy=false forceNewConfig=falsemActivityType=standardnowVisible=true lastVisibleTime=−1m20s428 msresizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSIONmLastReportedMultiWindowMode=truemLastReportedPictureInPictureMode=false*Hist #0:ActivityRecord{26d7279u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t23}packageName=com.huawei.notepad processName=com.huawei.notepadlaunchedFromUid=10130launchedFromPackage=com.huawei.hwdockbar userId=0app=ProcessRecord{26dc5b9 10196: com.huawei.notepad/u0a142}Intent { act=android.intent.action.MAINcat=[android.intent.category.LAUNCHER] flg=0x18200000 hwFlg=0x20000cmp=com.huawei.notepad/com.example.android.notepad.NotePadActivity}frontOfTask=true task=TaskRecord{26acec2 #23 A=com.huawei.notepad U=0 StackId=23 sz=2}taskAffinity=com.huawei.notepadmActivityComponent=com.huawei.notepad/com.example.android.notepad.NotePadActivitybaseDir=/hw_product/app/HwNotePad/HwNotePad.apkdataDirqdata/user/0/com.huawei.notepadstateNotNeeded=false componentSpecified=truemActivityType=standardcompat={520 dpi always-compat} labelRes=0x7f110002 icon=0x7f080178 theme=0x2060009mLastReportedConfigurations:mGlobalConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw676dp w676dp h724dp 520 dpi lrg hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(0, 0-2200, 2480) mAppBounds=Rect(0, 0-2200, 2480) mWindowingMode=fullscreen mDisplayWindowingMode=fullscreen mActivityType=undefined mAlwaysOnTop=undefined mRotation=ROTATION_0} suim:1 s.1536}mOverrideConfig={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(13, 263-1173, 2442) mAppBounds=Rect(13, 263-1173, 2442) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.165}CurrentConfiguration={1.0 ?mcc?mnc [zh_CN_#Hans] ldltr sw356dp w356dp h670dp 520 dpi nrml hdr port finger -keyb/v/h -nav/h winConfig={mBounds=Rect(139, 216-1299, 2395) mAppBounds=Rect(139, 216-1299, 2395) mWindowingMode=hwMultiwindow-freeform mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=on mRotation=ROTATION_0} suim:1 s.626}launchFailed=false launchCount=0 lastLaunchTime=−21m1s900 mshaveState=true icicle=Bundle[mParcelledData.dataSize=8352]state=STOPPED stopped=true delayedResume=false finishing=falsekeysPaused=false inHistory=true visible=false sleeping=false idle=truemStartingWindowState=STARTING_WINDOW_REMOVEDfullscreen=true noDisplay=false immersive=false launchMode=1frozenBeforeDestroy=false forceNewConfig=falsemActivityType=standardnowVisible=false lastVisibleTime=−10m22s153 msresizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSIONmLastReportedMultiWindowMode=truemLastReportedPictureInPictureMode=falseRunning activities (most recent first):TaskRecord{26acec2 #23 A=com.huawei.notepad U=0 StackId=23 sz=2}Run #1: ActivityRecord{277cd21 u0com.huawei.notepad/com.example.android.notepad.NoteEditor t23}Run #0: ActivityRecord{26d7279 u0com.huawei.notepad/com.example.android.notepad.NotePadActivity t23}mResumedActivity: ActivityRecord{277cd21 u0com.huawei.notepad/com.example.android.notepad.NoteEditor t23}mLastPausedActivity: ActivityRecord{277cd21 u0com.huawei.notepad/com.example.android.notepad.NoteEditor t23} Based on the foregoing information, in a task stack whose task stack identifier (Task id) is #22, an ID of an instance whose activity name is 
"com.huawei.notepad/com.example.android.notepad.NoteEditor" is 277cdc9. In a task stack whose task stack identifier (Task id) is #23, an ID of an instance whose activity name is "com.huawei.notepad/com.example.android.notepad.NoteEditor" is 277cd21. Therefore, the same activity may correspond to a plurality of instances, and IDs of different instances are different.

2. There are four activity starting modes (LaunchMode):

Standard mode: Default mode, in which an instance is created and placed in the task stack each time an activity is activated.

singleTop mode: If the instance exists at the top of the task stack, the instance is reused. Otherwise, a new instance is created and placed at the top of the task stack. Even if the instance already exists in the task stack, a new instance is created provided that the instance is not at the top of the stack.

singleTask mode: If the instance already exists in the task stack, the instance is reused. When the instance is reused, the instance is returned to the top of the stack, and instances above the instance are removed from the task stack. If the instance does not exist in the task stack, a new instance is created and placed in the task stack.

singleInstance mode: An instance is created in a new task stack, and a plurality of applications share the instance in the task stack. Once an instance in this mode already exists in a task stack, the instance in the task stack is reused when any application reactivates the instance.

3. A foreground task and a background task can be switched between each other. The foreground task can respond to a user input when running. The background task does not need to interact with the user during running, and no substantive user interface (UI) processing is required. Therefore, the background task does not disturb the work of the user, and does not consume resources and power of the electronic device too quickly.

4. The single-instance feature can be understood as follows: If an activity is started a plurality of times, the activity has only one instance. That is, if an instance of the activity exists in the stack, the current activity object is not re-instantiated, but the previous activity object is directly reused, and all other activity objects on top of the current activity object are removed.

5. The multi-instance feature can be understood as follows: Each time an activity is started, the activity has a new instance. That is, even if an instance of the activity exists in the stack, the current activity object is re-instantiated instead of reusing the previous activity object.

6. The single-application multi-instance feature allows an application to open a plurality of windows for operation. Each window runs in a task stack, and different windows correspond to different task stacks. A window corresponds to an instance of an activity, and the plurality of windows correspond to a plurality of instances of the same activity. The IDs of the plurality of instances of the same activity are different. In addition, a display manner of the plurality of windows (that is, multi-window display) in an embodiment of the application may be a split-screen manner, a floating window manner, or a combination of a split-screen manner and a floating window manner. This is not limited in an embodiment of the application. In addition, a single application mentioned in an embodiment of the application can be understood as a same application.
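The single-application multi-instance feature described in items 5 and 6 can be illustrated with a short sketch. The following hypothetical Java example shows one way a Dock-like component could start an additional instance of an application's entry activity in a new task stack by using the standard Android multi-task Intent flags; the helper class and method names are assumptions introduced for illustration only and are not asserted to be the implementation of this application. As described later in this application, such a start only yields a plurality of instances when the LaunchMode of the entry activity permits it (for example, "standard" or "singleTop").

    // Hypothetical sketch of starting a new instance of the same activity in a
    // separate task, so that two windows of one application can exist at the
    // same time (the single-application multi-instance feature described above).
    // Package and class names in the usage example come from the dumpsys listing
    // above; the Intent flags are standard Android framework flags.
    import android.content.Context;
    import android.content.Intent;

    public final class MultiInstanceLauncher {
        private MultiInstanceLauncher() {}

        // Starts one more instance of the given entry activity in a new task stack.
        public static void startNewInstance(Context context, String packageName, String activityName) {
            Intent intent = new Intent(Intent.ACTION_MAIN);
            intent.addCategory(Intent.CATEGORY_LAUNCHER);
            intent.setClassName(packageName, activityName);
            // NEW_TASK places the activity in a task stack of its own;
            // MULTIPLE_TASK allows a new task to be created even though a task
            // for this activity already exists, yielding a second instance with
            // a different instance ID (for example, 277cdc9 and 277cd21 above).
            intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_MULTIPLE_TASK);
            context.startActivity(intent);
        }
    }

    // Example usage (names taken from the dumpsys listing above):
    // MultiInstanceLauncher.startNewInstance(context,
    //         "com.huawei.notepad",
    //         "com.example.android.notepad.NotePadActivity");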
FIG.2Ais a block diagram of a software architecture of an electronic device according to an embodiment of this application. As shown inFIG.2A, the layered architecture divides the software system into several layers, and each layer has a clear role and division of work. The layers communicate with each other by using a software interface. In some embodiments, the Android system is divided into four layers, that is, an application layer, an application framework (APP framework) layer, Android runtime and system libraries, and a kernel layer from top to bottom. The application layer may include a series of application packages. As shown inFIG.2A, the application packages may include applications (APP) such as email, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, game, chat, shopping, travel, instant messaging (such as SMS), smart home, and device control. The smart home applications may be used to control or manage home devices with a networking function. For example, the home device may include an electric light, a television, and an air conditioner. For another example, the home device may further include an anti-theft door lock, a sound box, a floor sweeping robot, a socket, a body fat scale, a desk lamp, an air purifier, a refrigerator, a washing machine, a water heater, a microwave oven, an electric cooker, a curtain, a fan, a television, a set-top box, a door, and a window. In addition, the application package may further include applications such as a Dock (that is, a sidebar application), a home screen (that is, a desktop), a leftmost screen, a control center, and a notification center. The leftmost screen may also be referred to as the “minus 1 screen”, and refers to a split-screen UI that is displayed when a user slides from the home screen of the electronic device to the leftmost. For example, the leftmost screen may be used to place some quick service functions and notification messages, such as global search, a quick entry (a payment code, WeChat, and the like) of a page of an application, instant information, and reminders (express information, expenditure information, commuting road conditions, taxi hailing information, schedule information, and the like), and followed dynamic information (such as a football platform, a basketball platform, and stock information). The control center is a slide-up message notification bar of the electronic device, that is, the user interface displayed by the electronic device when the user starts to perform a slide-up operation at the bottom of the electronic device. The notification center is a drop-down message notification bar of the electronic device, that is, the user interface displayed by the electronic device when the user starts to perform a drop-down operation at the bottom of the electronic device. Generally, applications may be classified into system applications and third-party applications. A system application can be understood as an application that is not independent and depends on a software system, for example, a Dock and a desktop. A third-party application is an application other than a system application, and may include but is not limited to a self-developed application of a device vendor and an application of a non-device vendor, such as an electronic device application and a computer application. For ease of description, an example in which the application layer includes three applications, namely Dock, email, and desktop, is used for illustration. 
The Dock interface (that is, an interface of the Dock) is a function interface, provided in UIs, that is used to start an application or switch to a running application through an icon of the application. An operation such as adding, deleting, or moving an application icon in the Dock may be performed, so that an application that supports a floating window, a split screen, or a full screen is displayed on each UI to the user. There may be a plurality of operations through which the user instructs the electronic device to open the Dock. Generally, in an Android system, the Dock may be invoked by sliding inwards from any side edge of the display of the electronic device and then pausing. In this way, the Dock interface may be displayed in a position at the corresponding side edge. In addition, the user may also click/tap a hardware key or a virtual key used to open the Dock. In this way, the Dock interface may be displayed on the screen of the electronic device.

It should be noted that the full-screen display mentioned in an embodiment of the application can be understood as that the electronic device displays any page in all display areas of the display, or can be understood as that the electronic device displays any page in a display area other than an area a that displays parameters such as a time, a signal, and a battery level on the display. The area a is usually located at the top of the display. A full-screen display area may be divided into an area b1 and an area b2. The area b1 and the area b2 do not overlap, and sizes of the area b1 and the area b2 may be equal or unequal. Split-screen display can be understood as that the electronic device displays one page in the area b1 and displays another page in the area b2.

With reference toFIG.2BandFIG.2C, the following describes an embodiment of a Dock interface.

FIG.2BandFIG.2Care schematic diagrams of an embodiment of a Dock interface.

FIG.2Bshows an example of the Dock interface11. In an embodiment of the application, parameters such as a layout position (which may be configured based on parameters such as a configuration of the software system and an operation of a user), an interface size, an interface color, and an application icon size of the Dock interface11are not limited. The Dock interface11may include but is not limited to icons of a plurality of applications, for example, an icon of an email application, an icon of a Notepad application, an icon of a browser application, an icon of a calculator application, and an adding control. The adding control may receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer displays an interface used to add an application.

After detecting an operation performed by the user on the adding control, the electronic device may display the text "Tap an application to add", the interface12, and the interface13that are shown inFIG.2Cas an example. The text "Tap an application to add" is used to instruct the user to tap an application icon on the interface13to add the corresponding application icon to the interface12. The interface12is a preview interface of the Dock interface11, and is used to store an icon of an added application. In addition, the interface12is further used to delete an icon of an added application. For example, a deletion icon is arranged in an upper right corner of each application icon. When the user taps the deletion icon in the upper right corner of any application icon, the application icon is deleted from the interface12.
The interface13is an interface for adding an application, and is used to provide an application that can be displayed on a split screen in the electronic device. For example, the interface13is divided into two areas. One area is used to display icons of applications that are frequently used by the user. For example, the area is illustrated by using a text “Recommended applications”, such as an icon of a Huawei video application, and an icon of a music application. The other area is used to display icons of all applications that can be displayed on a split screen in the electronic device. For example, the area is illustrated by using a text “Applications supporting smart screen-splitting”, such as an icon of an email application, an icon of a Notepad application, an icon of a browser application, an icon of a calculator application, an icon of a camera application, an icon of a health application, an icon of a Huawei video application, and an icon of a music application. In addition, when an icon of an application on the interface13is already added to the interface12, the icon of the application is dimmed on the interface13, for example, the icon of the email application, the icon of the Notepad application, the icon of the browser application, and the icon of the calculator application. When an icon of an application on the interface13is not added to the interface12, the icon of the application is normally displayed on the interface13, for example, the icon of the Huawei video application, the icon of the music application, the icon of the camera application, and the icon of the health application. It should be noted that parameters such as a layout position (which may be configured based on parameters such as a configuration of the software system and an operation of a user), an interface size, an interface color, and an application icon size of the interface12and the interface13are not limited to the foregoing implementation. In an embodiment of the application, the Dock may use any third-party application in a full-screen display manner. For example, the application1may be displayed on a full screen by touching and holding an icon of the application1on the Dock interface. The Dock can use a plurality of third-party applications in a floating window or split-screen mode. For example, when the electronic device displays the interface of the application1, the application1and the application2may be displayed on split screens by touching and holding the icon of the application2on the Dock interface. The display area of the electronic device includes a first area and a second area, and the first area and the second area may be arranged left and right or may be arranged up and down. In addition, sizes of the first area and the second area may be equal or unequal. If the first area and the second area are arranged left and right, and the first area is located on the left side of the second area, when the user drags the icon of the application2and leaves the display from the first area of the electronic device, the electronic device may display the application1on the left side and display the application2on the right side. The floating window of the application2may be displayed on the interface of the application1by tapping the icon of the application2on the Dock interface. It should be noted that, when the electronic device displays the interface of the application1, the application2may also be displayed on a full screen by touching and holding the icon of the application2on the Dock interface. 
The application1and the application2may be a same application, or may be different applications.

In an embodiment of the application, the Dock may be used as a starting entry for each third-party application to implement a plurality of instances. The Dock can identify, start, and manage the multi-instance feature based on a metadata rule. Each third-party application needs to explicitly state the metadata rule in a configuration file AndroidManifest.xml of the application. For example, the metadata rule may include but is not limited to: whether the application supports the multi-instance feature, and a starting entry activity of the multi-instance feature that is specified by the application, as shown in Table 1 and Table 2.

TABLE 1: Whether the application in the metadata rule supports the multi-instance feature

Explicit statement of the application in the configuration file AndroidManifest.xml (an explicit statement is made in the "application" field in a meta-data manner):

    <application>
        <meta-data
            android:name="com.huawei.android.multiwindow.multiinistance.enable"
            android:value="true">
        </meta-data>

Parameter description:
true: The multi-instance feature is supported.
false: The multi-instance feature is not supported.
By default, no statement is made, indicating that the multi-instance feature is not supported.

TABLE 2: Starting entry activity of the multi-instance feature that is specified by the application in the metadata rule

Explicit statement of the application in the configuration file AndroidManifest.xml (an explicit statement is made in the "application" field in a meta-data manner):

    <application>
        <meta-data
            android:name="com.huawei.android.multiwindow.multiinistance.level"
            android:value="0">
        </meta-data>

Parameter description:
0: The multi-instance feature cannot be started by using the main entry activity.
1: The multi-instance feature can be started by using the main entry activity.
By default, no statement is made, indicating that the multi-instance feature cannot be started by using the main entry activity.

In Table 1, whether the application supports the multi-instance feature may be determined by using a character configured for a configuration item "com.huawei.android.multiwindow.multiinistance.enable" in the configuration file AndroidManifest.xml of the application. If the character configured for the configuration item "com.huawei.android.multiwindow.multiinistance.enable" is "true", the application supports the multi-instance feature. If the character configured for the configuration item "com.huawei.android.multiwindow.multiinistance.enable" is "false" or the configuration file AndroidManifest.xml of the application does not contain the configuration item "com.huawei.android.multiwindow.multiinistance.enable", the application does not support the multi-instance feature.

In Table 2, it can be determined, according to a value configured for a configuration item "com.huawei.android.multiwindow.multiinistance.level" in the configuration file AndroidManifest.xml of the application, whether the application supports starting of the multi-instance feature by using the main entry activity. If the value configured for "com.huawei.android.multiwindow.multiinistance.level" is "0" or the configuration file AndroidManifest.xml of the application does not contain the configuration item "com.huawei.android.multiwindow.multiinistance.level", the application does not support starting of the multi-instance feature by using the main entry activity.
If the value configured for “com.huawei.android.multiwindow.multiinistance.level” is “1”, the application supports starting of the multi-instance feature by using the main entry activity. It should be noted that, in addition to using the foregoing two configuration items to represent the metadata rule, in an embodiment of the application, one configuration item may also be used to indicate whether the application in the metadata rule supports the multi-instance feature and a starting entry activity of the multi-instance feature that is specified by the application. For example, if a character configured for a configuration item is “true”, it may indicate that the application supports the multi-instance feature, and a starting entry activity of the multi-instance feature that is specified by the application supports starting of the multi-instance feature by using the main entry activity. If a character configured for the configuration item is “false”, it may indicate that the application does not support the multi-instance feature, or may indicate that the application supports the multi-instance feature, and a starting entry activity of the multi-instance feature that is specified by the application does not support starting of the multi-instance feature by using the main entry activity. Based on the foregoing description, if the value configured for the configuration item “com.huawei.android.multiwindow.multiinistance.level” is “0” or the configuration file AndroidManifest.xml of the application does not contain the configuration item “com.huawei.android.multiwindow.multiinistance.level”, the Dock cannot start a plurality of instances of the application. If the value configured for com.huawei.android.multiwindow.multiinistance.level is “1”, the Dock can determine a LaunchMode of the main entry activity of the application. In this case, the electronic device further needs to determine, by using the LaunchMode of the main entry activity of the application, a starting activity of the application, that is, a page displayed on the display of the electronic device when the application is started. The LaunchMode of the main entry activity of the application may be set to “standard” or “singleTop”. In this way, the Dock may use a multi-task (MultiTask) capability in the Android system to start a plurality of instances of the application, and a vendor that develops the application has no additional workload. It should be noted that if the main entry activity starts an interface of another activity whose launchMode mode is “singleInstance” or “singleTask+specified attribute taskAffinity”, the electronic device cannot implement a plurality of instances of the application by using the main entry activity. taskAffinity, that is, task dependency, indicates a name of a task stack required by an activity. By default, names of task stacks required by all activities are package names of applications. In addition, the attribute value of taskAffinity specified by each activity can be different from the package name of the application. The LaunchMode of the main entry activity of the application may also be set to “singleTask”. Considering a high workload of modifying the LaunchMode, an embodiment of the application provides the following two manners. 
Manner 1

The application may further specify, by using a configuration item “com.huawei.android.multiwindow.multiinstance.targetactivity” in the configuration file AndroidManifest.xml of the application, that the application supports starting of the multi-instance feature by an entry activity other than the main entry activity, thereby reducing a workload caused by modification of the application. For ease of description, an entry activity other than the main entry activity is referred to as a specified entry activity (referred to as a targetActivity). In some embodiments, the configuration file AndroidManifest.xml of the application may make a statement by using the following code:

<activity
android:name="com.example.demo.MainActivity"
android:resizeableActivity="true">
<meta-data
android:name="com.huawei.android.multiwindow.multiinstance.targetactivity"
android:value="com.example.demo.SubEntryActivity">
</meta-data>
<intent-filter>
<action android:name="android.intent.action.MAIN"/>
<category android:name="android.intent.category.LAUNCHER"/>
</intent-filter>
</activity>
<activity
android:name="com.example.demo.SubEntryActivity"
...
</activity>

Manner 2

The multi-instance feature is implemented by the application without depending on the software system of the electronic device. The application uses the main entry activity as the only external entry of the application. After the main entry activity receives a message (used to indicate a starting intent) sent by the software system of the electronic device, internal management logic of the application triggers starting of the sub-entry activity, so as to start a plurality of instances of the application. The application needs to configure the attribute taskAffinity of the sub-entry activity in the configuration file AndroidManifest.xml of the application, so that the sub-entry activity can be started in different tasks. The sub-entry activity is an entry activity other than the main entry activity. Therefore, the sub-entry activity may also be referred to as a specified entry activity (referred to as a targetActivity). The application usually reminds the user when a current quantity of tasks reaches a maximum value.

Based on the descriptions in Table 1 and Table 2, for any application, after the application has been started, if the application supports a single-application multi-instance feature, an icon of the application on the Dock interface is in a tappable/draggable state, and the application may be restarted to display a page of the application. After the application is started, if the application does not support the single-application multi-instance feature, the icon of the application on the Dock interface cannot be tapped or dragged (for example, the icon of the application is dimmed or does not respond to the user's tap or drag operation). In this case, the application cannot be started.

The application framework layer provides an application programming interface (API) and a programming framework for an application at the application layer. The application framework layer includes some predefined functions. As shown inFIG.2A, the application framework layer may include an activity manager, a package manager, a multi-window task manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. For ease of description, an example in which the application framework layer includes the activity manager, the package manager, and the multi-window task manager is used for illustration. 
The activity manager is configured to manage application task stacks. The activity manager may provide an ActivityManagerService (AMS for short). The AMS is the core service of Android, and is responsible for starting, switching, and scheduling the four components in the software system, and managing and scheduling application processes, and the like. The package manager is responsible for managing application packages, obtaining application information, and parsing metadata rules. The multi-window task manager (HwMultiWindowManger) is configured to manage one or more window programs. The multi-window task manager may obtain a size of a display, determine whether there is a status bar, lock the screen, capture the screen, and the like. The content provider is configured to: store and obtain data, and enable the data to be accessed by an application. The data may include a video, an image, an audio, calls that are made and answered, a browsing history and bookmarks, an address book, and the like. The view system includes visual controls such as a control for displaying a text and a control for displaying an image. The view system may be configured to construct an application. A display interface may include one or more views. For example, a display interface including an SMS message notification icon may include a text display view and an image display view. The phone manager is configured to provide a communication function for the electronic device100, for example, management of a call status (including answering, declining, or the like). The resource manager provides various resources such as a localized character string, an icon, an image, a layout file, and a video file for an application. The notification manager enables an application to display notification information in a status bar, and may be configured to convey a notification message. The notification manager may automatically disappear after a short pause without requiring a user interaction. For example, the notification manager is configured to notify download completion, give a message notification, and the like. The notification manager may alternatively be a notification that appears in a top status bar of the system in a form of a graph or a scroll bar text, for example, a notification of an application that is run on a background, or may be a notification that appears on the screen in a form of a dialog window. For example, text information is displayed on the status bar, an announcement is given, the electronic device vibrates, or the indicator light blinks. The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of an Android system. The core library includes two parts: One part is a function that a java language needs to invoke, and the other part is a core library of the Android system. The application layer and the application framework layer run on the virtual machine. The virtual machine executes java files of the application layer and the application framework layer as binary files. The virtual machine is configured to implement functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection. The system library may include a plurality of functional modules, for example, a surface manager, a media library (Media Libraries), a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL). 
The surface manager is configured to manage a display subsystem and provide fusion of 2D and 3D layers for a plurality of applications. The media library supports playback and recording in a plurality of commonly used audio and video formats, and static image files. The media library may support a plurality of audio and video encoding formats such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing. The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver. An example of an operating process of software and hardware of the electronic device100is described below with reference to a scenario in which sound is played by using a smart speaker. When the touch sensor180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as a touch coordinate and a timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer, and identifies a control corresponding to the input event. Using that the touch operation is a tap operation, and a control corresponding to the tap operation is a control of a smart speaker icon as an example, an interface at the application framework layer is invoked for a smart speaker application, to start the smart speaker application, then the kernel layer is invoked to start the audio driver, and the speaker170A converts an audio electrical signal into a sound signal. It can be understood that the structure illustrated in this application does not constitute a specific limitation on the electronic device100. In some other embodiments, the electronic device100may include more or fewer components than those shown in the figure, or some components may be combined, or some components may be split, or different component arrangements may be used. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. Based on the foregoing description, for any application (for example, the application1) in the third-party applications, the electronic device may trigger/start a plurality of instances of the application1by using the Dock. This can fully consider a problem that an excessive quantity of single instances causes a resource waste and an excessive quantity of times of invoking the plurality of instances causes an invoking error. The application1has explicitly stated, in the configuration file AndroidManifest.xml of the application1, that “the application1supports a multi-instance feature” and “a starting entry activity of the multi-instance feature that is specified by the application1”. That is, in the configuration file AndroidManifest.xml, “com.huawei.android.multiwindow.multiinistance.enable” is set to “true” and “com.huawei.android.multiwindow.multiinistance.level” is set to “1”. 
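For illustration only, the following Java sketch shows one way in which a Dock-like component could read the two declarations from Table 1 and Table 2 through the public PackageManager meta-data interface. The class name, the method names, and the assumption that the declared values are compiled as a boolean and an integer are not taken from this application; a system component may instead rely on internal interfaces.

// Illustrative sketch (not the claimed implementation): read identifier 1 (the "enable" item from
// Table 1) and identifier 2 (the "level" item from Table 2) from an application's manifest meta-data.
import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;
import android.os.Bundle;

public final class MultiInstanceMetadata {
    private static final String KEY_ENABLE =
            "com.huawei.android.multiwindow.multiinistance.enable";
    private static final String KEY_LEVEL =
            "com.huawei.android.multiwindow.multiinistance.level";

    // Identifier 1: true only if the application explicitly declares multi-instance support.
    public static boolean supportsMultiInstance(Context context, String packageName) {
        Bundle metaData = readApplicationMetaData(context, packageName);
        return metaData != null && metaData.getBoolean(KEY_ENABLE, false);
    }

    // Identifier 2: true only if the declared level is 1 (startable from the main entry activity).
    public static boolean startableFromMainEntry(Context context, String packageName) {
        Bundle metaData = readApplicationMetaData(context, packageName);
        return metaData != null && metaData.getInt(KEY_LEVEL, 0) == 1;
    }

    private static Bundle readApplicationMetaData(Context context, String packageName) {
        try {
            ApplicationInfo info = context.getPackageManager()
                    .getApplicationInfo(packageName, PackageManager.GET_META_DATA);
            return info.metaData; // null when the application declares no meta-data at all
        } catch (PackageManager.NameNotFoundException e) {
            return null; // unknown package: treated as "multi-instance not supported"
        }
    }
}

Under this reading, an application that makes no statement falls back to the defaults in Table 1 and Table 2: the multi-instance feature is not supported and cannot be started by using the main entry activity.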
It should be noted that, in addition to using the Dock to trigger/start the function of a single-application multi-instance of the electronic device, the electronic device may also use an application such as a desktop as a starting entry activity of the single-application multi-instance feature. In an embodiment of the application, the application1may perform a coordination operation by using instances in a plurality of window forms, for example, window combination forms such as “split screen+floating window”, “full screen+floating window”, “floating window+floating window”, “left split screen+right split screen”, and “left split screen+right split screen+floating window”. With reference toFIG.3AtoFIG.3S, the following describes a implementation process in which the Dock starts a plurality of instances of the application1in a plurality of window forms. For ease of description, inFIG.3AtoFIG.3S, an example in which the electronic device is a tablet computer and the application1is an email application is used for illustration. Refer toFIG.3AtoFIG.3S.FIG.3AtoFIG.3Sare schematic diagrams of human-computer interaction interfaces according to an embodiment of this application. The tablet computer may display a user interface11that is shown inFIG.3Aas an example. The user interface11may be a home screen of a desktop. The user interface11may include but is not limited to a status bar, a navigation bar, a calendar indicator, a weather indicator, and a plurality of application icons, for example, a calculator application icon, a Huawei video application icon, a browser application icon, a camera application icon, a music application icon, a setting application icon, an email application icon, and a Notepad application icon. After detecting the operation of opening the multi-task interface that is instructed by the user, the tablet computer may display a user interface12shown inFIG.3Bas an example. The user interface12may be a current multi-task interface of the tablet computer, that is, indicates whether a background task corresponding to an application exists in a background task list of the tablet computer. InFIG.3B, if the user interface12displays a text “No applications running recently”, it may indicate that no application is currently running on the tablet computer, that is, it indicates that no background task corresponding to the application exists in the background task list of the tablet computer. There may be a plurality of manners in which the user instructs to open a multi-task interface. For example, the user may perform an operation of clicking/tapping a hardware key or a virtual key that is used to open the multi-task interface. For another example, the user may perform an operation of sliding upward from the bottom of the screen, as shown inFIG.3A. The multi-task interface can be understood as an interface that includes windows corresponding to a plurality of applications that are running on the tablet computer. A plurality of windows on the multi-task interface may be arranged horizontally and in parallel according to a preset sequence policy. In some embodiments, the tablet computer arranges, according to a time sequence for running different applications, windows corresponding to the different applications. In addition, the user may perform, on the multi-task interface, an operation such as deleting, adding, or moving a window corresponding to an application. 
After detecting an operation of opening the user interface11that is instructed by the user, the tablet computer may change from displaying the user interface12shown inFIG.3Bas an example to displaying the user interface11shown inFIG.3Cas an example. The user may open the user interface11in a plurality of manners. For example, the user performs an operation of tapping a blank area on the user interface12. For another example, after detecting no operation of the user within preset duration, the tablet computer may return from the user interface12shown inFIG.3Bas an example to the user interface11shown inFIG.3Cas an example. A value of the preset duration is not limited in an embodiment of the application. In conclusion, no application is currently running on the tablet computer. That is, the tablet computer does not start a foreground task corresponding to the email application, and does not run the background task corresponding to the email application, that is, the tablet computer does not start the email application. It should be noted that an embodiment of the application is not limited to the user interface12shown inFIG.3Bas an example. The following describes another implementation of the user interface12. For example, the user interface12includes a window corresponding to an application other than the email application, that is, it indicates whether a background task corresponding to the application other than the email application exists in the background task list of the tablet computer. For another example, the user interface12includes a window corresponding to the email application, that is, it indicates that a background task corresponding to the email application exists in the background task list of the tablet computer. For another example, the user interface12includes a window corresponding to an application other than the email application and a window corresponding to the email application, that is, it indicates that a background task corresponding to the email application and a background task corresponding to the application other than the email application exist in the background task list of the tablet computer. One or more background tasks corresponding to the email application may exist in the user interface12; that is, one or more instances of the email application run in the background of the electronic device, names of activities corresponding to pages displayed when the one or more instances of the email application are started are the same, and IDs of the instances of the activities are different. Pages currently displayed by the one or more instances of the email application may include the same or different content. In addition, if the electronic device starts another instance of the email application, a name of an activity corresponding to a page displayed when the instance of the email application is started is the same as a name of an activity corresponding to a page displayed when an existing instance of the email application is started, and IDs of the instances of the activities are different. The page displayed when the instance of the email application is started and the page displayed when the existing instance of the email application is started may include the same or different content. After detecting an operation of opening the Dock that is instructed by the user, the tablet computer may display, on the user interface11, the Dock interface201shown inFIG.3Das an example. 
The Dock interface201may include but is not limited to icons of a plurality of applications, for example, an icon201aof an email application, an icon of a Notepad application, an icon of a browser application, an icon201bof a calculator application, and an adding control201c. The icon of the application may be in a tappable/draggable state (for example, normal display), which is used to indicate that the application may support the multi-instance feature, or may be in a non-tappable and non-draggable state (for example, dimmed display or normal display), which is used to indicate that the application does not support the multi-instance feature. For ease of description, inFIG.3AtoFIG.3P, icons of applications on the Dock interface201are in the tappable/draggable state, that is, each application on the Dock interface201supports the multi-instance feature, and a plurality of instances of any application on the Dock interface201may be started. The adding control201cmay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer displays an interface used to add an application. In an embodiment of the application, parameters such as a layout position (which may be configured based on parameters such as a configuration of the software system and an operation of a user), an interface size, an interface color, and an application icon size of the Dock interface201are not limited. In addition, there may be a plurality of operations for instructing the user to open the Dock. For example, the user performs a pause operation after sliding inward from a right side of the screen. Correspondingly, the Dock interface201may be displayed at a corresponding position on a right side edge. For another example, the user may perform a click/tap operation on a hardware key or a virtual key that is used to open the Dock. After the tablet computer detects the operation1that is instructed by the user to open the email application on the Dock interface201, because the tablet computer does not start the email application, the tablet computer may display, based on a type of the operation1and a configuration of the software system, any page of the email application in a window form corresponding to the operation1, to implement an instance (referred to as an instance1) of the email application. Different types of the operation1indicate different window forms for displaying any page of the email application. A type of the operation1is not limited in an embodiment of the application, for example, a tap operation, a drag operation, a touch and hold operation, or a double-tap operation. In some embodiments, if the operation1is corresponding to a window form of the floating window, the tablet computer may display, on the user interface11, a window202shown inFIG.3Eas an example. The window202may be used to implement the instance1of the email application. The window202may display any page (for example, a home page) of the email application in the instance1. The window202may include but is not limited to a maximization control202a, a moving control202b, a closing control202c, a search box, and page content. The maximization control202amay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer may display the window202on a full screen. 
Page content displayed on a full screen of the window202may be the same as or different from page content displayed in the floating window202. The moving control202bmay receive an input operation (for example, a drag operation) of the user; and in response to the detected input operation, the tablet computer may display any page of the moved email application in the instance1(for example, display the window202in another position, or display the window202on a full screen/a split screen). The closing control202cmay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer may close the window202. Parameters such as a window size, page content, and a window position of the window202are not limited in an embodiment of the application. It should be noted that, if the operation1is corresponding to a window form of full-screen display, the tablet computer may display any page in any instance of the email application on a full screen. In this way, the tablet computer may implement an instance of the email application in a window form of a floating window. It should be noted that, if a background task corresponding to the email application exists in the background task list of the tablet computer, the tablet computer may implement, based on the foregoing content, a plurality of instances of the email application, that is, an instance corresponding to the background task corresponding to the email application and an instance1; and a name of an activity corresponding to a page displayed when the instance corresponding to the background task corresponding to the email application is started and a name of an activity corresponding to a page displayed when the instance1is started are the same, and IDs of the instances of the activities are different. The page displayed when the instance corresponding to the background task corresponding to the email application is displayed and the page displayed when the instance1is started may include the same or different content. After detecting an operation of opening the Dock that is instructed by the user, the tablet computer may display, on the user interface11, the Dock interface201shown inFIG.3Fas an example. For the Dock interface201, refer to the description shown inFIG.3Das an example. Details are not described herein again. In addition, a position of the Dock interface201shown inFIG.3Fas an example may be the same as or different from a position of the Dock interface201shown inFIG.3Das an example, which may be configured based on parameters such as a configuration of the software system and an operation of a user. For example, when the user invokes the Dock on a same side edge of the tablet computer, the Dock interface201may be displayed in a position corresponding to the side edge. In addition, the window202shown inFIG.3Eas an example continues to be displayed on the user interface11as shown inFIG.3Fas an example, where the positions of the window202inFIG.3EandFIG.3Fusually remain unchanged. In this way, the tablet computer may further open the Dock when opening the email application. After detecting an operation2that is instructed by the user to open the email application on the Dock interface201, based on a type of the operation2and a configuration of the software system, the tablet computer may display any page of the email application in a window form corresponding to the operation2, to implement another instance (referred to as an instance2) of the email application. 
The instance2is different from the instance1, and names of activities corresponding to pages displayed when the instance2and the instance1are started are the same, and IDs of the instances of the activities are different. Pages displayed when the instance2and the instance1are started may include the same or different content. Different types of the operation2indicate different window forms for displaying any page of the email application. A type of the operation2is not limited in an embodiment of the application, for example, a tap operation, a drag operation, a touch and hold operation, or a double-tap operation. In some embodiments, if the operation2is corresponding to a window form of a floating window, the tablet computer may display, on the user interface11, a window203shown inFIG.3Gas an example. The window203may be used to implement the instance2of the email application. The window203may display any page (for example, a home page) of the email application in the instance2. A name of an activity corresponding to the page is the same as a name of an activity corresponding to a page displayed when the instance1is started, and IDs of the instances of the activities are different. The window203may include but is not limited to a maximization control203a(that is, a first control), a moving control203b, a closing control203c(that is, a second control), a search box, and page content. The maximization control203amay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer displays the window203on a full screen. Page content displayed on a full screen of the window203may be the same as or different from page content displayed in the floating window203. The moving control203bmay receive an input operation (for example, a drag operation) of the user; and in response to the detected input operation, the tablet computer may display any page of the moved email application in the instance2(for example, display the window203in another position, or display the window203on a full screen/a split screen). The closing control203cmay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer may close the window203. Parameters such as a window size, page content, and a window position of the window203are not limited in an embodiment of the application. In addition, parameters such as a window size, page content, and a window position of the window202and the window203may be the same or different. This is not limited in an embodiment of the application. It should be noted that, if the operation2is corresponding to a window form of full-screen display, the tablet computer may display a page of the email application in the instance1in a floating window, and at the same time, display a page of the email application in the instance2on a full screen, so that the tablet computer implements two instances of the email application in a window combination form of “full screen+floating window”. If the operation2is corresponding to a window form of split-screen display, the tablet computer may display a page of the email application in the instance1and a page of the email application in the instance2in split screens, so that the tablet computer implements two instances of the email application in a window combination form of “left split screen+right split screen”. 
In addition, the tablet computer may continue to display, on the user interface11, the window202shown inFIG.3Gas an example. In this way, the tablet computer may display a page of the email application in the instance1in a floating window, and at the same time, display a page of the email application in the instance2in a floating window, so that the tablet computer implements two instances of the email application in a window combination form of “floating window+floating window”. It should be noted that, the electronic device may further implement three or more instances of the email application by referring to the description about implementing two instances (the instance1and the instance2) of an email application by the electronic device. For example, when a background task corresponding to the email application exists in the background task list of the tablet computer, the tablet computer may implement, based on the foregoing content, the instance corresponding to the background task corresponding to the email application, the instance1, and the instance2. In addition, an embodiment of the application includes but is not limited to the foregoing implementation for implementing, by the tablet computer, a plurality of instances of the email application in a window combination form of “floating window+floating window”. After detecting an operation input by the user on the maximization control202aof the window202, the tablet computer may change the window202to display, on the user interface11, the user interface13shown inFIG.3Has an example, so that the user interface13covers the user interface11, and the window202does not continue to be displayed. The user interface13may be used to implement the instance1of the email application. The user interface13may display any page (for example, a home page) of the email application in the instance1. A name of an activity corresponding to the page is the same as a name of an activity corresponding to a page displayed when the instance1is started, and IDs of the instances of the activities are different. An embodiment of the user interface13is not limited in an embodiment of the application. In addition, page content in the window202may overlap or may not overlap with that on the user interface13. In addition, the tablet computer may display, on the user interface13, the window203shown inFIG.3Has an example. In this way, the tablet computer may display a page of the email application in the instance1on a full screen, and at the same time, display a page of the email application in the instance2in a floating window, so that the tablet computer implements two instances of the email application in a window combination form of “full screen+floating window”. It should be noted that an embodiment of the application includes but is not limited to the foregoing implementation for implementing, by the tablet computer, a plurality of instances of the email application in a window combination form of “full screen+floating window”. After detecting an operation input by the user on the moving control203bof the window203, the tablet computer may move the window203, and display a user interface16and a window206that are shown inFIG.3Qas an example. The user interface16is a thumbnail of the user interface13, and is used to display the display content of an email application corresponding to the user interface13. The user interface16is illustrated by using an example in which a thumbnail of the user interface13is displayed in a rectangular ring. 
The window206is a thumbnail of the window203, and is used to display the display content of the email application corresponding to the window203. The window206is illustrated by using an example in which an icon of the email application is displayed in a rounded rectangular box. After detecting that a finger of the user leaves the pressed window206on the right side of the display area of the tablet computer, the tablet computer may change from displaying a user interface17aand a user interface17bshown inFIG.3Ras an example to displaying a user interface14aand a user interface14bshown inFIG.3Ias an example. The user interface17ais displayed on a left split screen and the user interface17bis displayed on a right split screen, to indicate that two instances of the email application are displayed on a left split screen and a right split screen. For ease of description, the icon of the email application is used in both the user interface17aand the user interface17bas an example for illustration. In this way, the user interface14ais displayed on a left split screen, and the user interface14bis displayed on a right split screen. In addition, after it is detected that the finger of the user leaves the pressed window206on the left side of the display area of the tablet computer, the user interface17ais displayed on a right split screen and the user interface17bis displayed on a left split screen, so that the user interface14amay also be displayed on a right split screen and the user interface14bmay be displayed on a left split screen. This is not limited in an embodiment of the application. The user interface14amay be used to implement the instance1of the email application, and the user interface14amay display any page (for example, a home page) of the email application in the instance1. The user interface14bmay be used to implement the instance2of the email application, and the user interface14bmay display any page (for example, a home page) of the email application in the instance2. In addition, a name of an activity corresponding to any page of the email application in the instance1is the same as a name of an activity corresponding to any page of the email application in the instance2, and IDs of the instances of the activities are different. An embodiment of the user interface14aand the user interface14bare not limited in an embodiment of the application. In addition, page content on the user interface14amay overlap or may not overlap with that on the user interface13. The page content on the user interface14bmay overlap or not overlap with that in the window203. In this way, the tablet computer may display a page of the email application in the instance1on a left split screen, and at the same time, display a page of the email application in the instance2on a right split screen, so that the tablet computer implements two instances of the email application in a window combination form of “left split screen+right split screen”. It should be noted that an embodiment of the application includes but is not limited to the foregoing implementation for implementing, by the tablet computer, a plurality of instances of the email application in a window combination form of “left split screen+right split screen”. After detecting an operation of opening the Dock that is instructed by the user, the tablet computer may display, on the user interface14b, the Dock interface201shown inFIG.3Jas an example. For the Dock interface201, refer to the description shown inFIG.3DorFIG.3Fas an example. 
Details are not described herein again. In addition, a position of the Dock interface201shown inFIG.3Jas an example may be the same as or different from a position of the Dock interface201shown inFIG.3DorFIG.3Fas an example, which may be specifically configured based on parameters such as a configuration of the software system and an operation of a user. For example, when the user invokes the Dock on a same side edge of the tablet computer, the Dock interface201may be displayed in a position corresponding to the side edge. After detecting an operation3that is instructed by the user to open the email application on the Dock interface201, based on a type of the operation3and a configuration of the software system, the tablet computer may display any page of the email application in a window form corresponding to the operation3, to implement another instance (referred to as an instance3) of the email application. The instance3is different from both the instance2and the instance1. Names of activities corresponding to pages displayed when the instance3, the instance2, and the instance1are started are the same, and IDs of the instances of the activities are different. Pages displayed when the instance3, the instance2, and the instance1are started may include the same or different content. Different types of the operation3indicate different window forms for displaying any page of the email application. A type of the operation3is not limited in an embodiment of the application, for example, a tap operation, a drag operation, a touch and hold operation, or a double-tap operation. In some embodiments, if the operation3is corresponding to a window form of a floating window, the tablet computer may display, on the user interface14a, a window204shown inFIG.3Kas an example. The window204may be used to implement the instance3of the email application. The window204may display any page (for example, a home page) of the email application in the instance3. A name of an activity corresponding to the page is the same as names of activities corresponding to pages displayed when the instance1and the instance2are started, and IDs of the instances of the activities are different. The window204may include but is not limited to a maximization control204a, a moving control204b, a closing control204c, a search box, and page content. The maximization control204amay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer displays the window204on a full screen. Page content displayed on a full screen of the window204may be the same as or different from page content displayed in the floating window204. The moving control204bmay receive an input operation (for example, a drag operation) of the user; and in response to the detected input operation, the tablet computer may display any page of the moved email application in the instance3(for example, display the window204in another position, or display the window204on a full screen/a split screen). The closing control204cmay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer may close the window204. In an embodiment of the application, parameters such as a window size, page content, and a window position (in addition to the user interface14a, the window204may be on the user interface14aand the user interface14b, or may be on the user interface14b) of the window204are not limited. 
In addition, parameters such as a window size, page content, and a window position of the window202, the window203, and the window204may be the same or different. This is not limited in an embodiment of the application. It should be noted that, if the operation3is corresponding to a window form of split-screen/full-screen display, the tablet computer may display a page of the email application in the instance3on a split screen/a full screen. In this way, the tablet computer may display a page of the email application in the instance1on a left split screen, and at the same time, display a page of the email application in the instance2on a right split screen, and at the same time, display an application of the email application in the instance3in the floating window, so that the tablet computer implements three instances of the email application in a window combination form “floating window+left split screen+right split screen”. It should be noted that an embodiment of the application includes but is not limited to the foregoing implementation for implementing, by the tablet computer, a plurality of instances of the email application in a window combination form of “floating window+left split screen+right split screen”. After detecting an operation input by the user on the closing control204cof the window204, the tablet computer may close the window204, so that the tablet computer displays the user interface14aand the user interface14bshown inFIG.3Las an example; that is, the instance3of the email application is closed, and the instance1and the instance2of the email application are running. For the user interface14aand the user interface14b, refer to the description of the example shown inFIG.3J. Details are not described herein again. After detecting an operation of opening the Dock that is instructed by the user, the tablet computer may display, on the user interface14b, the Dock interface201shown inFIG.3Mas an example. For the Dock interface201, refer to the description shown inFIG.3D,FIG.3F, orFIG.3Jas an example. Details are not described herein again. In addition, a position of the Dock interface201shown inFIG.3Mas an example may be the same as or different from a position of the Dock interface201shown inFIG.3D,FIG.3F, orFIG.3Jas an example, which may be configured based on parameters such as a configuration of the software system and an operation of a user. For example, when the user invokes the Dock on a same side edge of the tablet computer, the Dock interface201may be displayed in a position corresponding to the side edge. After detecting an operation4that is instructed by the user to open the calculator application on the Dock interface201, the tablet computer may display, based on a type of the operation4and a configuration of the software system, any page of the calculator application in a window form corresponding to the operation4, to implement an instance of the calculator application. Different types of the operation4indicate different window forms for displaying any page of the calculator application. A type of the operation4is not limited in an embodiment of the application, for example, a tap operation, a drag operation, a touch and hold operation, or a double-tap operation. In some embodiments, if the operation4is corresponding to a window form of a split screen, the tablet computer may display the user interface14aand the user interface18bthat are shown inFIG.3Sas an example. 
The user interface18bincludes a thumbnail of the user interface14band an icon of the calculator application, and is used to display the display content of an email application corresponding to the user interface14band indicate that the display content of the calculator application is used to replace the display content of the email application corresponding to the user interface14b, that is, an instance (that is, the instance3) of the email application that is run by the tablet computer in the background. Correspondingly, the instance of the email application is referred to as a background task of the email application. After detecting that the finger of the user leaves the right side of the display area of the tablet computer, the tablet computer may display the user interface15aand the user interface15bthat are shown inFIG.3Nas an example, so that the user interface15ais displayed on a left split screen and the user interface15bis displayed on a right split screen. In addition, after the finger of the user leaves the left side of the display area of the tablet computer, the user interface15amay also be displayed on a right split screen and the user interface15bmay be displayed on a left split screen. This is not limited in an embodiment of the application. The user interface15amay be used to implement an instance (for example, the instance1or the instance2) of the email application, and the user interface15amay display any page (for example, a home page) of the email application in the instance. The user interface15bmay be used to implement an instance of the calculator application, and the user interface15bmay display any page (for example, a home page) of the calculator application in the instance. Embodiments of the user interface15aand the user interface15bare not limited in an embodiment of the application. In addition, page content on the user interface15amay overlap or may not overlap with that on the user interface14aor the user interface14b. In this way, the tablet computer may display a page of an email application in an instance on a left split screen, and at the same time, display a page of a calculator application in an instance on a right split screen, and run another instance of the email application in the background, so that the tablet computer may display the email application and the calculator application on a left split screen and a right split screen. After detecting an operation of opening the Dock that is instructed by the user, the tablet computer may display, on the user interface15b, the Dock interface201shown inFIG.3Oas an example. For the Dock interface201, refer to the description shown inFIG.3D,FIG.3F,FIG.3J, orFIG.3Mas an example. Details are not described herein again. In addition, a position of the Dock interface201shown inFIG.3Oas an example may be the same as or different from a position of the Dock interface201shown inFIG.3D,FIG.3F,FIG.3J, orFIG.3Mas an example, which may be configured based on parameters such as a configuration of the software system and an operation of a user. For example, when the user invokes the Dock on a same side edge of the tablet computer, the Dock interface201may be displayed in a position corresponding to the side edge. 
After detecting an operation5that is instructed by the user to open the email application on the Dock interface201, the tablet computer may display, based on a type of the operation5and a configuration of the software system, any page of the email application in a window form corresponding to the operation5, to implement an instance of the email application. Different types of the operation5indicate different window forms for displaying any page of the email application. A type of the operation5is not limited in an embodiment of the application, for example, a tap operation, a drag operation, a touch and hold operation, or a double-tap operation. In some embodiments, if the operation5is corresponding to a window form of a floating window, the tablet computer may display, on the user interface15a, the window205shown inFIG.3Pas an example. The window205may be used to implement the instance4of the email application. Names of activities corresponding to pages displayed when the instance4, the instance3, the instance2, and the instance1are started are the same, and IDs of the instances of the activities are different. Pages displayed when the instance4, the instance3, the instance2, and the instance1are started may include the same or different content. The window205may display any page (for example, a home page) of the email application in the instance4. A name of an activity corresponding to the page is the same as names of activities corresponding to pages displayed when the instance1, the instance2, and the instance3are started, and IDs of the instances of the activities are different. The window205may include but is not limited to a maximization control205a, a moving control205b, a closing control205c, a search box, and page content. The maximization control205amay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer displays the window205on a full screen. Page content displayed on a full screen of the window205may be the same as or different from page content displayed in the floating window205. The moving control205bmay receive an input operation (for example, a drag operation) of the user; and in response to the detected input operation, the tablet computer may display any page of the moved email application in the instance4(for example, display the window205in another position, or display the window205on a full screen/a split screen). The closing control205cmay receive an input operation (for example, a tap operation) of the user; and in response to the detected input operation, the tablet computer may close the window205. In an embodiment of the application, parameters such as a window size, page content, and a window position (in addition to the user interface15a, the window205may be on the user interface15aand the user interface15b, or may be on the user interface15b) of the window205are not limited. In addition, parameters such as a window size, page content, and a window position of the window202, the window203, the window204, and the window205may be the same or different. This is not limited in an embodiment of the application. It should be noted that, if the operation5is corresponding to a window form of split-screen/full-screen display, the tablet computer may display a page of the email application in the instance4on a split screen/a full screen. 
In this way, the tablet computer may display a page of the email application in the instance1or the instance2on a left split screen, and at the same time, display a page of the email application in the instance4in a floating window, so that the tablet computer implements three instances of the email application in a window combination form of “floating window+split screen”. The three instances mentioned herein are respectively the instance1and the instance4of the email application displayed by the tablet computer, and the instance2of the email application running in the background of the tablet computer. Alternatively, the three instances mentioned herein are respectively the instance2and the instance4of the email application displayed by the tablet computer, and the instance1of the email application running in the background of the tablet computer. It should be noted that an embodiment of the application includes but is not limited to the foregoing implementation for implementing, by the tablet computer, a plurality of instances of the email application in a window combination form of “floating window+split screen”. In addition, the implementation sequence of the window combination forms of “split screen+floating window”, “full screen+floating window”, “floating window+floating window”, “left split screen+right split screen”, and “left split screen+right split screen+floating window” is not limited, and there is no necessary association between the window combination forms. In addition, the electronic device is not limited to the foregoing execution sequence for displaying a plurality of instances of any application. Based on the foregoing description, with reference toFIG.4AtoFIG.4C, an operating principle of triggering/starting any instance of the application1by the Dock is described. The application1is any application. FIG.4AtoFIG.4Care a signaling interaction diagram of a method for starting any instance of an application by a Dock according to an embodiment of this application. As shown inFIG.4AtoFIG.4C, a method for starting any instance of the application1by the Dock may include the following operations: S1. The Dock receives an operation performed by a user on an icon of the application1. The foregoing operation may include but is not limited to an operation such as tap, drag, touch and hold, or double-tap. S21. In response to the foregoing operation, the Dock may invoke an activity manager at an application framework layer to obtain a task list. For example, the Dock may send a second message to the activity manager to obtain the task list. In response to the second message, the activity manager may return a fourth message to the Dock, where the fourth message carries the task list. In an embodiment of the application, the task list is used to store tasks corresponding to all third-party applications currently running on the electronic device. Generally, the types of tasks can include background tasks and foreground tasks. Any task in the task list may be represented by using at least one of a letter, a character string, a digit, a character, or the like. This is not limited in an embodiment of the application. In addition, when the task list is empty, it may indicate that the electronic device is not running any third-party application. S22. 
In response to the foregoing operation, the Dock may invoke a package manager at the application framework layer, for example, obtain a configuration file AndroidManifest.xml of the application1by using a PackageManger.getApplicationInfo method, and obtain a configuration parameter of the application1from the configuration file AndroidManifest.xml of the application1. For example, the Dock may send a first message to the package manager, to obtain the configuration file AndroidManifest.xml of the application1. In response to the first message, the package manager may return a third message to the Dock, where the third message carries the configuration file AndroidManifest.xml of the application1, and the configuration file of the application1includes a configuration parameter of the application1. In an embodiment of the application, the Dock may obtain the configuration file AndroidManifest.xml of the application1in a plurality of manners. In some embodiments, the Dock sends a request1to the package manager, where the request1is used to request the configuration file AndroidManifest.xml of the application1. The package manager may send a request2(that is, a twelfth message) to the application1, where the request2is used to request the configuration file AndroidManifest.xml of the application1. The application1sends a response1(that is, a thirteenth message) to the package manager, where the response1carries the configuration file AndroidManifest.xml of the application1. The package manager sends a response2to the Dock, where the response2carries the configuration file AndroidManifest.xml of the application1. An embodiment of the message1, the message2, the response1, and the response2are not limited in an embodiment of the application. In some other embodiments, the Dock sends a request3to the package manager, where the request3is used to request the configuration file AndroidManifest.xml of the application1. Because the package manager pre-stores the configuration file AndroidManifest.xml of the application1, the package manager may send a response3to the Dock, where the response3carries the configuration file AndroidManifest.xml of the application1. An embodiment of the message3and the response3are not limited in an embodiment of the application. In some other embodiments, the Dock sends a request4to the package manager, where the request4is used to request the configuration file AndroidManifest.xml of the application1. Because the package manager learns an application programming interface (API) of the application1in advance, the package manager may obtain the configuration file AndroidManifest.xml of the application1from the API of the application1; and the package manager sends a response4to the Dock, where the response4carries the configuration file AndroidManifest.xml of the application1. An embodiment of the message4and the response4are not limited in an embodiment of the application. It should be noted that an embodiment of the application includes but is not limited to the foregoing three feasible implementations. In an embodiment of the application, the configuration parameter is used to indicate a parameter related to a multi-instance feature of the application1. The configuration parameter may include an identifier1, or may include an identifier1and an identifier2, or may include an identifier1, an identifier2, and a LaunchMode of the main entry activity, or may include an identifier1, an identifier2, a LaunchMode of the main entry activity, and an identifier3. 
A specific implementation of the configuration parameter is not limited in an embodiment of the application. The identifier1is used to indicate whether the application1supports the multi-instance feature, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. For the identifier1mentioned herein, refer to the description of the character configured for “com.huawei.android.multiwindow.multiniistance.enable” shown in Table 1. The identifier2is used to indicate whether the application1supports starting of the multi-instance feature by using the main entry activity, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. For the identifier2mentioned herein, refer to the description of the value configured for com.huawei.android.multiwindow.multiinistance.level shown in Table 2. Existence of the identifier3is used to indicate that the application1supports starting of the multi-instance feature by a specified entry activity (targetActivity), and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. For details about the identifier3mentioned herein, refer to the foregoing description of the configuration of “com.huawei.android.multiwindow.multiinistance.targetactivity”. Absence of the identifier3is used to indicate that the application1does not support starting of the multi-instance feature by using a specified entry activity (targetActivity). It should be noted that there is no time sequence between operation S21and operation S22. Operation S21and operation S22may be performed synchronously or in sequence. This is not limited in an embodiment of the application. S3. The Dock determines whether a task corresponding to the application1exists in the task list. If it is determined that the task corresponding to the application1does not exist in the task list, operation S4is performed. If it is determined that the task corresponding to the application1exists in the task list, it may indicate that the electronic device has run at least one instance of the application1, and operation S5, or operations S61to S64, or operations S71to S74, or operations S81to S85are performed. S4. The Dock starts the application1. In operation S4, in response to the foregoing operation, the Dock may start the application1when determining that the task corresponding to the application1does not exist in the task list. It should be noted that the foregoing operation herein may correspond to the operation1inFIG.3AtoFIG.3P, and details are not described herein again. In addition, after the Dock starts the application1, the electronic device may run an instance of the application1. In this way, the electronic device may display an instance of the application1, for example, any page (for example, a home page) of the application1. For details, refer to the description of displaying, by the tablet computer, the window2shown inFIG.3Eon the user interface11. Details are not described herein again.
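For illustration only, the following is a minimal sketch of the check performed in operation S21and operation S3, assuming the Dock is a system-privileged component with access to a Context. On modern Android, ActivityManager.getRunningTasks returns only the caller's own tasks for ordinary applications, so this illustrates the principle rather than a drop-in implementation of the task list described above.

import android.app.ActivityManager;
import android.content.Context;
import java.util.List;

public final class TaskListChecker {
    /** Returns true when the task list already contains a task of the given package. */
    public static boolean hasRunningTask(Context context, String packageName) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        // Second/fourth message exchange with the activity manager: obtain the task list.
        List<ActivityManager.RunningTaskInfo> tasks = am.getRunningTasks(Integer.MAX_VALUE);
        for (ActivityManager.RunningTaskInfo task : tasks) {
            if (task.baseActivity != null
                    && packageName.equals(task.baseActivity.getPackageName())) {
                return true; // at least one instance of the application is already running
            }
        }
        return false; // operation S4: simply start the application
    }
}

When this check returns false, the plain start of operation S4applies; when it returns true, one of operation S5, operations S61to S64, operations S71to S74, or operations S81to S85applies, as described below.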
In an embodiment of the application, after determining that the task corresponding to the application1exists in the task list, the Dock may determine whether the identifier1indicates that the application1supports the multi-instance feature, and determine whether the identifier2indicates that the application1does not support starting of the multi-instance feature by using the main entry activity. If the identifier1indicates that the application1does not support the multi-instance feature, operation S5is performed. If the identifier1indicates that the application1supports the multi-instance feature, and the identifier2indicates that the application1does not support starting of the multi-instance feature by using the main entry activity, operation S5is performed. If the identifier1indicates that the application1supports the multi-instance feature and the identifier2indicates that the application1supports starting of the multi-instance feature by using the main entry activity, operations S61to S64, operations S71to S74, or operations S81to S85are performed, so that a plurality of instances of the application1can be implemented. S5. The Dock does not start the application1. In an embodiment of the application, when the identifier1indicates that the application1does not support the multi-instance feature, the Dock cannot start the application1. Alternatively, when the identifier1indicates that the application1supports the multi-instance feature and the identifier2indicates that the application1does not support starting of the multi-instance feature by using the main entry activity, the Dock cannot start the application1. In this case, the icon of the application1in the Dock may be in an untappable/undraggable state. There are a plurality of manners for implementing that the icon of the application1in the Dock is in an untappable/undraggable state. For example, the icon of the application1is dimmed. For another example, the icon of the application1is normally displayed, but the Dock does not respond to a tap/drag operation of the user. S61. When determining that the LaunchMode of the main entry activity is set to “standard” or “singleTop”, the Dock sends an identifier of the main entry activity, a starting parameter1, and a window mode1to the activity manager at the application framework layer. The identifier of the main entry activity is used to represent the main entry activity, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. The starting parameter1is used to request/indicate/notify to establish or apply for establishing a new task stack (task stack1for short), and the task stack1is different from a task stack corresponding to the task corresponding to the application1in the task list. The starting parameter1may include but is not limited to intent flags such as FLAG_ACTIVITY_NEW_TASK, FLAG_ACTIVITY_MULTIPLE_TASK, and FLAG_HW_ACTIVITY_MULTIPLE_TASK. The window mode1is related to the type of the operation in operation S1. For example, when the operation in operation S1is a tap operation, the window mode1may be set to a floating window display mode. When the operation in operation S1is a drag operation, the window mode1may be set to a split-screen display mode. It should be noted that the Dock may add the identifier of the main entry activity, the starting parameter1, and the window mode1to a message1, and send the message1(that is, a fifth message) to the activity manager. An embodiment of the message1is not limited in an embodiment of the application. S62.
The activity manager creates a task stack1based on the identifier of the main entry activity, the starting parameter1, and the window mode1. The task stack1is different from all the existing task stacks corresponding to the application1in the task list. S63. The activity manager sends the identifier of the main entry activity, the window mode1, and the identifier of the task stack1to the multi-window task manager. The identifier of the task stack1is used to represent the task stack1, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. It should be noted that the activity manager may add the identifier of the main entry activity, the window mode1, and the identifier of the task stack1to a message2, and send the message2(that is, a sixth message) to the multi-window task manager. An embodiment of the message2is not limited in an embodiment of the application. S64. The multi-window task manager calculates a window coordinate1based on the window mode1, and starts the main entry activity in the task stack1based on the identifier of the main entry activity and the identifier of the task stack1. Based on the descriptions of operations S61to S64, the electronic device may start an instance of the application1in the task stack1, and display the instance of the application1based on a window form corresponding to the window coordinate1. A specific representation manner of the window coordinate1is not limited in an embodiment of the application. In addition, the instance of the application1mentioned herein is an instance of the main entry activity, and has a same name as all the existing instances of the main entry activity corresponding to the application1in the task list, but IDs of the instances of the main entry activity are different. For example, based on the foregoing description, the electronic device may run the instance1and the instance2of the email application mentioned in the embodiments inFIG.3AtoFIG.3S, or the instance1, the instance2, and the instance3of the email application, or the instance1and the instance4of the email application, or the instance2and the instance4of the email application. Activities corresponding to pages displayed when the instance1, the instance2, the instance3, and the instance4are started are all main entry activities; names of the activities corresponding to pages displayed when the instance1, the instance2, the instance3, and the instance4are started are the same; and IDs of the instances of the activities are different, and may be used to distinguish between the instance1, the instance2, the instance3, and the instance4. In addition, pages displayed when the instance1, the instance2, the instance3, and the instance4are started may include the same or different content. In conclusion, because the task corresponding to the application1exists in the task list, it may indicate that the electronic device has run at least one instance of the application1. Therefore, the electronic device may run two or more instances of the application1, so that the electronic device implements a plurality of instances of the application1. 
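For illustration only, the following is a minimal sketch of the effect of operations S61to S64on a standard Android framework: requesting a new task stack for the main entry activity with FLAG_ACTIVITY_NEW_TASK and FLAG_ACTIVITY_MULTIPLE_TASK, and expressing the window mode1as launch bounds. The vendor flag FLAG_HW_ACTIVITY_MULTIPLE_TASK and the exact coordinate calculation of the multi-window task manager are platform-specific and are not reproduced here; the Rect passed in stands for the window coordinate1.

import android.app.ActivityOptions;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.graphics.Rect;

public final class MainEntryLauncher {
    public static void startNewInstance(Context context, ComponentName mainEntryActivity,
                                        Rect windowBounds /* window coordinate1 */) {
        Intent intent = new Intent(Intent.ACTION_MAIN);
        intent.addCategory(Intent.CATEGORY_LAUNCHER);
        intent.setComponent(mainEntryActivity); // identifier of the main entry activity
        // Starting parameter1: ask the activity manager for a new task stack (task stack1)
        // even though the application already has one or more task stacks.
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_MULTIPLE_TASK);

        // Window mode1: on freeform-capable builds the mode can be approximated by launch bounds.
        ActivityOptions options = ActivityOptions.makeBasic();
        options.setLaunchBounds(windowBounds);

        context.startActivity(intent, options.toBundle());
    }
}

On devices without freeform window support the launch bounds may be ignored; the intent flags alone are what request the separate task stack, which is the essence of the task stack1described above.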
When the identifier1indicates that the application1supports the multi-instance feature, the identifier2indicates that the application1supports starting of the multi-instance feature by using the main entry activity, and it is determined that LaunchMode of the main entry activity is set to “singleTask”, the Dock may determine whether the identifier3exists. If it is determined that the identifier3exists, operations S71to S74are performed. If it is determined that the identifier3does not exist, operations S81to S85are performed. In this way, a plurality of instances of the application1are implemented. S71. When determining that the LaunchMode of the main entry activity is set to “singleTask” and the identifier3exists, the Dock sends, to the activity manager at the application framework layer, an identifier of a specified entry1activity (targetActivity1) corresponding to the identifier3, a starting parameter2, and a window mode2. The specified entry1activity (targetActivity1) is any entry activity other than the main entry activity. An identifier of the specified entry1activity (targetActivity1) is used to represent the specified entry1activity (targetActivity1), and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. The starting parameter2is used to instruct/notify to establish or apply for establishing a new task stack (task stack2for short), and the task stack2is different from a task stack corresponding to the task corresponding to the application1in the task list. The starting parameter2may include but is not limited to intent attributes such as FLAG_ACTIVITY_NEW_TASK, FLAG_ACTIVITY_MULTIPLE_TASK, and FLAG_HW_ACTIVITY_MULTIPLE_TASK. The window mode2is related to the type of the operation in operation S1. For example, when the operation in operation S1is a tap operation, the window mode2may be set to a floating window display mode. When the operation in operation S1is a drag operation, the window mode2may be set to a split-screen display mode. It should be noted that the Dock may add, to a message3(that is, a seventh message), the identifier of the specified entry1activity (targetActivity1) corresponding to the identifier3, the starting parameter2, and the window mode2, and send the message3to the activity manager. An embodiment of the message3is not limited in an embodiment of the application. S72. The activity manager creates a task stack2based on the identifier of the specified entry1activity (targetActivity1), the starting parameter2, and the window mode2. The task stack2is different from all the existing task stacks corresponding to the application1in the task list. S73. The activity manager sends the identifier of the specified entry1activity (targetActivity1), the window mode2, and an identifier (that is, an eighth message) of the task stack2to the multi-window task manager. The identifier of the task stack2is used to represent the task stack2, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. It should be noted that the activity manager may add the identifier of the specified entry1activity (targetActivity1), the window mode2, and the identifier of the task stack2to a message4, and send the message4to the multi-window task manager. An embodiment of the message4is not limited in an embodiment of the application. S74. 
The multi-window task manager calculates a window coordinate2based on the window mode2, and starts the specified entry1activity (targetActivity1) in the task stack2based on the identifier of the specified entry1activity (targetActivity1) and the identifier of the task stack2. Based on the descriptions of operations S71to S74, the electronic device may start an instance of the application1in the task stack2, and display the instance of the application1based on a window form corresponding to the window coordinate2. A specific representation manner of the window coordinate2is not limited in an embodiment of the application. In addition, the window coordinate1and the window coordinate2may be the same or different. This is not limited in an embodiment of the application. In addition, the instance of the application1mentioned herein is an instance of the specified entry1activity, and has a same name as all the existing instances of the specified entry1activity corresponding to the application1in the task list, but IDs of the instances of the specified entry1activity are different. For example, based on the foregoing description, the electronic device may run the instance1and the instance2of the email application mentioned in the embodiments inFIG.3AtoFIG.3S, or the instance1, the instance2, and the instance3of the email application, or the instance1and the instance4of the email application, or the instance2and the instance4of the email application. Activities corresponding to the pages displayed when the instance1, the instance2, the instance3, and the instance4are started are all specified entry1activities; names of the activities corresponding to the pages displayed when the instance1, the instance2, the instance3, and the instance4are started are the same; and IDs of the instances of the activities are different, which may be used to distinguish between the instance1, the instance2, the instance3, and the instance4. In addition, pages displayed when the instance1, the instance2, the instance3, and the instance4are started may include the same or different content. In conclusion, because the task corresponding to the application1exists in the task list, it may indicate that the electronic device has run at least one instance of the application1. Therefore, the electronic device may run two or more instances of the application1, so that the electronic device implements a plurality of instances of the application1. S81. When determining that the LaunchMode of the main entry activity is set to “singleTask” and the identifier3does not exist, the Dock sends an identifier4and a window mode3to the application1. The identifier4is used to instruct the application1to start a new instance of the application1, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. The window mode3is related to the type of the operation in operation S1. For example, when the operation in operation S1is a tap operation, the window mode3may be set to a floating window display mode. When the operation in operation S1is a drag operation, the window mode3may be set to a split-screen display mode. It should be noted that the Dock may add the identifier4and the window mode3to a message5, and send the message5(that is, a ninth message) to the application1. An embodiment of the message5is not limited in an embodiment of the application. S82.
When determining, based on the identifier4and the window mode3, that the application1meets a preset condition, the application1sends an identifier of a specified entry2activity (targetActivity2), the window mode3, and an identifier5(a tenth message) to the activity manager at the application framework layer. In an embodiment of the application, when receiving the identifier4, the application1may determine whether the application1meets the preset condition. Specific content of the preset condition is not limited in an embodiment of the application. For example, the preset condition may be that the application1supports the multi-instance feature. For another example, the preset condition may be that the application1supports the multi-instance feature, and a sum of a quantity of current task stack records corresponding to the application1plus 1 is less than or equal to a preset maximum value. In some embodiments, the application1may record a corresponding quantity of times that the application1applies to the electronic device for establishing a new task stack each time, and determine the quantity of times as a quantity of current task stack records corresponding to the application1. In some other embodiments, the application1may query the electronic device for the quantity of current task stacks corresponding to the application1. The preset maximum value may be set based on a configuration of the application1. The preset maximum value is not limited in an embodiment of the application. In addition, when the sum of the quantity of current task stacks corresponding to the application1plus 1 is greater than the preset maximum value, the application1may close an instance of the application1that is first started, or close an instance of the application1that is not started within preset duration, so that the sum of the quantity of current task stacks corresponding to the application1plus 1 is less than or equal to the preset maximum value. In this way, when determining that the application1meets the preset condition, the application1may send the identifier of the specified entry2activity (targetActivity2), the window mode3, and the identifier5to the activity manager at the application framework layer. The specified entry2activity (targetActivity2) is any entry activity other than the main entry activity, and the specified entry2activity (targetActivity2) is defined by the application1. The identifier of the specified entry2activity (targetActivity2) is used to represent the specified entry2activity (targetActivity2), and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. The identifier5is used to request/instruct/notify to establish or apply for establishing a new task stack (task stack3for short), and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. The task stack3is different from a task stack corresponding to the task corresponding to the application1in the task list. The window mode3is related to the type of the operation in operation S1. For example, when the operation in operation S1is a tap operation, the window mode3may be set to a floating window display mode. When the operation in operation S1is a drag operation, the window mode3may be set to a split-screen display mode. 
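For illustration only, the following is a minimal sketch of the preset-condition check described for operation S82, written from the perspective of the application1itself on a standard Android framework. The maximum value, the choice of which instance to close, and the class and method names are assumptions; a real implementation would follow the application's own configuration and task stack records.

import android.app.ActivityManager;
import android.content.Context;
import java.util.List;

public final class InstanceLimiter {
    private static final int MAX_INSTANCES = 4; // preset maximum value (assumed constant)

    /** Ensures that one more task stack may be created, closing an existing instance if needed. */
    public static boolean allowNewInstance(Context context) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        // getAppTasks returns only the caller's own tasks, i.e. the current task stacks of this application.
        List<ActivityManager.AppTask> tasks = am.getAppTasks();
        if (tasks.size() + 1 <= MAX_INSTANCES) {
            return true;
        }
        if (!tasks.isEmpty()) {
            // Close one instance so that the quantity of task stacks plus one no longer exceeds
            // the preset maximum value. Treating the last list entry as the instance started first
            // is an assumption; a real implementation would inspect AppTask.getTaskInfo().
            tasks.get(tasks.size() - 1).finishAndRemoveTask();
            return true;
        }
        return false;
    }
}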
It should be noted that the application1may add the identifier of the specified entry2activity (targetActivity2), the window mode3, and the identifier5to the tenth message, and send the tenth message to the activity manager. An embodiment of the tenth message is not limited in an embodiment of the application. S83. The activity manager creates a task stack3based on the identifier of the specified entry2activity (targetActivity2), the identifier5, and the window mode3. The task stack3is different from all the existing task stacks corresponding to the application1in the task list. S84. The activity manager sends the identifier of the specified entry2activity (targetActivity2), the window mode3, and an identifier of the task stack3to the multi-window task manager. The identifier of the task stack3is used to represent the task stack3, and may be represented in at least one manner such as a letter, a character string, a digit, or a text. This is not limited in an embodiment of the application. It should be noted that the activity manager may add the identifier of the specified entry2activity (targetActivity2), the window mode3, and the identifier of the task stack3to a message6(that is, an eleventh message), and send the message6to the multi-window task manager. An embodiment of the message6is not limited in an embodiment of the application. S85. The multi-window task manager calculates a window coordinate3based on the window mode3, and starts the specified entry2activity (targetActivity2) in the task stack3based on the identifier of the specified entry2activity (targetActivity2) and the identifier of the task stack3. Based on the descriptions of operations S81to S85, the electronic device may start an instance of the application1in the task stack3, and display the instance of the application1based on a window form corresponding to the window coordinate3. A specific representation manner of the window coordinate3is not limited in an embodiment of the application. In addition, the window coordinate1, the window coordinate2, and the window coordinate3may be the same or different. This is not limited in an embodiment of the application. In addition, the instance of the application1mentioned herein is an instance of the specified entry2activity corresponding to a page displayed when the application1is started; and a name of the specified entry2activity may be the same as or different from names of all the existing entry activities corresponding to the application1in the task list. When the names are the same, the IDs of the instances of the entry activities are different. For example, based on the foregoing description, the electronic device may run the instance1and the instance2of the email application mentioned in the embodiments inFIG.3AtoFIG.3S, or the instance1, the instance2, and the instance3of the email application, or the instance1and the instance4of the email application, or the instance2and the instance4of the email application. Activities corresponding to pages displayed when the instance1, the instance2, the instance3, and the instance4are started are all specified entry2activities. In addition, pages displayed when the instance1, the instance2, the instance3, and the instance4are started may include the same or different content. In conclusion, because the task corresponding to the instance of the application1exists in the task list, it may indicate that the electronic device has run at least one instance of the application1.
Therefore, the electronic device may run two or more instances of the application1, so that the electronic device implements a plurality of instances of the application1. It should be noted that in the foregoing descriptions, for descriptions of implementing a plurality of instances of the application1by the electronic device, reference may be made to descriptions of implementing a plurality of instances of the email application by the tablet computer in the window combination forms such as “split screen+floating window”, “full screen+floating window”, “floating window+floating window”, “left split screen+right split screen”, and “left split screen+right split screen+floating window” inFIG.3AtoFIG.3P. Details are not described herein again. In conclusion, for any application that is installed in the electronic device and that supports the multi-instance feature, based on configurations of the application layer and the application framework layer in the software system of the electronic device, the electronic device that supports a floating window and split-screen display may have a single-application multi-instance feature. In this way, the Dock triggers any application to start the multi-instance feature of the application, and the electronic device may simultaneously display a plurality of windows of the application on a display in a plurality of window combination forms, so that multi-instance coordination processing and operation may be performed on a same application, thereby improving speed and efficiency of running the same application by the electronic device, maximizing continuation of an operating habit of a user on a PC, improving office efficiency and working efficiency of the user, and bringing experience closer to that on a desktop-level operating system to the user. Based on the foregoing description, an embodiment of this application may provide an application starting method. The application starting method is applied to an electronic device, the electronic device includes a first application, a second application, a first software module, a second software module, and a third software module, and the first application is different from the second application. For an embodiment of the first application, refer to the foregoing description of the Dock; for an embodiment of the second application, refer to the foregoing description of the third-party application (such as the email application and the application1); for an embodiment of the first software module, refer to the foregoing description of the package manager; for an embodiment of the second software module, refer to the foregoing description of the activity manager; and for an embodiment of the third software module, refer to the foregoing description of the multi-window task manager (HwMultiWindowManger). Details are not described herein again. In some embodiments, the application starting method in an embodiment of the application may include operation S101to operation S108. S101. An electronic device displays a first interface of a first application, where the first interface includes an icon of a second application. S102. In response to the first operation performed on the icon of the second application, the first application sends a first message to a first software module, and the first application sends a second message to a second software module. S103. 
In response to receiving the first message, the first software module sends a third message to the first application, where the third message carries a first configuration item of the second application and a starting mode of a main entry activity of the second application. S104. In response to receiving the second message, the second software module sends a fourth message to the first application, where the fourth message carries a task list, and the task list is used to store a task corresponding to an application in the electronic device. S105. When determining that a task corresponding to a first instance of the second application exists in the task list, the first configuration item indicates that the second application supports a multi-instance feature, the second application supports starting of the multi-instance feature by using the main entry activity, and a starting mode of the main entry activity is a first mode, sending, by the first application, a fifth message to the second software module, where the fifth message carries an identifier of the main entry activity and a first identifier, and the first identifier is used to request to establish a first task stack. S106. In response to receiving the fifth message, the second software module establishes a first task stack based on the identifier of the main entry activity and the first identifier, where the first task stack is different from a task stack corresponding to the second application in the task list. S107: The second software module sends a sixth message to a third software module, where the sixth message carries the identifier of the main entry activity and an identifier of the first task stack. S108: In response to receiving the sixth message, the third software module starts the main entry activity in the first task stack based on the identifier of the main entry activity and the identifier of the first task stack, to run a second instance of the second application. 
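For illustration only, the following is a minimal sketch of the branch selection performed by the first application once the task list (the fourth message) and the configuration of the second application (the third message) are available. It assumes the MultiInstanceConfig type from the earlier sketch; the enum names are placeholders, with FIRST_MODE standing for standard/singleTop and SECOND_MODE for singleTask, as noted later in this description.

public final class LaunchBranchSelector {
    public enum Branch { START_NORMALLY, SHIELD_OPERATION, MAIN_ENTRY_NEW_TASK,
                         TARGET_ACTIVITY_NEW_TASK, DELEGATE_TO_APPLICATION }
    public enum LaunchMode { FIRST_MODE /* standard or singleTop */, SECOND_MODE /* singleTask */ }

    public static Branch select(boolean taskExists,
                                MultiInstanceConfigReader.MultiInstanceConfig config,
                                LaunchMode mainEntryLaunchMode) {
        if (!taskExists) {
            return Branch.START_NORMALLY;                 // the plain start (operation S4)
        }
        if (!config.supportsMultiInstance || !config.supportsMainEntryStart) {
            return Branch.SHIELD_OPERATION;               // do not start (operation S5)
        }
        if (mainEntryLaunchMode == LaunchMode.FIRST_MODE) {
            return Branch.MAIN_ENTRY_NEW_TASK;            // operations S101 to S108
        }
        if (config.targetActivity != null) {
            return Branch.TARGET_ACTIVITY_NEW_TASK;       // operations S201 to S208
        }
        return Branch.DELEGATE_TO_APPLICATION;            // operations S301 to S309
    }
}

The three non-trivial branches correspond to the main-entry path, the specified-entry path, and the path in which the second application itself chooses the entry activity, respectively.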
For an embodiment of operation S101, refer to the description of the embodiment inFIG.3D; for an embodiment of the first interface, refer to the description of the window201in the embodiment inFIG.3D; for an embodiment of the icon of the second application, refer to the description of the icon201aof the email application in the embodiment inFIG.3D; for an embodiment of operation S102, refer to the description of operation S21and operation S22in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S103, refer to the description of operation S22in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S104, refer to the description of operation S21in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S105, refer to the description of operation S3and operation S61in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S106, refer to the description of operation S62in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S107, refer to the description of operation S63in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S108, refer to the description of operation S64in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first operation, refer to the description of the operation performed by the user on the icon of the application1in operation S1in the embodiment inFIG.4AtoFIG.4C; for a task list, refer to the description of the operation of the task list in operation S21in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first configuration item, refer to descriptions of the identifier1and the identifier2in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the main entry activity, refer to the description of the main entry activity in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first identifier, refer to the description of the starting parameter1in the embodiment inFIG.4AtoFIG.4C; and for an embodiment of the first task stack, refer to the description of the task stack1in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. In an embodiment of the application, the multi-instance feature of the second application can be triggered by the first application, and one instance of the second application is started each time by using the main entry activity of the second application, so that the electronic device implements two or more instances of the second application, thereby improving speed and efficiency of running a same application by the electronic device, maximizing continuation of an operating habit of a user on a PC, improving office efficiency and working efficiency of the user, and bringing experience closer to that on a desktop-level operating system to the user. 
In some embodiments, the method includes: the first application sends the fifth message to the second software module, where the fifth message carries the identifier of the main entry activity, the first identifier, and a first window mode, and the first window mode is related to a type of the first operation; in response to receiving the fifth message, the second software module establishes the first task stack based on the identifier of the main entry activity, the first identifier, and the first window mode; the second software module sends the sixth message to the third software module, where the sixth message carries the identifier of the main entry activity, the identifier of the first task stack, and the first window mode; and in response to receiving the sixth message, the third software module starts the main entry activity in the first task stack based on the identifier of the main entry activity, the identifier of the first task stack, and the first window mode, where a page display manner of the main entry activity is related to the first window mode. For an embodiment of the foregoing solution, refer to descriptions of operation S61to operation S64in the embodiment inFIG.4AtoFIG.4C; and for an embodiment of the first window mode, refer to the description of the window mode1in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. In some other embodiments, the application starting method in an embodiment of the application may include operation S201to operation S208. S201. An electronic device displays a first interface of a first application, where the first interface includes an icon of a second application. S202. In response to receiving a first operation performed on the icon of the second application, the first application sends a first message to a first software module, and the first application sends a second message to a second software module. S203. In response to receiving the first message, the first software module sends a third message to the first application, where the third message carries a first configuration item of the second application and a starting mode of a main entry activity of the second application. S204. In response to receiving the second message, the second software module sends a fourth message to the first application, where the fourth message carries a task list, and the task list is used to store a task corresponding to an application in the electronic device. S205. When determining that a task corresponding to a first instance of the second application exists in the task list, the first configuration item indicates that the second application supports a multi-instance feature, the second application supports starting of the multi-instance feature by using the main entry activity, a starting mode of the main entry activity is a second mode, and the third message further carries an identifier of a first entry activity, the first application sends a seventh message to the second software module, where the seventh message carries the identifier of the first entry activity and a second identifier, the first mode is different from the second mode, the first entry activity is an entry activity other than the main entry activity, and the second identifier is used to request to establish a second task stack. S206. 
In response to receiving the seventh message, the second software module establishes a second task stack based on the identifier of the first entry activity and the second identifier, where the second task stack is different from a task stack corresponding to the second application in the task list. S207. The second software module sends an eighth message to the third software module, where the eighth message carries the identifier of the first entry activity and an identifier of the second task stack. S208. In response to receiving the eighth message, the third software module starts the first entry activity in the second task stack based on the identifier of the first entry activity and the identifier of the second task stack, to run a second instance of the second application. For an embodiment of operation S201, refer to the description of the embodiment inFIG.3D; for an embodiment of the first interface, refer to the description of the window201in the embodiment inFIG.3D; for an embodiment of the icon of the second application, refer to the description of the icon201aof the email application in the embodiment inFIG.3D; for an embodiment of operation S202, refer to the description of operation S21and operation S22in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S203, refer to the description of operation S22in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S204, refer to the description of operation S21in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S205, refer to the description of operation S5and operation S71in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S206, refer to the description of operation S72in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S207, refer to the description of operation S73in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S208, refer to the description of operation S74in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first operation, refer to the description of the operation performed by the user on the icon of the application1in operation S1in the embodiment inFIG.4AtoFIG.4C; for a task list, refer to the description of the operation of the task list in operation S21in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first configuration item, refer to descriptions of the identifier1and the identifier2in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first entry activity, refer to the description of the specified entry activity1in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the second identifier, refer to the description of the starting parameter2in the embodiment inFIG.4AtoFIG.4C; and for an embodiment of the second task stack, refer to the description of the task stack2in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. In an embodiment of the application, the multi-instance feature of the second application can be triggered by the first application, and one instance of the second application is started each time by using the first entry activity of the second application, so that the electronic device implements two or more instances of the second application, thereby improving speed and efficiency of running a same application by the electronic device, maximizing continuation of an operating habit of a user on a PC, improving office efficiency and working efficiency of the user, and bringing experience closer to that on a desktop-level operating system to the user. 
In some embodiments, the method specifically includes: the first application sends the seventh message to the second software module, where the seventh message carries the identifier of the first entry activity, the second identifier, and a second window mode, and the second window mode is related to a type of the first operation; in response to receiving the seventh message, the second software module establishes the second task stack based on the identifier of the first entry activity, the second identifier, and the second window mode; the second software module sends the eighth message to the third software module, where the eighth message carries the identifier of the first entry activity, an identifier of the second task stack, and the second window mode; and in response to receiving the eighth message, the third software module starts the first entry activity in the second task stack based on the identifier of the first entry activity, the identifier of the second task stack, and the second window mode, where a page display manner of the first entry activity is related to the second window mode. For an embodiment of the foregoing solution, refer to descriptions of operation S71to operation S74in the embodiment inFIG.4AtoFIG.4C; and for an embodiment of the second window mode, refer to the description of the window mode2in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. In some other embodiments, the application starting method in an embodiment of the application may include operation S301to operation S309. S301. An electronic device displays a first interface of a first application, where the first interface includes an icon of a second application. S302. In response to the first operation performed on the icon of the second application, the first application sends a first message to a first software module, and the first application sends a second message to a second software module. S303. In response to receiving the first message, the first software module sends a third message to the first application, where the third message carries a first configuration item of the second application and a starting mode of a main entry activity of the second application. S304. In response to receiving the second message, the second software module sends a fourth message to the first application, where the fourth message carries a task list, and the task list is used to store a task corresponding to an application in the electronic device. S305. When determining that a task corresponding to a first instance of the second application exists in the task list, the first configuration item indicates that the second application supports the multi-instance feature, the second application supports starting of the multi-instance feature by using the main entry activity, a starting mode of the main entry activity is a second mode, and the third message does not carry an identifier of the first entry activity, the first application sends a ninth message to the second application, where the ninth message carries a third identifier, the first mode is different from the second mode, the first entry activity is an entry activity other than the main entry activity, and the third identifier is used to request the second application to start a new instance of the second application. S306.
In response to receiving the ninth message, the second application sends a tenth message to the second software module, where the tenth message carries an identifier of a second entry activity and a fourth identifier, the second entry activity is any entry activity, and the fourth identifier is used to request to establish a third task stack. S307. In response to receiving the tenth message, the second software module establishes the third task stack based on the identifier of the second entry activity and the fourth identifier, where the third task stack is different from a task stack corresponding to the second application in the task list. S308: The second software module sends an eleventh message to the third software module, where the eleventh message carries the identifier of the second entry activity and an identifier of the third task stack. S309. In response to receiving the eleventh message, the third software module starts the second entry activity in the third task stack based on the identifier of the second entry activity and the identifier of the third task stack, to run a second instance of the second application. For an embodiment of operation S301, refer to the description of the embodiment inFIG.3D; for an embodiment of the first interface, refer to the description of the window201in the embodiment inFIG.3D; for an embodiment of the icon of the second application, refer to the description of the icon201aof the email application in the embodiment inFIG.3D; for an embodiment of operation S302, refer to the description of operation S21and operation S22in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S303, refer to the description of operation S22in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S304, refer to the description of operation S21in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S305, refer to the description of operation S5and operation S81in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S306, refer to the description of operation S82in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S307, refer to the description of operation S83in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S308, refer to the description of operation S84in the embodiment inFIG.4AtoFIG.4C; for an embodiment of operation S309, refer to the description of operation S85in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first operation, refer to the description of the operation performed by the user on the icon of the application1in operation S1in the embodiment inFIG.4AtoFIG.4C; for the task list, refer to the description of the operation of the task list in operation S21in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the first configuration item, refer to the descriptions of the identifier1and the identifier2in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the second entry activity, refer to the description of the specified entry activity2in the embodiment inFIG.4AtoFIG.4C; for an embodiment of the third identifier and the fourth identifier, refer to the descriptions of the identifier5in the embodiment inFIG.4AtoFIG.4C; and for an embodiment of the third task stack, refer to the description of the task stack3in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. 
In an embodiment of the application, the multi-instance feature of the second application can be triggered by the first application, and one instance of the second application is started each time by using the second entry activity specified by the second application, so that the electronic device implements two or more instances of the second application, thereby improving speed and efficiency of running a same application by the electronic device, maximizing continuation of an operating habit of a user on a PC, improving office efficiency and working efficiency of the user, and bringing experience closer to that on a desktop-level operating system to the user. In some embodiments, the method includes: the first application sends the ninth message to the second application, where the ninth message carries the third identifier and a third window mode, and the third window mode is related to a type of the first operation; in response to receiving the ninth message, the second application sends the tenth message to the second software module, where the tenth message carries the identifier of the second entry activity, the fourth identifier, and the third window mode; in response to receiving the tenth message, the second software module establishes the third task stack based on the identifier of the second entry activity, the fourth identifier, and the third window mode; the second software module sends the eleventh message to the third software module, where the eleventh message carries the identifier of the second entry activity, the identifier of the third task stack, and the third window mode; and in response to receiving the eleventh message, the third software module starts the second entry activity in the third task stack based on the identifier of the second entry activity, the identifier of the third task stack, and the third window mode, where a page display manner of the second entry activity is related to the third window mode. For an embodiment of the foregoing solution, refer to descriptions of operation S81to operation S85in the embodiment inFIG.4AtoFIG.4C; and for an embodiment of the third window mode, refer to the description of the window mode3in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the type of the first operation is a tap operation, the window mode is a page in a window form; or when the type of the first operation is a drag operation, the window mode is a page displayed on a full screen or displayed on a split screen. For a page displayed in a window form, refer to the foregoing descriptions of the window202, the window203, the window204, and the window205in the embodiments inFIG.3AtoFIG.3P; for a page displayed on a full screen, refer to the foregoing descriptions of the user interface13in the embodiments inFIG.3AtoFIG.3P; and for a page displayed on a split screen, refer to the foregoing descriptions of the user interface14aand the user interface14bin the embodiments inFIG.3AtoFIG.3P. Details are not described herein again. Therefore, manners in which the electronic device displays the second instance of the second application vary with different types of the first operation. 
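For illustration only, the following is a minimal sketch of the rule just described, mapping the type of the first operation to a window mode expressed as launch bounds. The fractions of the display used for the floating window and for the right split screen are arbitrary placeholders; the multi-window task manager described above computes the actual window coordinate.

import android.graphics.Rect;

public final class WindowModeMapper {
    public enum Operation { TAP, DRAG }

    /** Returns the launch bounds for the new instance; null means "fill the screen". */
    public static Rect boundsFor(Operation op, int displayWidth, int displayHeight,
                                 boolean splitScreen) {
        switch (op) {
            case TAP: {
                // Tap: floating window roughly centered, half the display size (placeholder values).
                int w = displayWidth / 2;
                int h = displayHeight / 2;
                int left = (displayWidth - w) / 2;
                int top = (displayHeight - h) / 2;
                return new Rect(left, top, left + w, top + h);
            }
            case DRAG:
                // Drag: right half of the display for split screen, or no bounds for full screen.
                return splitScreen
                        ? new Rect(displayWidth / 2, 0, displayWidth, displayHeight)
                        : null;
            default:
                return null;
        }
    }
}

A non-null Rect could then be passed to ActivityOptions.setLaunchBounds as in the earlier sketch, while a null return stands for the full-screen case.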
In this way, the electronic device may simultaneously display a plurality of windows of the second application on the display in a plurality of window combination forms, so that multi-instance coordination processing and operation may be performed on a same application in the plurality of windows, thereby improving speed and efficiency of running a same application by the electronic device, maximizing continuation of an operating habit of a user on a PC, improving office efficiency and working efficiency of the user, and bringing experience closer to that on a desktop-level operating system to the user. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the method includes: the electronic device displays, in a first area of the display, a page corresponding to the first instance of the second application in a window form, and displays, in a second area of the display, a page corresponding to the second instance of the second application in a window form. For the first area and the second area, refer to the descriptions of the area corresponding to the window202and the area corresponding to the window203in the embodiment inFIG.3G; and for an embodiment of the window form, refer to the description of displaying the instance1of the email application by using the floating window202and displaying the instance2of the email application by using the floating window203in the embodiment inFIG.3G. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the method includes: the electronic device displays a page corresponding to the first instance of the second application on a full screen, and displays, in a third area of the display, the page corresponding to the second instance of the second application in a window form. For the third area, refer to the description of the area corresponding to the window203in the embodiment inFIG.3H; for an embodiment of the window form, refer to the description of displaying the instance2of the email application in the floating window203in the embodiment inFIG.3H; and for an embodiment of full-screen display, refer to the description of displaying the instance1of the email application in the area corresponding to the user interface13in the embodiment inFIG.3H. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the method includes: the electronic device displays, in a fourth area of the display, a page corresponding to the first instance of the second application, and displays, in a fifth area of the display, a page corresponding to the second instance of the second application, where the fourth area and the fifth area do not overlap. 
For the fourth area and the fifth area, refer to the descriptions of the area corresponding to the user interface14aand the area corresponding to the user interface14bin the embodiment inFIG.3I; and for an embodiment of the foregoing solution, refer to the description of displaying the instance1of the email application in the area corresponding to the user interface14aand displaying the instance2of the email application in the area corresponding to the user interface14bin the embodiment inFIG.3I. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the method includes: the electronic device displays, in a sixth area of the display, a page corresponding to the first instance of the second application, displays, in a seventh area of the display, a page corresponding to the second instance of the second application, and displays, in an eighth area of the display, a page corresponding to the third instance of the second application in a window form, where the sixth area and the seventh area do not overlap, the eighth area partially overlaps an area that is jointly formed by the sixth area and the seventh area, and the third instance of the second application are different from both the first instance of the second application and the second instance of the second application. For the sixth area and the seventh area, refer to the descriptions of the area corresponding to the user interface14aand the area corresponding to the user interface14bin the embodiment inFIG.3K; for the eighth area, refer to the description of the area corresponding to the window204in the embodiment inFIG.3K; and for an embodiment of the foregoing solution, refer to the description of displaying the instance1of the email application in the area corresponding to the user interface14a, displaying the instance2of the email application in the area corresponding to the user interface14b, and displaying the instance3of the email application in the floating window in the area corresponding to the window204in the embodiment inFIG.3K. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the method includes: the electronic device displays, in a ninth area of the display, a page corresponding to the first instance of the second application, displays, in a tenth area of the display, a page corresponding to a third application, and displays, in an eleventh area of the display, a page corresponding to the second instance of the second application in a window form, where the ninth area and the tenth area do not overlap, and the third application is different from both the first application and the second application. 
For an embodiment of the third application, refer to the description of the computer application in the embodiment inFIG.3P; for the ninth area and the tenth area, refer to the descriptions of the area corresponding to the user interface15aand the area corresponding to the user interface15bin the embodiment inFIG.3P; for the eleventh area, refer to the description of the area corresponding to the window205in the embodiment inFIG.3P; and for an embodiment of the foregoing solution, refer to the description of displaying the instance1of the email application in the area corresponding to the user interface15a, displaying the computer application in the area corresponding to the user interface15b, and displaying instance4of the email application in the floating window in the area corresponding to the window205in the embodiment inFIG.3P. Details are not described herein again. In this way, the electronic device may implement a plurality of instances of the second application in window combination forms such as “split screen+floating window”, “full screen+floating window”, “floating window+floating window”, “left split screen+right split screen”, and “left split screen+right split screen+floating window”. This provides rich window forms for the user, and meets various use requirements of the user. Based on the descriptions of operations S101to S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, when the electronic device displays a page corresponding to the second instance of the second application in the window form, the page includes a first control and a second control, and the first control is different from the second control. The method further includes: when receiving a second operation performed on the first control, the electronic device displays, on a full screen in response to the second operation, the page corresponding to the second instance of the second application; and when receiving a third operation performed on the second control, the electronic device closes, in response to the third operation, the page corresponding to the second instance of the For an embodiment of the first control, refer to the foregoing descriptions of the maximization control202ain the window202, the maximization control203ain the window203, the maximization control204ain the window204, and the maximization control205ain the window205in the embodiments inFIG.3AtoFIG.3P; and for an embodiment of the second control, refer to the foregoing descriptions of the closing control202cin the window202, the closing control203cin the window203, the closing control204cin the window204, and the closing control205cin the window205in the embodiments inFIG.3AtoFIG.3P. Details are not described herein again. Therefore, when a page corresponding to any instance of the second application is displayed in a floating manner, based on a user's intention, the electronic device can not only quickly close the page corresponding to the any instance of the second application, but also switch to a full screen to display the page corresponding to the any instance of the second application. Based on the descriptions of operations S101to S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, the first interface is located in a side area of the display. For an embodiment of the first interface, refer to the foregoing description of the window201in the embodiments inFIG.3AtoFIG.3P. Details are not described herein again. 
In this way, the first application can be quickly started, and the electronic device can easily implement a new instance of the second application from the first interface of the first application, without blocking page content currently displayed by the electronic device. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, the method further includes: when determining that the task corresponding to the first instance of the second application exists in the task list, and the first configuration item indicates that the second application does not support the multi-instance feature, the first application shields the first operation; or when determining that the task corresponding to the first instance of the second application exists in the task list, the first configuration item indicates that the second application supports the multi-instance feature, and the second application does not support starting of the multi-instance feature by using the main entry activity, the first application shields the first operation. For an embodiment of the foregoing solution, refer to the description of operation S5in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, the method further includes: the first application starts the second application when the task corresponding to the instance of the second application does not exist in the task list. For an embodiment of the foregoing solution, refer to the description of operation S4in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, the first application is a sidebar application Dock, the second application is any application program other than the sidebar application Dock, the first software module is a package manager, the second software module is an activity manager, the third software module is a multi-window task manager, the first mode is standard or singleTop, and the second mode is singleTask. Based on the descriptions of operation S101to operation S108, operation S201to operation S208, and operation S301to operation S309, in some embodiments, the first software module is further configured to send a twelfth message to the second application, where the twelfth message is used to request a configuration file of the second application; and the second application is further configured to: in response to receiving the twelfth message, send a thirteenth message to the first software module, where the thirteenth message carries the first configuration item of the second application and the starting mode of the main entry activity of the second application. For an embodiment of the configuration file of the second application, refer to the foregoing configuration file AndroidManifest.xml. Details are not described herein again. In addition, the first software module includes but is not limited to obtaining the first configuration item of the second application and the starting mode of the main entry activity of the second application in the foregoing manner. For an embodiment, refer to the description in operation S3in the embodiment inFIG.4AtoFIG.4C. Details are not described herein again. 
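The gating logic described above can be summarized in a short, hypothetical sketch. The names used here (AppConfig, should_shield_operation, and the task-list representation) are illustrative stand-ins for the first application's internal checks and are not APIs defined by this disclosure:

```python
from dataclasses import dataclass
from enum import Enum


class LaunchMode(Enum):
    STANDARD = "standard"
    SINGLE_TOP = "singleTop"
    SINGLE_TASK = "singleTask"


@dataclass
class AppConfig:
    # Values that would be read from the second application's configuration file
    # (e.g., an AndroidManifest.xml-style declaration); field names are illustrative.
    supports_multi_instance: bool
    multi_instance_via_main_entry: bool
    main_entry_launch_mode: LaunchMode


def should_shield_operation(config: AppConfig, task_list: set[str], package: str) -> bool:
    """Return True if the first application should ignore (shield) the user's operation."""
    if package not in task_list:
        # No instance is running yet: the application is simply started, nothing is shielded.
        return False
    if not config.supports_multi_instance:
        # A task already exists and the application cannot be multi-instanced: shield.
        return True
    if not config.multi_instance_via_main_entry:
        # Multi-instance is supported, but not by starting the main entry activity: shield.
        return True
    return False
```

For example, an application whose configuration declares multi-instance support and allows that support to be used from the main entry activity would not be shielded, while an application that declares no multi-instance support would be.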
For example, an embodiment of this application provides an application starting apparatus, including: a display module, configured to display a first interface of a first application, where the first interface includes an icon of a second application, and the first application is different from the second application; the first application, configured to: in response to receiving a first operation performed on an icon of the second application, send a first message to a first software module, and send a second message to a second software module; the first software module, configured to: in response to receiving the first message, send a third message to the first application, where the third message carries a first configuration item of the second application and a starting mode of a main entry activity of the second application; the second software module, configured to: in response to receiving the second message, send a fourth message to the first application, where the fourth message carries a task list, and the task list is used to store a task corresponding to an application in the electronic device; the first application, configured to: when determining that a task corresponding to a first instance of the second application exists in the task list, the first configuration item indicates that the second application supports a multi-instance feature, the second application supports starting of the multi-instance feature by using the main entry activity, and a starting mode of the main entry activity is a first mode, send a fifth message to the second software module, where the fifth message carries an identifier of the main entry activity and a first identifier, and the first identifier is used to request to establish a first task stack; the second software module, further configured to: in response to receiving the fifth message, establish the first task stack based on the identifier of the main entry activity and the first identifier, where the first task stack is different from a task stack corresponding to the second application in the task list; the second software module, further configured to send a sixth message to a third software module, where the sixth message carries the identifier of the main entry activity and an identifier of the first task stack; and the third software module, further configured to: in response to receiving the sixth message, start the main entry activity in the first task stack based on the identifier of the main entry activity and the identifier of the first task stack, to run a second instance of the second application. 
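As a rough illustration of the message flow above, the following sketch models the second software module (activity manager) establishing a new task stack and the third software module (multi-window task manager) starting the main entry activity in it. All class and function names are hypothetical, and the window-mode mapping follows the tap/drag behavior described in the embodiments below:

```python
from dataclasses import dataclass, field
from itertools import count


@dataclass
class TaskStack:
    task_id: int
    root_activity: str
    window_mode: str
    running: list[str] = field(default_factory=list)


class ActivityManager:
    """Hypothetical stand-in for the second software module."""

    def __init__(self) -> None:
        self._ids = count(1)
        self.task_stacks: dict[int, TaskStack] = {}

    def create_task_stack(self, activity: str, window_mode: str) -> TaskStack:
        # Corresponds to handling the fifth message: establish a task stack that is
        # distinct from any existing task stack of the application.
        stack = TaskStack(task_id=next(self._ids), root_activity=activity, window_mode=window_mode)
        self.task_stacks[stack.task_id] = stack
        return stack


class MultiWindowTaskManager:
    """Hypothetical stand-in for the third software module."""

    def start_in_stack(self, stack: TaskStack, activity: str) -> None:
        # Corresponds to handling the sixth message: start the main entry activity in
        # the new task stack, yielding a second instance of the application.
        stack.running.append(activity)


def window_mode_for(operation_type: str) -> str:
    # Tap opens the new instance in a floating window; drag opens it full screen or split screen.
    return "floating_window" if operation_type == "tap" else "full_screen_or_split_screen"


def launch_second_instance(am: ActivityManager, wm: MultiWindowTaskManager,
                           main_entry_activity: str, operation_type: str) -> TaskStack:
    stack = am.create_task_stack(main_entry_activity, window_mode_for(operation_type))
    wm.start_in_stack(stack, main_entry_activity)
    return stack


if __name__ == "__main__":
    am, wm = ActivityManager(), MultiWindowTaskManager()
    print(launch_second_instance(am, wm, "com.example.email/.MainActivity", "tap"))
```

The sketch deliberately models only the bookkeeping; on a real device the corresponding work would be done by the platform's activity and window managers rather than by plain Python objects.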
In some embodiments, the first application is configured to send the fifth message to the second software module, where the fifth message carries the identifier of the main entry activity, the first identifier, and a first window mode, and the first window mode is related to a type of the first operation; the second software module is configured to: in response to receiving the fifth message, establish the first task stack based on the identifier of the main entry activity, the first identifier, and the first window mode; the second software module is configured to send the sixth message to the third software module, where the sixth message carries the identifier of the main entry activity, the identifier of the first task stack, and the first window mode; and the third software module is configured to: in response to receiving the sixth message, start the main entry activity in the first task stack based on the identifier of the main entry activity, the identifier of the first task stack, and the first window mode, where a page display manner of the main entry activity is related to the first window mode. In some embodiments, when the type of the first operation is a tap operation, the window mode is displaying a page in a window form; or when the type of the first operation is a drag operation, the window mode is displaying a page on a full screen or a split screen. In some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the display module is configured to: display, in a first area of a display, a page corresponding to the first instance of the second application in a window form, and display, in a second area of the display, a page corresponding to the second instance of the second application in a window form; or the display module is configured to: display, on a full screen, a page corresponding to the first instance of the second application, and display, in a third area of the display, a page corresponding to the second instance of the second application in a window form; or the display module is configured to: display, in a fourth area of the display, a page corresponding to the first instance of the second application, and display, in a fifth area of the display, a page corresponding to the second instance of the second application, where the fourth area and the fifth area do not overlap; or the display module is configured to: display, in a sixth area of the display, a page corresponding to the first instance of the second application, display, in a seventh area of the display, a page corresponding to the second instance of the second application, and display, in an eighth area of the display, a page corresponding to the third instance of the second application in a window form, where the sixth area and the seventh area do not overlap, and the eighth area partially overlaps an area jointly formed by the sixth area and the seventh area, and the third instance of the second application is different from both the first instance of the second application and the second instance of the second application; or the display module is configured to: display, in a ninth area of the display, a page corresponding to the first instance of the second application, display, in a tenth area of the display, a page corresponding to a third application, and display, in an eleventh area of the display, a page corresponding to the second instance of the second application in a window form, where the ninth 
area and the tenth area do not overlap, and the third application is different from both the first application and the second application. In some embodiments, when the display apparatus displays the page corresponding to the second instance of the second application in the window form, the page includes a first control and a second control, and the first control is different from the second control; the display apparatus is further configured to: when receiving a second operation performed on the first control, display, on a full screen in response to the second operation, the page corresponding to the second instance of the second application; and the display apparatus is further configured to: when receiving a third operation performed on the second control, close, in response to the third operation, the page corresponding to the second instance of the second application. In some embodiments, the first interface is located in a side area of the display. The side area may be an area in which the window201in the embodiment inFIG.3Dis located, or may be another display area of the display of the electronic device, for example, an upper display area of the display of the electronic device, a lower display area of the display of the electronic device, or a left display area of the display of the electronic device. In some embodiments, the first application is further configured to: when determining that the task corresponding to the first instance of the second application exists in the task list, and the first configuration item indicates that the second application does not support the multi-instance feature, shield the first operation; or the first application is further configured to: when determining that the task corresponding to the first instance of the second application exists in the task list, the first configuration item indicates that the second application supports the multi-instance feature, and the second application does not support starting of the multi-instance feature by using the main entry activity, shield the first operation. In some embodiments, the first application is further configured to: when the task corresponding to the instance of the second application does not exist in the task list, start the second application. In some embodiments, the first application is a sidebar application Dock, the second application is any application program other than the sidebar application Dock, the first software module is a package manager, the second software module is an activity manager, the third software module is a multi-window task manager, the first mode is standard or singleTop, and the second mode is singleTask. In an embodiment, the first software module is further configured to send a twelfth message to the second application, where the twelfth message is used to request a configuration file of the second application; and the second application is further configured to: in response to receiving the twelfth message, send a thirteenth message to the first software module, where the thirteenth message carries the first configuration item of the second application and the starting mode of the main entry activity of the second application. The application starting apparatus in an embodiment of the application may be configured to perform the technical solutions of the electronic device in the foregoing application starting method embodiments. 
Implementation principles and technical effects of the application starting apparatus are similar to those in the foregoing application starting method embodiments. For operations implemented by the modules, reference may further be made to related descriptions in the method embodiments, and details are not described herein again. The module herein may alternatively be replaced with a component or a circuit. For example, an embodiment of this application provides an application starting apparatus, including: a display apparatus, configured to display a first interface of a first application, where the first interface includes an icon of a second application, and the first application is different from the second application; the first application, configured to: in response to receiving a first operation performed on an icon of the second application, send a first message to a first software module, and send a second message to a second software module; the first software module, configured to: in response to receiving the first message, send a third message to the first application, where the third message carries a first configuration item of the second application and a starting mode of a main entry activity of the second application; the second software module, configured to: in response to receiving the second message, send a fourth message to the first application, where the fourth message carries a task list, and the task list is used to store a task corresponding to an application in the electronic device; the first application, further configured to: when determining that a task corresponding to a first instance of the second application exists in the task list, the first configuration item indicates that the second application supports a multi-instance feature, the second application supports starting of the multi-instance feature by using the main entry activity, a starting mode of the main entry activity is a second mode, and the third message further carries an identifier of a first entry activity, send a seventh message to the second software module, where the seventh message carries the identifier of the first entry activity and a second identifier, the first mode is different from the second mode, the first entry activity is an entry activity other than the main entry activity, and the second identifier is used to request to establish a second task stack; the second software module, further configured to: in response to receiving the seventh message, establish the second task stack based on the identifier of the first entry activity and the second identifier, where the second task stack is different from a task stack corresponding to the second application in the task list; the second software module, further configured to send an eighth message to the third software module, where the eighth message carries the identifier of the first entry activity and an identifier of the second task stack; and the third software module, further configured to: in response to receiving the eighth message, start the first entry activity in the second task stack based on the identifier of the first entry activity and the identifier of the second task stack, to run a second instance of the second application. 
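The difference between this second variant and the first one is which activity roots the newly created task stack. A minimal sketch of that selection, under the assumption that the first mode corresponds to standard or singleTop, the second mode corresponds to singleTask, and the configuration may or may not declare an alternate (first) entry activity:

```python
from enum import Enum
from typing import Optional


class LaunchMode(str, Enum):
    STANDARD = "standard"
    SINGLE_TOP = "singleTop"
    SINGLE_TASK = "singleTask"


def select_root_activity(main_entry: str,
                         main_entry_mode: LaunchMode,
                         first_entry: Optional[str]) -> Optional[str]:
    """Choose the activity that should root the new task stack.

    Returns None when neither path applies, in which case the request is handed
    to the application itself (the third variant, described further below)."""
    if main_entry_mode in (LaunchMode.STANDARD, LaunchMode.SINGLE_TOP):
        return main_entry      # first variant: start the main entry activity in a new task stack
    if main_entry_mode is LaunchMode.SINGLE_TASK and first_entry is not None:
        return first_entry     # second variant: start the declared first entry activity instead
    return None                # third variant: delegate instance creation to the application
```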
In some embodiments, the first application is configured to send the seventh message to the second software module, where the seventh message carries the identifier of the first entry activity, the second identifier, and a second window mode, and the second window mode is related to a type of the first operation; the second software module is configured to: in response to receiving the seventh message, establish the second task stack based on the identifier of the first entry activity, the second identifier, and the second window mode; the second software module is configured to send the eighth message to the third software module, where the eighth message carries the identifier of the first ingress activity, an identifier of the second task stack, and the second window mode; and the third software module is configured to: in response to receiving the eighth message, start the first entry activity in the second task stack based on the identifier of the first entry activity, the identifier of the second task stack, and the second window mode, where a page display manner of the first entry activity is related to the second window mode. In an embodiment, when the type of the first operation is a tap operation, the window mode is displaying a page in a window form; or when the type of the first operation is a drag operation, the window mode is displaying a page on a full screen or a split screen. In some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the display module is configured to: display, in a first area of a display, a page corresponding to the first instance of the second application in a window form, and display, in a second area of the display, a page corresponding to the second instance of the second application in a window form; or the display module is configured to: display, on a full screen, a page corresponding to the first instance of the second application, and display, in a third area of the display, a page corresponding to the second instance of the second application in a window form; or the display module is configured to: display, in a fourth area of the display, a page corresponding to the first instance of the second application, and display, in a fifth area of the display, a page corresponding to the second instance of the second application, where the fourth area and the fifth area do not overlap; or the display module is configured to: display, in a sixth area of the display, a page corresponding to the first instance of the second application, display, in a seventh area of the display, a page corresponding to the second instance of the second application, and display, in an eighth area of the display, a page corresponding to the third instance of the second application in a window form, where the sixth area and the seventh area do not overlap, and the eighth area partially overlaps an area jointly formed by the sixth area and the seventh area, and the third instance of the second application is different from both the first instance of the second application and the second instance of the second application; or the display module is configured to: display, in a ninth area of the display, a page corresponding to the first instance of the second application, display, in a tenth area of the display, a page corresponding to a third application, and display, in an eleventh area of the display, a page corresponding to the second instance of the second application in a 
window form, where the ninth area and the tenth area do not overlap, and the third application is different from both the first application and the second application. In some embodiments, when the display apparatus displays the page corresponding to the second instance of the second application in the window form, the page includes a first control and a second control, and the first control is different from the second control; the display apparatus is further configured to: when receiving a second operation performed on the first control, display, on a full screen in response to the second operation, the page corresponding to the second instance of the second application; and the display apparatus is further configured to: when receiving a third operation performed on the second control, close, in response to the third operation, the page corresponding to the second instance of the second application. In some embodiments, the first interface is located in a side area of the display. In some embodiments, the first application is further configured to: when determining that the task corresponding to the first instance of the second application exists in the task list, and the first configuration item indicates that the second application does not support the multi-instance feature, shield the first operation; or the first application is further configured to: when determining that the task corresponding to the first instance of the second application exists in the task list, the first configuration item indicates that the second application supports the multi-instance feature, and the second application does not support starting of the multi-instance feature by using the main entry activity, shield the first operation. In some embodiments, the first application is further configured to: when the task corresponding to the instance of the second application does not exist in the task list, start the second application. In some embodiments, the first application is a sidebar application Dock, the second application is any application program other than the sidebar application Dock, the first software module is a package manager, the second software module is an activity manager, the third software module is a multi-window task manager, the first mode is standard or singleTop, and the second mode is singleTask. In an embodiment, the first software module is further configured to send a twelfth message to the second application, where the twelfth message is used to request a configuration file of the second application; and the second application is further configured to: in response to receiving the twelfth message, send a thirteenth message to the first software module, where the thirteenth message carries the first configuration item of the second application and the starting mode of the main entry activity of the second application. The application starting apparatus in an embodiment of the application may be configured to perform the technical solutions of the electronic device in the foregoing application starting method embodiments. Implementation principles and technical effects of the application starting apparatus are similar to those in the foregoing application starting method embodiments. For operations implemented by the modules, reference may further be made to related descriptions in the method embodiments, and details are not described herein again. The module herein may alternatively be replaced with a component or a circuit. 
For example, an embodiment of this application provides an application starting apparatus, including: a display apparatus, configured to display a first interface of a first application, where the first interface includes an icon of a second application, and the first application is different from the second application; the first application, configured to: in response to receiving a first operation performed on an icon of the second application, send a first message to a first software module, and send a second message to a second software module; the first software module, configured to: in response to receiving the first message, send a third message to the first application, where the third message carries a first configuration item of the second application and a starting mode of a main entry activity of the second application; the second software module, configured to: in response to receiving the second message, send a fourth message to the first application, where the fourth message carries a task list, and the task list is used to store a task corresponding to an application in the electronic device; the first application, further configured to: when determining that a task corresponding to a first instance of the second application exists in the task list, the first configuration item indicates that the second application supports the multi-instance feature, the second application supports starting of the multi-instance feature by using the main entry activity, a starting mode of the main entry activity is a second mode, and the third message does not carry an identifier of the first entry activity, send a ninth message to the second application, where the ninth message carries a third identifier, the first mode is different from the second mode, the first entry activity is an entry activity other than the main entry activity, and the third identifier is used to request the second application to start a new instance of the second application; the second application, configured to: in response to receiving the ninth message, send a tenth message to the second software module, where the tenth message carries an identifier of a second entry activity and a fourth identifier, the second entry activity is any entry activity, and the fourth identifier is used to request to establish a third task stack; the second software module, further configured to: in response to receiving the tenth message, establish the third task stack based on the identifier of the second entry activity and the fourth identifier, where the third task stack is different from a task stack corresponding to the second application in the task list; the second software module, further configured to send an eleventh message to the third software module, where the eleventh message carries the identifier of the second entry activity and an identifier of the third task stack; and the third software module, configured to: in response to receiving the eleventh message, start the second entry activity in the third task stack based on the identifier of the second entry activity and the identifier of the third task stack, to run a second instance of the second application. 
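In this third variant the first application does not pick an entry activity at all; it asks the second application to create a new instance of itself. A hypothetical sketch of that hand-off (the request payload, class names, and handler are illustrative only):

```python
from dataclasses import dataclass
from typing import Protocol


class TaskStackFactory(Protocol):
    # Anything that can establish a new task stack (e.g., the activity manager sketch above).
    def create_task_stack(self, activity: str, window_mode: str) -> object: ...


@dataclass
class NewInstanceRequest:
    # Hypothetical payload of the ninth message: a flag asking the application to
    # create a new instance of itself, plus the window mode derived from the user's
    # tap or drag operation.
    create_new_instance: bool
    window_mode: str


class TargetApplication:
    """Toy model of the second application handling the delegated request."""

    def __init__(self, entry_activities: list[str]) -> None:
        self.entry_activities = entry_activities

    def handle(self, request: NewInstanceRequest, activity_manager: TaskStackFactory) -> None:
        if not request.create_new_instance:
            return
        # The application picks any of its entry activities (the "second entry activity")
        # and asks the activity manager for a fresh task stack (the tenth message); the
        # multi-window task manager then starts the activity in that stack.
        chosen = self.entry_activities[0]
        activity_manager.create_task_stack(chosen, request.window_mode)
```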
In some embodiments, the first application is configured to send the ninth message to the second application, where the ninth message carries the third identifier and a third window mode, and the third window mode is related to a type of the first operation; the second application is configured to: in response to receiving the ninth message, send the tenth message to the second software module, where the tenth message carries the identifier of the second entry activity, the fourth identifier, and the third window mode; the second software module is configured to: in response to receiving the tenth message, establish the third task stack based on the identifier of the second entry activity, the fourth identifier, and the third window mode; the second software module is configured to send the eleventh message to the third software module, where the eleventh message carries the identifier of the second entry activity, the identifier of the third task stack, and the third window mode; and the third software module is configured to: in response to receiving the eleventh message, start the second entry activity in the third task stack based on the identifier of the second entry activity, the identifier of the third task stack, and the third window mode, where a page display manner of the second entry activity is related to the third window mode. In some embodiments, when the type of the first operation is a tap operation, the window mode is displaying a page in a window form; or when the type of the first operation is a drag operation, the window mode is displaying a page on a full screen or a split screen. In some embodiments, when the first instance of the second application and the second instance of the second application are run in the electronic device, the display module is configured to: display, in a first area of a display, a page corresponding to the first instance of the second application in a window form, and display, in a second area of the display, a page corresponding to the second instance of the second application in a window form; or the display module is configured to: display, on a full screen, a page corresponding to the first instance of the second application, and display, in a third area of the display, a page corresponding to the second instance of the second application in a window form; or the display module is configured to: display, in a fourth area of the display, a page corresponding to the first instance of the second application, and display, in a fifth area of the display, a page corresponding to the second instance of the second application, where the fourth area and the fifth area do not overlap; or the display module is configured to: display, in a sixth area of the display, a page corresponding to the first instance of the second application, display, in a seventh area of the display, a page corresponding to the second instance of the second application, and display, in an eighth area of the display, a page corresponding to the third instance of the second application in a window form, where the sixth area and the seventh area do not overlap, and the eighth area partially overlaps an area jointly formed by the sixth area and the seventh area, and the third instance of the second application is different from both the first instance of the second application and the second instance of the second application; or the display module is configured to: display, in a ninth area of the display, a page corresponding to the first instance of the second application, 
display, in a tenth area of the display, a page corresponding to a third application, and display, in an eleventh area of the display, a page corresponding to the second instance of the second application in a window form, where the ninth area and the tenth area do not overlap, and the third application is different from both the first application and the second application. In some embodiments, when the display apparatus displays the page corresponding to the second instance of the second application in the window form, the page includes a first control and a second control, and the first control is different from the second control; the display apparatus is further configured to: when receiving a second operation performed on the first control, display, on a full screen in response to the second operation, the page corresponding to the second instance of the second application; and the display apparatus is further configured to: when receiving a third operation performed on the second control, close, in response to the third operation, the page corresponding to the second instance of the second application. In some embodiments, the first interface is located in a side area of the display. In some embodiments, the first application is further configured to: when determining that the task corresponding to the first instance of the second application exists in the task list, and the first configuration item indicates that the second application does not support the multi-instance feature, shield the first operation; or the first application is further configured to: when determining that the task corresponding to the first instance of the second application exists in the task list, the first configuration item indicates that the second application supports the multi-instance feature, and the second application does not support starting of the multi-instance feature by using the main entry activity, shield the first operation. In some embodiments, the first application is further configured to: when the task corresponding to the instance of the second application does not exist in the task list, start the second application. In some embodiments, the first application is a sidebar application Dock, the second application is any application program other than the sidebar application Dock, the first software module is a package manager, the second software module is an activity manager, the third software module is a multi-window task manager, the first mode is standard or singleTop, and the second mode is singleTask. In an embodiment, the first software module is further configured to send a twelfth message to the second application, where the twelfth message is used to request a configuration file of the second application; and the second application is further configured to: in response to receiving the twelfth message, send a thirteenth message to the first software module, where the thirteenth message carries the first configuration item of the second application and the starting mode of the main entry activity of the second application. The application starting apparatus in an embodiment of the application may be configured to perform the technical solutions of the electronic device in the foregoing application starting method embodiments. Implementation principles and technical effects of the application starting apparatus are similar to those in the foregoing application starting method embodiments. 
For operations implemented by the modules, reference may further be made to related descriptions in the method embodiments, and details are not described herein again. The module herein may alternatively be replaced with a component or a circuit. For example, this application provides an electronic device, including a memory and a processor, where the memory is configured to store program instructions; and the processor is configured to invoke the program instructions in the memory, so that the electronic device performs the application starting methods in the foregoing embodiments. For example, this application provides a chip system. The chip system is applied to an electronic device including a memory, a display, and a sensor. The chip system includes a processor. When the processor executes a computer instruction stored in the memory, the electronic device performs the application starting methods in the foregoing embodiment. For example, this application provides a computer readable storage medium storing a computer program, where when the computer program is executed by a processor, an electronic device is enabled to implement the application starting method according to the foregoing embodiments. For example, this application provides a computer program product, including an executable instruction, where the executable instruction is stored in a readable storage medium; at least one processor of the electronic device may read the executable instruction from the readable storage medium; and the at least one processor executes the executable instruction, so that the electronic device implements the application starting methods in the foregoing embodiments. In the foregoing embodiments, all or some of the functions may be implemented by using software, hardware, or a combination of software and hardware. When software is used to implement the embodiments, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or some of the procedures or functions according to embodiments of this application are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses. The computer instructions may be stored in a computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like. One of ordinary skilled in the art may understand that all or some of the processes of the methods in embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium. When the program runs, the processes of the methods in embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a random access memory RAM, a magnetic disk, or an optical disc.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation. DETAILED DESCRIPTION In various software applications, various recommendations may be made based on similarities between users of the software application. For example, these recommendations may include offers presented to a user of a software application, content related to how to use the software application, and so on. To generate these recommendations, various recommendation engines can be used to first identify a specific group of users into which a current user falls and then to generate recommendations based on membership in the specific group. These recommendation engines may include, for example, various machine learning-based recommendation engines or the like. The classification of a user into a specific group and generation of offers based on membership in the specific group may, however, not be accurate. While a machine learning model may be capable of classifying a user into one of a plurality of groups for use in generating recommendations for the user, it may be difficult to determine, from the model architecture, whether the classification is correct or incorrect. For example, it may not be known whether the features selected by a designer of a machine learning model as representative of users of the software application are, in fact, representative and useful for classifying a user into one of a plurality of groups. Additionally, while many machine learning models rely on generating embeddings for different users (e.g., low-dimensional data representing the users), it generally may not be known whether an embedding includes relevant information for assigning a user of the software application to a specific group. Thus, the outputs generated by these machine learning models may be assumed to be accurate without any manner by which such accuracy can be determined or even estimated, and thus, irrelevant recommendations may be generated for a user of the software application. The delivery of these recommendations may thus impose resource costs (e.g., bandwidth, processing, etc. for delivering offers to users of the software application) that could be used to support other operations within a software application. Aspects of the present disclosure provide techniques for generating and using decision trees representative of users of a software application to identify similar users and to generate recommendations based on similarities between users of the software application. As discussed in further detail herein, the decision trees may be generated based on records in a transaction history associated with the user and a plurality of other users. The resulting decision tree, which may include various paths indicating whether a target user is similar to or different from the user associated with the resulting decision tree, may be used to identify similar users based on similarities between these decision trees, based on an assumption that similar users will include transaction data with similar counterparties and will not include transaction data with drastically different counterparties. 
Based on a determined difference between different decision trees, similar users of a software application can be identified, and various actions can be taken to generate recommendations based on the identified set of similar users. Because the decision trees leverage transaction history data for various users and are inherently explainable structures that can be validated, aspects of the present disclosure may allow for improved accuracy in identifying similar users in a software application to a subject user of the software application, and may thus improve the relevance of recommendations generated for the subject user. Thus, aspects of the present disclosure improve the user experience of a software application by presenting recommendations that are relevant to the user based on differences between explainable decision tree constructs, which may improve the accuracy with which recommendations are generated for the user of the software application. Further, because the identification of similar users to the user of the software application and the resulting recommendations may be more accurate, embodiments of the present disclosure may reduce the amount of bandwidth used in delivering application content to users of the software application. Example Training and Using Decision Trees for Generating Recommendations in a Software Application FIG.1illustrates an example computing environment100in which decision trees for users of a software application are trained and used to generate recommendations for users of a software application. As illustrated, computing environment100includes a decision tree generator110, application server120, and transaction history repository130. Decision tree generator110generates data sets that can be used to train decision trees representative of different users of a software application and deploys these decision trees for use in generating recommendations for a user of a software application. Decision tree generator110may be any of a variety of computing devices that can generate training data sets and train predictive models based on these training data sets, such as a server computer, a cluster of computers, cloud computing instances, or the like. As illustrated, decision tree generator110includes a data set generator112and a decision tree trainer114 Data set generator112may be configured to retrieve transaction history data for a plurality of users of a software application from transaction history repository130and generate one or more training data sets from the transaction history data. In some cases the one or more training data sets may include training data sets for each user of a plurality of users of the software application, and each training data set may be used (as discussed in further detail below) to train a decision tree for a specific user of the software application. To generate a transaction history data set for a user of the software application, data set generator112can retrieve, from transaction history repository130, information about the counterparties in a user's transaction history for use in generating a plurality of grouped data sets. 
Based on the information about the counterparties in a user's transaction history, data set generator112can retrieve transaction history data from the transaction history repository for the counterparties and for a randomly selected group of non-counterparties (e.g., parties with which a user does not have a relationship recorded in the user's transaction history) for use in generating a labeled training data set to use in generating a decision tree for a specific user of the software application. Each grouped data set may include transactions grouped by counterparty or class of counterparty in the transaction history data set for a user of the software application. For example, to generate the grouped data set, data set generator112may be configured to generate groups of transactions based on the user's identifying information (e.g., email address, national identification number, etc.) as a primary key and the counterparty's identifying information as a secondary key. In some aspects, each grouped data set may be organized based on other characteristics, such as commonalities between different counterparties (e.g., similar party classifications, as embodied in classification codes (e.g., the first two digits of a North American Industry Classification System (NAICS) code) assigned to these counterparties, similar sizes (e.g., in terms of numbers of employees, annual revenue, annual profit, etc.), and the like)). In some aspects, a grouped data set may further be organized based on information characterizing a relationship between a user and a group of users in the transaction history data set, such as how the user settles transactions with the group of users. One grouped data set may correspond to a group of users for whom the user settles transactions in cash or check, while another grouped data set may correspond to a group of users for whom the user settles transactions by credit card, while still another grouped data set may correspond to a group of users for whom the user settles transactions by a wire transfer (e.g., via Fedwire ACH, SWIFT, etc.). After sorting a transaction history for a user into a plurality of grouped data sets, a plurality of feature vectors may be generated from the grouped data sets. Each feature vector, which may be associated with a particular user, may include information derived from a respective grouped data set corresponding to the transactions between a user and a given counterparty in the user's transaction history. For example, a feature vector that describes a user's relationship with a given counterparty may include information such as frequency information identifying a periodicity of transactions with the counterparty in the transaction history, frequency information identifying a periodicity of transactions with similar-sized counterparties in the transaction history data set, volume information for a number of transactions between the user and a counterparty, payment information for transactions between the user and a counterparty, and the like. 
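As a concrete illustration of the grouping and feature derivation described above, the following sketch groups a toy transaction table by user and counterparty and derives volume, amount, periodicity, and payment-mix features with pandas. The column names and the specific features are assumptions made for illustration, not fields defined by this disclosure:

```python
import pandas as pd

# Hypothetical transaction-history columns; real field names will differ.
transactions = pd.DataFrame({
    "user_id":         ["u1", "u1", "u1", "u2"],
    "counterparty_id": ["c1", "c1", "c2", "c1"],
    "amount":          [120.0, 80.0, 300.0, 45.0],
    "date":            pd.to_datetime(["2023-01-02", "2023-02-01", "2023-01-15", "2023-01-20"]),
    "payment_method":  ["card", "check", "wire", "card"],
})


def feature_vectors(df: pd.DataFrame) -> pd.DataFrame:
    """One row per (user, counterparty) pair: volume, amount, periodicity, and
    payment-mix features summarizing the relationship."""
    df = df.sort_values("date")
    grouped = df.groupby(["user_id", "counterparty_id"])
    features = grouped.agg(
        txn_count=("amount", "size"),
        mean_amount=("amount", "mean"),
        total_amount=("amount", "sum"),
    )
    # Periodicity: mean number of days between consecutive transactions with the counterparty.
    features["mean_days_between"] = grouped["date"].apply(lambda s: s.diff().dt.days.mean())
    # Payment mix: share of transactions settled with each payment method.
    payment_mix = (
        df.groupby(["user_id", "counterparty_id"])["payment_method"]
        .value_counts(normalize=True)
        .unstack(fill_value=0.0)
        .add_prefix("share_")
    )
    return features.join(payment_mix).fillna(0.0)


print(feature_vectors(transactions))
```

Grouping on the (user, counterparty) pair as the index plays the role of the primary/secondary keys described above, and each resulting row is one feature vector for that relationship.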
The features included in feature vectors generated for the user of the software application may be selected, for example, based on assumptions that similar users will have similar interactions with similar-sized counterparties, similarly sized transactions, and similar numbers of transactions with these counterparties, while users that are different from a specific user will have different interactions with these counterparties or interact with other counterparties than those with which the specific user interacts. Of course, it should be recognized that these are but examples of data that may be included in a feature vector, and other data points summarizing a user's relationship with its counterparties may additionally or alternatively be included in a feature vector. For example, features included in a feature vector may be selected so that the subsequently generated decision trees characterize a risk metric for a user of the software application, can be used to generate recommendations to improve a user's financial state, or the like. Decision tree trainer114generally trains a plurality of decision trees based on the feature vectors generated by data set generator112. Generally, decision tree trainer114may generate a decision tree for each user of a plurality of users in the software application, and each respective decision tree may be considered representative of a user of the software application with which the respective decision tree is associated. To generate a decision tree that characterizes a user of a plurality of users in the software application, decision tree trainer114can select, from the set of feature vectors generated by data set generator112, feature vectors associated with the user and a randomly selected set of feature vectors. The randomly selected set of feature vectors may be selected so that a suitable universe of users that are different from the user of the software application is used in generating the decision tree for the user of the software application. For example, the randomly selected set of feature vectors may include feature vectors for counterparties of the user of the software application and feature vectors for non-counterparties to the user (e.g., users for whom no records exist in the user's transaction history). The feature vectors may include, for example, values for the features discussed above which represent characteristics of transactions with a counterparty or group of counterparties, labeled with an indication of whether the feature vector is associated with the user or with a different user. A decision tree for the user may then be trained or generated based on the selected set of feature vectors. To train the decision tree for the user, decision tree trainer114can select a feature as a root node and generate a tree by progressively splitting the decision tree based on the values of other features in the set of feature vectors. A feature, and the value on which the feature is split to generate different paths in the decision tree, may be selected based on various metrics, such as an entropy metric that characterizes a level of uncertainty in a group of observations, an information gain metric which characterizes a measure of an amount of information that is provided by a feature, or the like. For example, a feature and its associated splitting value may be selected based on a minimization of an entropy metric or maximization of an information gain metric.
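A minimal sketch of this per-user training step, using scikit-learn's DecisionTreeClassifier as one possible implementation (the disclosure does not prescribe a particular library). Feature vectors for the user's own relationships are labeled 1, randomly selected feature vectors from other users are labeled 0, and the tree is limited to a small depth with entropy-based splits:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows are per-counterparty feature vectors like those built above.
user_vectors = rng.normal(loc=0.5, scale=0.1, size=(40, 8))     # the target user's relationships
other_vectors = rng.normal(loc=0.0, scale=1.0, size=(200, 8))   # randomly sampled other users

X = np.vstack([user_vectors, other_vectors])
y = np.concatenate([np.ones(len(user_vectors)), np.zeros(len(other_vectors))])

# A shallow, entropy-split tree keeps the per-user "embedding" compact and explainable.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
tree.fit(X, y)

# The features actually used for splits characterize what makes this user distinctive.
used_features = set(int(i) for i in tree.tree_.feature[tree.tree_.feature >= 0])
print(sorted(used_features))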
Because the decision tree may be generated based on explainable metrics, such as minimization of entropy or maximization of information gain, the decision tree generated by decision tree trainer114may be considered an explainable structure, or explainable embedding, that characterizes the unique features of a user of the software application. Further the decision tree may be considered explainable because of the inherent structure of these trees, where paths in a decision tree including different combinations of values represent different classifications or characterizations of a user of the software application. In some aspects, the decision tree may be trained up to a defined tree depth which may generate a compact tree representing a user of the software application. For example, the decision tree may be trained to a depth of four or five edges from a root node of the tree to a terminal node of the tree, which may promote rapid generation (or training) of the decision tree for a user and allow for the generation of a compact, explainable structure that characterizes the user of the software application. In some aspects, decision tree generator110may generate the feature vectors and decision trees for users of a software application on demand and may generate a set of decision trees for other users of the software application in advance. The other users for whom decision trees are generated may, for example, include users with extensive transaction histories, randomly selected users from the software application, or the like. After generating the decision trees for a plurality of users, decision tree trainer114can deploy the decision trees to an application server120for use. Application server120generally hosts an application which may be accessed by users of the application and may provide a set of functions to users of the application. As illustrated, application server120includes an application122and recommendation engine124. In some aspects, during execution of the application122, application122may determine that a user should be presented a recommendation based on the user's similarity to other users of the software application. Such a determination may be made, for example, based on user interaction with the application122indicating that a user is transitioning from one workflow in the application122to another workflow in the application122, based on an amount of time spent within the application, or the like. When such a determination is made, application122can provide information about the user to recommendation engine124and instruct recommendation engine124to identify users who are similar to the user of the application122and generate recommendations to the user based on the identified similar users. Recommendation engine124generally receives the user information from application122and determines whether a decision tree exists for the user (e.g., from a set of decision trees deployed to application server120). If a decision tree does not exist for the user, recommendation engine124can request that decision tree generator110generate a decision tree for the user, as discussed above. Recommendation engine124can proceed with identifying similar users to the user after receiving the decision tree for the user from decision tree generator110. 
Because the decision tree for a user characterizes that user in terms of specific features and values of those features that indicate whether a specific user is the same as or different from the user associated with the decision tree, comparisons between decision trees can be used to determine whether two users (represented by their respective decision trees) are similar. Various distance metrics can be used to determine whether two users are similar to each other. In one example, a Jaccard index may be calculated on the features included in the decision trees associated with the user of application122and another user. The Jaccard index may be calculated based on the number of features that overlap between the decision trees associated with the user of application122and another user and the number of features that appear in at least one of the decision trees. Generally, larger values may indicate a closer match between the decision tree for the user of the application122and another user for whom a decision tree has already been generated. In another example, the distance metric may be calculated based on the number of similar predictions that are made by each decision tree. Each decision tree associated with a respective user in a universe of other users of the software application may be associated with a subset of counterparties that are classified as similar to the respective user. To determine, thus, whether the respective user is similar to the user of application122, the subset of counterparties may be analyzed against the decision trees for the respective user and the user of application122. The number of these counterparties in the subset of counterparties that result in the generation of a similar classification using both the decision trees for the respective user and the user of application122may be recorded and used as a distance metric. The distance metric may be a raw number, a proportion of counterparties that result in the generation of a similar classification using both the decision trees to the total number of counterparties in the subset of counterparties, or the like. Generally, if a distance metric is less than a threshold value, recommendation engine124can determine that the user of application122is similar to another user of the application. Recommendation engine124may aggregate the information about the users identified as similar to the user of application122and output that information to the user of application122. For example, recommendation engine124may output information identifying the similar users and information explaining a level of similarity between the user and the similar users based on the calculated distance metrics between the user and the similar users. By doing so, recommendation engine124can provide information to a user of application122showing information about similar users and, in some aspects, information about actions that have been taken within the software application by these similar users that may also be relevant to the user of application122. In some aspects, recommendation engine124may additionally generate one or more recommendations for the user of the application122based on the identified set of similar users to the user of the application122. 
These recommendations may include, for example, suggestions of actions to take within the software application (e.g., generating reports that similar users have generated previously; viewing help content that similar users have found helpful, etc.), actions to take with respect to the user's transaction history (e.g., applying for a loan product), and so on. In doing so, recommendation engine124can examine a set of recommendations that may have previously been presented to the identified set of similar users and select one or more recommendations from the set of recommendations to present to the user of the application122. The set of recommendations may be selected, for example, based on an assumption that recommendations relevant to the users in the identified set of similar users will also be relevant to the user of the application122. In some aspects, one or more additional recommendation engines may be used to select a specific recommendation to present to the user of application122based on other information, such as other user characteristics (e.g., from a user profile used within application122to customize the user's experience when using application122), transactions in the user's transaction history, and the like.
Example Decision Tree Representing a User of a Software Application
FIG.2illustrates an example decision tree200representing a user of a software application, according to aspects of the present disclosure. As illustrated, decision tree200is a tree with a depth of 2 (as in, two edges from root node to leaf node); however, it should be recognized that decision tree200may be of any suitable depth that allows for a user to be represented by the decision tree. Generally, depth and compactness may be inversely related; a deeper tree may be less compact but may include more information that can be used to classify a user of the software application as similar to or different from the user with which the decision tree is associated, while a shallower tree may be more compact but include less information that can be used to classify a user. As illustrated, decision tree200begins at a root node210, in which the split value for a given feature (feature1) is set at 34 percent. If the value for feature1for a user being analyzed through decision tree200is less than or equal to 34 percent, then the decision tree may proceed down the left side of the tree to node220. Otherwise, the decision tree may proceed down the right side of the tree to node222. At node220, the split value for feature30is set at 12 percent. Like at root node210, a value for feature30for a user being analyzed through decision tree200being less than the split value may cause the decision tree to proceed down the left side to node230. Otherwise, the decision tree may proceed down the right side of the tree to node232. At node230, the split value for feature17is 4%. If the value for feature17for the user being analyzed exceeds 4%, the decision tree may result in a classification of the user being analyzed as the same user as that associated with the decision tree. At node232, meanwhile, if the value for feature31exceeds the split value for this feature, the decision tree may result in a classification of the user being analyzed as a user different from the user associated with the decision tree. Similar decisions may be made with respect to nodes222,234, and236to result in a decision of whether a user is similar to or different from the user associated with decision tree200.
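The two comparison metrics described earlier, a Jaccard index over the features each tree splits on and the share of counterparties that both trees classify the same way, can be sketched as follows. The functions assume scikit-learn trees like the one trained above and are illustrative only:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def split_features(tree: DecisionTreeClassifier) -> set[int]:
    # Internal nodes carry feature indices >= 0; leaf nodes are marked with a negative value.
    f = tree.tree_.feature
    return set(int(i) for i in f[f >= 0])


def jaccard_similarity(tree_a: DecisionTreeClassifier, tree_b: DecisionTreeClassifier) -> float:
    """Overlap of the features that the two per-user trees split on."""
    a, b = split_features(tree_a), split_features(tree_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0


def prediction_overlap(tree_a: DecisionTreeClassifier, tree_b: DecisionTreeClassifier,
                       counterparty_vectors: np.ndarray) -> float:
    """Share of counterparty feature vectors that both trees classify the same way."""
    return float(np.mean(tree_a.predict(counterparty_vectors) == tree_b.predict(counterparty_vectors)))


# Users whose metric crosses a chosen threshold (e.g., high overlap, or a small
# distance derived from it) can then be treated as similar for recommendation purposes.
```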
Another user of the software application may be associated with a decision tree that includes different feature values and/or different split values for given features within the user's transaction history. Generally, a user with different split values, but the same features, in the decision tree may be considered more similar to a target user than a user with different features in the decision tree. Further, as discussed herein, classifications of users into similar and different classifications using different decision trees can be used to determine a level of similarity between users of the software application. For example, large degrees of overlap between the classifications generated using different decision trees may indicate that two users, represented by two different decision trees, are similar, while small degrees of overlap or no overlap between the classifications generated using different decision trees may indicate that the users represented by these decision trees are different. Further, whileFIG.2illustrates a binary tree, it should be recognized that a decision tree generated according to the techniques described herein may be an n-ary tree, with each node being associated with any number of child nodes. Example Methods for Training Decision Trees Representing Users of a Software Application and Identifying Similar Users of a Software Application Using Decision Trees FIG.3illustrates example operations300that may be performed to generate decision trees representing users of a software application, according to aspects of the present disclosure. Operations300may be performed, for example, by decision tree generator110illustrated inFIG.1, system500illustrated inFIG.5, and/or other computing systems on which decision trees can be generated. As illustrated, at block310, operations300begin with generating, from a transaction history data set for a plurality of users of a software application, a plurality of grouped data sets. Generally, the plurality of grouped data sets may include transactions grouped by counterparty in the transaction history data set. In some aspects, to generate the plurality of grouped data sets, a plurality of records may be generated for each respective user of a plurality of users of the software application. Each record may include an identifier of the respective user as a primary key and an identifier of a unique counterparty as a secondary key. By doing so, transactions between different users in the software application may be grouped together in a single group of transactions. At block320, operations300proceed with generating a plurality of feature vectors from the plurality of grouped data sets. Each feature vector of the plurality of feature vectors may correspond to a specific user of the plurality of users. A feature vector generally includes a plurality of features describing relationships between the user and a plurality of counterparties in a transaction history associated with the user. These feature vectors may, in some aspects, be generated, for a respective grouped data set from the plurality of grouped data sets, with information derived from counterparties in the transaction history data set. Generally, the vector may include a variety of data points representing information about a user's relationships with a counterparty or group of counterparties in the user's transaction history. For example, the vector may include frequency information for a group of counterparties in the transaction history data set.
The frequency information may indicate, for example, a periodicity at which a user interacts with counterparties in the group of counterparties, a periodicity at which transactions with different sets of sizes are performed between the user and counterparties in the group of counterparties, or the like. The vector may, in some aspects, include volume information for a number of transactions performed between the user and each group of counterparties in the transaction history data set. In some aspects, the vector may include payment information for transactions between the user and the group of counterparties in the transaction history data set. For example, the vector may include information identifying a number or proportion of transactions settled between the user and the group of counterparties using cash or check, using a credit card, using electronic payment mechanisms such as Fedwire ACH or SWIFT, and the like. At block330, operations300proceed with training a decision tree based on the plurality of feature vectors. These decision trees may then be deployed, for example, to an application server for use in identifying similar users to a given user of a software application hosted on the application server, generating recommendations for users of the software application based on an identification of similar users, and the like. The decision tree may include a plurality of paths terminating in a similar or different classification. Each path of the plurality of paths may distinguish a user associated with the decision tree from other users of the software application. In some aspects, the decision tree may be trained based on a feature vector for a selected user of the software application and a randomly selected set of feature vectors from the plurality of feature vectors. The randomly selected set of feature vectors may include a first set of feature vectors associated with counterparties of the selected user and a second set of feature vectors identified as non-counterparties to the selected user. The decision tree may be trained for a specified tree depth based on the feature vector for the selected user and the randomly selected set of feature vectors. This specified tree depth may be defined a priori as a tradeoff between an amount of detail in the decision trees generated for users in the software application and a size of these decision trees. FIG.4illustrates example operations400that may be performed to identify similar users in a software application based on similarities between decision trees representing different users of the software application. Operations400may be performed, for example, by decision tree generator110and application server120illustrated inFIG.1, system500illustrated inFIG.5, and/or other computing systems on which decision trees can be generated and used to determine a similarity between different users of the software application. As illustrated, at block410, operations400begin with generating, from a transaction history data set for a user of a software application, a grouped data set. The grouped data set generally includes transactions grouped by counterparty in the transaction history data set. In some aspects, to generate the grouped data set for the user of the software application, a plurality of records may be generated for the user of the software application. Each record may include an identifier of the respective user as a primary key and an identifier of a unique counterparty as a secondary key.
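A compact sketch of blocks310through330is shown below. It assumes scikit-learn is available, that transactions are represented as simple dictionaries, and that the selected user's own feature vector is labeled 1 ("similar") while the randomly sampled vectors are labeled 0 ("different"); the field names, the chosen features, and that labeling scheme are illustrative assumptions rather than requirements of the disclosure.

```python
# Illustrative sketch of grouping transactions, building a feature vector, and
# training a shallow per-user decision tree. Field names, features, and the
# 1/0 labeling are assumptions made for the example.
from collections import defaultdict
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def group_by_counterparty(transactions):
    """Block 310: group records by (user_id, counterparty_id)."""
    grouped = defaultdict(list)
    for txn in transactions:
        grouped[(txn["user_id"], txn["counterparty_id"])].append(txn)
    return grouped

def featurize(user_id, grouped):
    """Block 320: derive simple frequency/volume features for one user."""
    counts = [len(txns) for (uid, _), txns in grouped.items() if uid == user_id]
    volume = [sum(t["amount"] for t in txns)
              for (uid, _), txns in grouped.items() if uid == user_id]
    return np.array([len(counts),                                  # number of counterparties
                     float(np.mean(counts)) if counts else 0.0,    # transactions per counterparty
                     float(np.sum(volume))])                       # total transaction volume

def train_user_tree(user_vector, sampled_vectors, max_depth=2):
    """Block 330: shallow tree distinguishing this user from sampled users."""
    X = np.vstack([user_vector] + list(sampled_vectors))
    y = np.array([1] + [0] * len(sampled_vectors))    # 1 = "similar", 0 = "different"
    return DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y)
```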
By generating records keyed in this way, transactions between different users in the software application may be grouped together in a single group of transactions. At block420, operations400proceed with generating, from the grouped data set, a feature vector representing the user of the software application and including a plurality of features describing relationships between the user and a plurality of counterparties in the transaction history data set. These feature vectors may, in some aspects, be generated, for a respective grouped data set from the plurality of grouped data sets, with information derived from counterparties in the transaction history data set. Generally, the vector may include a variety of data points representing information about a user's relationships with a counterparty or group of counterparties in the user's transaction history. For example, the vector may include frequency information for a group of counterparties in the transaction history data set. The frequency information may indicate, for example, a periodicity at which a user interacts with counterparties in the group of counterparties, a periodicity at which transactions with different sets of sizes are performed between the user and counterparties in the group of counterparties, or the like. The vector may, in some aspects, include volume information for a number of transactions performed between the user and each group of counterparties in the transaction history data set. In some aspects, the vector may include payment information for transactions between the user and the group of counterparties in the transaction history data set. For example, the vector may include information identifying a number or proportion of transactions settled between the user and the group of counterparties using cash or check, using a credit card, using electronic payment mechanisms such as Fedwire ACH or SWIFT, and the like. At block430, operations400proceed with generating, using a decision tree classifier, a first decision tree for the user of the software application based on the feature vector. A plurality of second decision trees may also be generated for other users of the software application. The first decision tree and the plurality of second decision trees may generally comprise trees having a plurality of paths terminating in a similar or different classification, and the plurality of paths may distinguish a user associated with a decision tree from other users of the software application. The first decision tree and the plurality of second decision trees may generally be trees generated to a defined depth (e.g., a depth of four edges or a depth of five edges between the root node of the decision tree and a terminal node of the decision tree). At block440, operations400proceed with calculating, for each respective decision tree of the plurality of second decision trees for other users of the software application, a distance metric between decision trees and identifying users associated with decision trees as similar users to the user of the software application based on the calculated distance metric. As discussed, a distance metric between different decision trees may be calculated based on a Jaccard index or based on the number of similar predictions that are made by each decision tree. If the distance metric calculated between the first decision tree and the respective decision tree is less than a threshold distance, the user associated with the respective decision tree may be deemed similar to the user of the software application.
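Block440might be sketched as follows, reusing the prediction_agreement helper from the earlier sketch. The particular threshold value and the conversion of agreement into a distance (1 minus agreement) are assumptions made for illustration.

```python
# Illustrative sketch of block 440; the threshold and distance definition are assumptions.

def find_similar_users(first_tree, second_trees, counterparty_vectors, threshold=0.5):
    """Return the IDs of users whose trees make similar predictions to first_tree."""
    similar = []
    for user_id, other_tree in second_trees.items():
        agreement = prediction_agreement(first_tree, other_tree, counterparty_vectors)
        distance = 1.0 - agreement          # higher agreement -> smaller distance
        if distance < threshold:            # deemed similar when below the threshold
            similar.append((user_id, distance))
    return similar
```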
Otherwise, if the distance metric is not less than the threshold distance, the user associated with the respective decision tree may be deemed to not be sufficiently similar. At block450, operations400proceed with outputting, to the user of the software application, information identifying similar users from the other users of the software application. In some aspects, the information identifying these similar users may include information explaining a level of similarity between the user and the identified similar users. This information may be based on the calculated distance metrics between the user and the identified similar users. In some aspects, the information identifying these similar users may be output such that users with the highest degree of similarity to the user of the software application are presented before users with lower degrees of similarity. In some aspects, one or more recommendations may be generated and output to the user of the software application based on the information identifying similar users from the other users of the software application. For example, recommendations may be made based on recommendations presented to these similar users, based on an assumption that recommendations that are relevant to similar users will also be relevant to the user of the software application. In some aspects, one or more additional recommendation engines may be used to select a specific recommendation to present to the user of the software application from a universe of potentially relevant recommendations based on other information, such as other user characteristics (e.g., from a user profile used within the software application to customize the user's experience when using the software application), transactions in the user's transaction history, and the like. Example Systems for Training Decision Trees Representing Users of a Software Application and Identifying Similar Users of a Software Application Using Decision Trees FIG.5illustrates an example system500in which decision trees are trained and used to identify similar users of a software application. System500may correspond to one or both of decision tree generator110and application server120illustrated inFIG.1. WhileFIG.5illustrates a system in which decision trees can be generated (trained) and used to identify similar users of the software application on a same system, a single system need not implement both components for generating (training) these decision trees and using these decision trees to identify similar users of the software application. As shown, system500includes a central processing unit (CPU)502, one or more I/O device interfaces504that may allow for the connection of various I/O devices514(e.g., keyboards, displays, mouse devices, pen input, etc.) to the system500, network interface506through which system500is connected to network590(which may be a local network, an intranet, the internet, or any other group of computing devices communicatively connected to each other), a memory508, and an interconnect512. CPU502may retrieve and execute programming instructions stored in the memory508. Similarly, the CPU502may retrieve and store application data residing in the memory508. The interconnect512transmits programming instructions and application data among the CPU502, I/O device interface504, network interface506, and memory508. CPU502is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like.
Memory508is representative of a volatile memory, such as a random access memory, or a nonvolatile memory, such as nonvolatile random access memory, phase change random access memory, or the like. As shown, memory508includes a data set generator520, decision tree trainer530, application540, recommendation engine550, and transaction history repository560. Data set generator520generally corresponds to data set generator112illustrated inFIG.1. Generally, data set generator520uses a transaction history data set from transaction history repository560to generate feature vectors that can be used to train decision tree models representing users of a software application. The feature vectors may be generated based on grouped data sets in which transactions from the transaction history repository560are grouped based on counterparties in the transaction history repository. The feature vectors generally include features that describe relationships between a user of the software application and different counterparties or groups of counterparties. Decision tree trainer530generally corresponds to decision tree trainer114illustrated inFIG.1. Generally, decision tree trainer530uses the feature vectors generated by data set generator520to train decision trees that represent users of a software application. Generally, each user of a set of users may be associated with a unique decision tree, which may be generated up to a defined depth based on feature vectors for the selected user and a randomly selected set of feature vectors for counterparties to the selected user and non-counterparties to the selected user. Application540generally corresponds to application122illustrated inFIG.1. Generally, application540receives requests from users of the application540for various features or functionality of the application and presents recommendations generated by recommendation engine550to the users of the application. Recommendation engine550generally corresponds to recommendation engine124illustrated inFIG.1. Generally, recommendation engine550uses the decision trees trained by decision tree trainer530and user transaction data retrieved from transaction history repository560to identify users who are similar to a selected user of the software application. Users may be identified as similar to the selected user of the software application based on various distance metrics between decision trees associated with the users of the software application. Based on the identified set of similar users to the selected user of the software application, recommendation engine550can output information about the identified set of similar users as well as recommendations that are potentially relevant to the selected user of the software application. These recommendations may be selected from a set of recommendations presented to users in the identified set of similar users based on a presumption that these recommendations will also be relevant to the selected user, and the recommendations may be selected at random or by one or more other recommendation generation models that use additional information about the selected user to identify relevant recommendations to deliver to the selected user of the software application. Note thatFIG.5is just one example of a system, and other systems including fewer, additional, or alternative components are possible consistent with this disclosure.
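As an informal sketch of how recommendation engine550might turn a set of similar users into a recommendation list, consider the following; the data shapes (a mapping from similar-user IDs to distances, and a mapping from user IDs to recommendations previously presented to them) and the top_k cutoff are assumptions made for the example.

```python
# Illustrative sketch only; data shapes and the top_k cutoff are assumptions.

def recommend_for_user(similar_user_distances, past_recommendations, top_k=3):
    """Pool recommendations shown to the most similar users, closest first."""
    ranked = sorted(similar_user_distances.items(), key=lambda item: item[1])
    recommendations = []
    for user_id, _distance in ranked:
        for rec in past_recommendations.get(user_id, []):
            if rec not in recommendations:     # keep the first (most similar) source
                recommendations.append(rec)
    return recommendations[:top_k]
```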
Example Clauses Implementation examples are described in the following numbered clauses: Clause 1: A method, comprising: generating, from a transaction history data set for a plurality of users of a software application, a plurality of grouped data sets including transactions grouped by counterparty in the transaction history data set; generating, from the plurality of grouped data sets, a plurality of feature vectors, each feature vector corresponding to a user of the plurality of users and including a plurality of features describing relationships between the user and a plurality of counterparties in a transaction history associated with the user; and training decision trees for each user of the plurality of users based on the plurality of feature vectors, wherein each decision tree comprises a plurality of paths terminating in a similar or different classification, and wherein the plurality of paths distinguishes a user associated with a decision tree of the plurality of decision trees from other users of the software application. Clause 2: The method of Clause 1, wherein generating the plurality of grouped data sets comprises, for each respective user of the plurality of users of the software application, generating a plurality of records, each record including an identifier of the respective user as a primary key and a unique counterparty as a secondary key. Clause 3: The method of any one of Clauses 1 or 2, wherein generating the plurality of feature vectors comprises generating, for a respective grouped data set from the plurality of grouped data sets, a vector including information derived from counterparties in the transaction history data set. Clause 4: The method of Clause 3, wherein the vector comprises one or more of: frequency information for a group of counterparties in the transaction history data set, frequency information for transactions with different sizes of counterparties in the transaction history data set, volume information for a number of transactions performed between the user and each group of counterparties in the transaction history data set, or payment information for transactions between the user and the group of counterparties in the transaction history data set. Clause 5: The method of any one of Clauses 1 through 4, wherein training the decision trees comprises training the decision trees based on a feature vector for a selected user and a randomly selected set of feature vectors from the plurality of feature vectors, wherein the randomly selected set of feature vectors includes a first set of feature vectors identified as counterparties of the selected user and a second set of feature vectors identified as non-counterparties to the selected user. Clause 6: The method of Clause 5, wherein training the decision trees comprises training the decision trees for a specified tree depth based on the feature vector for the selected user and the randomly selected set of feature vectors. Clause 7: The method of any one of Clauses 1 through 6, further comprising deploying the decision trees.
Clause 8: A method, comprising: generating, from a transaction history data set for a user of a software application, a grouped data set including transactions grouped by counterparty in the transaction history data set; generating, from the grouped data set, a feature vector representing the user of the software application and including a plurality of features describing relationships between the user and a plurality of counterparties in the transaction history data set; generating a first decision tree for the user of the software application based on the feature vector and a plurality of second decision trees for other users of the software application, wherein the first decision tree and the plurality of second decision trees comprise trees having a plurality of paths terminating in a similar or different classification, and wherein the plurality of paths distinguishes a user associated with a decision tree from other users of the software application; for each respective decision tree of the plurality of second decision trees for other users of the software application: calculating a distance metric between the first decision tree and the respective decision tree, and identifying a user associated with the respective decision tree as a similar user based on the calculated distance metric and a threshold distance metric; and outputting, to the user of the software application, information identifying similar users from the other users of the software application. Clause 9: The method of Clause 8, wherein generating the grouped data set comprises generating a plurality of records from the transaction history data set, each record including an identifier of the respective user as a primary key and a unique counterparty as a secondary key. Clause 10: The method of any one of Clauses 8 or 9, wherein generating the feature vector representing the user of the software application comprises generating, based on the grouped data set, a vector including information derived from counterparties in the transaction history data set. Clause 11: The method of Clause 10, wherein the vector comprises one or more of: frequency information for a group of counterparties in the transaction history data set, frequency information for transactions with different sizes of counterparties in the transaction history data set, volume information for a number of transactions performed between the user and each group of counterparties in the transaction history data set, or payment information for transactions between the user and the group of counterparties in the transaction history data set. Clause 12: The method of any one of Clauses 8 through 11, wherein the decision trees comprise trees generated to a defined depth. Clause 13: The method of any one of Clauses 8 through 12, wherein outputting the information identifying similar users from the other users of the software application comprises outputting information explaining a level of similarity between the user and the identified similar users based on the calculated distance metrics between the user and the identified similar users. Clause 14: The method of any one of Clauses 8 through 13, further comprising outputting, to the user of the software application, recommendations related to the software application based on recommendations delivered to the identified similar users.
Clause 15: A system, comprising: a memory having executable instructions stored thereon; and a processor configured to execute the executable instructions to perform the methods of any one of Clauses 1 through 14. Clause 16: A system, comprising: means for performing the methods of any one of Clauses 1 through 14. Clause 17: A computer-readable medium having instructions stored thereon which, when executed by a processor, performs the methods of any one of Clauses 1 through 14. Additional Considerations The preceding description is provided to enable any person skilled in the art to practice the various embodiments described herein. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. 
The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and input/output devices, among others. A user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media, such as any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the computer-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the computer-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the computer-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. 
Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. The following claims are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims. Within a claim, reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.
11861385
DETAILED DESCRIPTION According to certain embodiments, a cloud management system may be set up using a client to display a user interface and enable user interaction, an external proxy to proxy communications within the system, a patroller in communication with a server and with networked devices, a server to facilitate communication between the client and the patroller, and at least two networked devices to be managed by the patroller through user interaction with the user interface on the client. In some embodiments, the external proxy may be operatively configured to perform some of the tasks required of the system, thereby reducing load on other parts of the system. Embodiments may be configured such that the patroller may scan across a network to detect devices that are part of the network. The client may then virtualize those devices, receive user input related to the devices, and/or transmit control signals to the devices. Accordingly, embodiments provide a method of virtualization, the method comprising communication between the client and the server, and updating of the user interface of the client in response to information received from the server. Multiple devices of different types and/or from multiple physical locations may be virtualized and/or accessed from one interface. Embodiments may also be configured with additional features such as grouping of ports across devices and/or creating schedules to power cycle the groups at specific dates and/or times. The detailed description set forth below in connection with the appended drawings is intended as a description of various aspects of certain exemplary embodiments and is not intended to represent the only aspects of those embodiments. Each aspect described in this disclosure is provided merely as an example or illustration, and should not necessarily be construed as preferred or advantageous over other aspects. The detailed description includes specific details for providing a thorough understanding of the present disclosure. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. Acronyms and other descriptive terminology may be used merely for convenience and/or clarity and are not intended to limit the scope of the present disclosure. Any steps in a method should not be construed as needing to be carried out in the order listed, unless stated otherwise. In this Detailed Description, the term “may” generally refers to something that is possible or is permitted. In the following description, “virtualization environment” may refer to the screen or user interface of the client, or a portion of the screen or user interface. “Virtualizing” may refer to displaying, in the virtualization environment, information and/or controls related to the networked devices. Not all networked devices need to be virtualized, but the networked devices that are virtualized may be referred to as “virtualized devices.” If not all of the networked devices are virtualized, then the “virtualized devices” may be a subset of the “networked devices.” When a device is “virtualized,” it may mean that the device is displayed in the virtualization environment, or that the device has been added to a group. The terms “virtualized device” and “networked device” may in some cases be used interchangeably, with the understanding that a “virtualized device” has already been virtualized. 
“Virtualizing” may also refer to the general concept of enabling, in a remote environment such as from a client, all the features and/or controls one would have when interacting directly with the device. “Virtualization” may refer to the concept of virtualizing devices, or it may refer to a specific instance of virtualizing devices in the virtualization environment. Embodiments of the present disclosure relate generally to managing networks of electronic devices. Particular embodiments relate to simultaneously displaying multiple devices in a virtualization environment for easy visualization and/or control of the devices. The virtualization may be displayed in many different ways so as to enable the control of multiple networked devices from one interface. In particular embodiments, the control may be performed from one screen. The networked devices may be of different types and/or may be located across multiple physical locations. The networked devices may be controlled in many different ways, such as by powering the device itself on or off, and/or powering specific ports on or off. Any variety of other features may be controlled, including, but not limited to, device firmware, configuration, categorization, permissions, and scheduling. This control may be achieved by sending control signals from the client to the devices. Control signals may be simply electronic signals through which the devices may be controlled. A wide variety of electronic devices may be virtualized. As non-limiting examples, the devices may include routers, access points, controllers, and power devices such as power distributions units (PDUs) and power-over-ethernet (PoE) switches. Power devices generally have multiple ports that they may route power to. Consequently, each port of the power device may be controlled in the virtualization. In an exemplary embodiment, instead of power cycling an entire PDU, a user may choose to only power cycle one port of the PDU, thereby power cycling the device attached to that port. Power cycling refers to the concept of turning power off and then back on to effectively reset, restart, or reboot a device. Power cycling may also refer to powering a device on or off, and the powering on doesn't have to be immediately after the powering off. In some embodiments, a user may choose to power cycle multiple ports across multiple devices of different types. In other embodiments, the user may choose to power cycle the device itself. The implementation of the virtualization and/or the user interface may differ across different contexts. Various features may be displayed on the same screen or on different screens. Embodiments may be configured for easy navigation and/or streamlined control. The virtualization itself may show multiple devices, along with relevant options, on one screen, and other devices on another screen. Other options may be located on different screens. Referring toFIG.1, in an exemplary embodiment, the cloud management system100may have a client105, an external proxy110, a cloud server115, a patroller120, and at least two networked devices130. When a networked device130is virtualized, it may be referred to as a virtualized device135. Not all networked devices130need to be virtualized, however, so the subset of virtualized devices135may be smaller than the total of networked devices130. The client105may be any number of devices having a user interface and capable of communicating with the cloud server115. 
As non-limiting examples, the client may be a computer or a smartphone. In some embodiments, the system may use an external proxy110as another server onto which some tasks may be offloaded. Using a second server such as the external proxy110may increase performance of the system by reducing load on the cloud server115. This may be accomplished by the external proxy110performing some of the tasks that the cloud server115might normally perform. In some embodiments, more than one external proxy110may be used. In some embodiments, the client105may establish a connection with the external proxy110before the cloud server115may send or receive commands. The external proxy110may proxy connections as depicted inFIG.1, between the client105and the cloud server115, and between the cloud server115and the patroller120. In some cases, the communication between the client105and the cloud server115and/or between the client105and the patroller120may not need to be proxied. The cloud server115acts as a central location for storing information and/or facilitating communication between components of the cloud management system100. The patroller120is a device that acts as a remote cloud server sitting directly on the network. When the cloud server115signals the external proxy110to open a connection to the patroller120, the cloud functionality may be enabled. The patroller120may also communicate with and control networked devices130that are selected by the user to be managed by the patroller120. The patroller120will be located on site, at a customer location, and therefore will have access to the local network. The cloud server115might not. Therefore, in an exemplary operation of an embodiment, the cloud server115may tell the patroller120what actions to perform. The cloud management system100may be set up when a user uses a client105to create an account and register their patroller120with a cloud server115. The user may input network information into the client105, which would then use the patroller120to scan across the network. In some embodiments, the network information inputted may be VLAN information, and the patroller120would then scan across the VLANs125and search for networked devices130on those VLANs125. In other embodiments, the network may be a LAN or another suitable network. After the scan, the networked devices130are identified, and information such as the hostname, IP address, VLAN125ID, and/or MAC address may be displayed and stored in a database on the cloud server115. The scanned network devices130may be displayed to the user in the client105in a variety of manners. As non-limiting examples, they may be sorted by manufacturer and/or device type. The user may use the client105to add any of the found devices to a profile. The networked devices130added by the user will then be managed by the patroller120. The user may create a virtualization from at least a portion of the networked devices130, and these may be called virtualized devices135. In this manner, the relationship between the client105, the cloud server115, the patroller120, and the networked devices130is set up. The external proxy110may be used to proxy the communications in the cloud management system100. Once the cloud management system100has been set up, the networked devices130may be virtualized and/or controlled through the client105. The user may connect directly to the network's management layer without the need for port forwarding or dynamic DNS service. The user may use the client105to add devices to virtualization groups. 
These groups may display the status of and/or options related to multiple devices, thereby enabling the control of multiple networked devices130from a single user interface. To accomplish this control, the client105may send commands to the cloud server115, and those commands may then be forwarded to and carried out by the patroller120. The client105may create the virtualization and the patroller120may perform any interaction ordered by the client105. The interaction may include, as non-limiting examples, power cycling, monitoring, creating schedules, and creating categories. This interaction may be done from anywhere in the world, from any web connected device. In operation, the client105may send a signal to the cloud server115to detect the quantity of devices managed by the patroller120. After the cloud server115signals the patroller120to scan across VLANs125and identify networked devices130, the patroller120sends this information to the cloud server115and the cloud server115sends it to the client105. The user may then specify if they want to group the networked devices130, and the client105may send a signal to the cloud server115to make the association. As a non-limiting example, the signal may also comprise information such as the name the user wants to assign to the group. The cloud server115may store this information for later use. When the user has selected at least a portion of the networked devices130and grouped them, they may be called virtualized devices135. The client105may continue receiving information such as port status, power consumption, and/or the like for the virtualized devices135. After the networked devices130are grouped, the client105may interact with the cloud server115and/or the patroller120as needed. Automatic signals may be sent to request information such as port status. Manual signals may be sent to rename a group, power cycle a port, and/or the like. Alternatively, the communication between the client105and the networked devices130may occur without the need for creating a group of networked devices130. From the cloud server115side, the cloud server115may receive the signal from the client105to detect the quantity of networked devices130. The signal may request information such as, but not limited to, IP addresses, MAC addresses, system uptime, and port status. The cloud server115may then query the networked devices130, through the patroller120that has been registered with the cloud server115, to get their status. Information received by the patroller120from the networked devices130may be stored on the cloud server115and may be sent to the client105. The cloud server115may receive a signal from the client105to group at least a portion of the networked devices130. The cloud server115may then create the grouping and store this information in a database. Once the cloud management system100has been set up according to this exemplary embodiment, the cloud server115may interact with the client105and/or the patroller120as needed. Referring toFIG.2, an exemplary method of managing virtualized devices135in a virtualization environment is shown. Step205may comprise transmitting a signal from the client105to the cloud server115. The signal may include instructions to be carried out by the patroller120, such that when the cloud server115signals the patroller120, the patroller120will carry out the command initiated by the client105. Step210may comprise the client105receiving, from the cloud server115, a signal with information obtained from the virtualized devices135. 
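As a hedged illustration of the exchanges described above, the client105, cloud server115, and patroller120might pass JSON-style messages along the following lines; the exact schema, field names, and values are not specified in this disclosure and are shown purely as an example.

```python
# Hypothetical message payloads; the schema and field names are assumptions.
import json

detect_request = {           # client -> cloud server: ask the patroller to scan
    "action": "detect_devices",
    "patroller_id": "patroller-001",
}

scan_response = {            # patroller -> cloud server -> client: discovered devices
    "devices": [
        {"hostname": "pdu-lobby", "ip": "10.0.20.14",
         "vlan_id": 20, "mac": "AA:BB:CC:DD:EE:01", "type": "pdu"},
        {"hostname": "ap-floor2", "ip": "10.0.30.7",
         "vlan_id": 30, "mac": "AA:BB:CC:DD:EE:02", "type": "access_point"},
    ],
}

group_request = {            # client -> cloud server: associate devices with a group
    "action": "create_group",
    "group_name": "Office power",
    "device_macs": ["AA:BB:CC:DD:EE:01"],
}

status_request = {           # client -> cloud server: periodic poll of the group
    "action": "get_status",
    "group_name": "Office power",
    "fields": ["port_status", "power_consumption"],
}

payload = json.dumps(group_request)   # what the client would actually transmit
```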
The signal received at step210may have information from at least one virtualized device135, but in the preferred embodiment the signal will have information from at least two virtualized devices135. This information will have been obtained by the patroller120in communication with the virtualized devices135and forwarded to the cloud server115. When the information from the virtualized devices135is received at the client105, step215may comprise the client105updating a user interface in response to the received information. In such a manner, a user may interact with the user interface of the client105to control or manage the virtualized devices135, as shown in step220. Referring toFIG.3, a preferred embodiment of an exemplary method of managing networked devices130in a virtualization environment is shown. Step305may comprise initializing, by the client105, a scan across a network to detect networked devices130. The scan may be performed by the patroller120across any appropriate network, such as a VLAN125or a LAN. Step310may comprise virtualizing, on the client105, at least a portion of the detected networked devices130. Not all of the networked devices130need to be virtualized, as not all of them may be desired to be controlled, viewed, and/or managed. Step315may comprise determining whether one of the virtualized devices135is a power device, such as a PDU or PoE switch. If none of the virtualized devices135are power devices, the user may be presented with an exemplary option of rebooting any of the virtualized devices135, according to step320. Other exemplary options may include, but are not limited to, accessing the device portal for the device or creating a lava tunnel. A lava tunnel may allow the patroller120to open connections to the networked devices130on any port, using services such as Telnet, SSH, and RDP. If the virtualized device135has the capability, the device portal may be used to log into the device's native GUI. This may be accomplished without port forwarding or VPNs. From the native GUI, the user may see information about or control the virtualized device135at a more specific level. If at least one of the virtualized devices135is a power device, according to step325, the user may additionally be presented with a graphical representation of the ports of the power device. In step330, the cloud management system100may then enable power cycling of each port of the power device through interaction with the graphical representation. Consequently, the user may be presented with standard options such as rebooting the device, and may also be presented with options for specific control of each port of any virtualized power devices. In step335, after the client105has created and displayed the virtualization, the client105may accept user input in the virtualization. In step340, the client105may transmit a control signal to the virtualized device135corresponding to the user input in the virtualization. Exemplary control signals may include instructions to reboot the virtualized device135, to power cycle a specific port of the virtualized device135, or to categorize at least two virtualized devices135and manage them as a group. However, a variety of other control signals may also be transmitted. Referring toFIG.4, exemplary features and display components of the client105are shown. For example, the client105may first present a dashboard405to the user when the user logs in to interact with the cloud management system100. The dashboard405may link to a variety of pages, as described in more detail below.
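Before turning to those display components, the decisions made at steps315through340might be sketched as follows; the device dictionary shape, the option names, and the control-signal format are illustrative assumptions rather than details taken from the disclosure.

```python
# Illustrative sketch of steps 315-340; names and data shapes are assumptions.

POWER_DEVICE_TYPES = {"pdu", "poe_switch"}

def options_for(device):
    """Decide which controls to surface for one virtualized device."""
    options = ["reboot", "device_portal", "lava_tunnel"]       # step 320
    if device["type"] in POWER_DEVICE_TYPES:                   # step 315
        # power devices additionally expose per-port power cycling (steps 325/330)
        options += [f"power_cycle_port:{port}" for port in device["ports"]]
    return options

def control_signal(device_id, action):
    """Step 340: package the user's selection for delivery toward the patroller."""
    return {"device_id": device_id, "action": action}

pdu = {"type": "pdu", "ports": [1, 2, 3, 4]}
print(options_for(pdu))                                   # includes "power_cycle_port:3"
print(control_signal("pdu-lobby", "power_cycle_port:3"))
```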
As a non-limiting example, the pages may include a Virtualization410page, a Profile415page, and an Administration420page. These pages may be interrelated and navigated between by any number of means. Exemplary modules of the cloud management system100may also be presented on some of these pages. The term “module” may refer to a feature or a function, and modules may be permanently built into a page, selectively enabled, or optionally built into the cloud management system100in a particular embodiment. For example, a Firmware Management425module may be displayed on the Virtualization410page. Other modules may also be displayed, such as Configuration Management430, Power Control435, Self-Healing440, and/or Schedule Management445. A variety of modules may be associated with a variety of pages, and consequently, the features and/or display components of the client105may be different across implementations. On the Virtualization410page, many different kinds of functionality may be enabled. As a non-limiting example, power cycling, scheduling, configuration, and/or firmware updates may be enabled for all virtualized devices135in the network. The user may add any of the networked devices130to the virtualization. As a non-limiting example, the devices may include routers, power devices, access points, and any other devices that have configurations that may be backed up or restored, or devices that may accept firmware updates. When the user accesses the Virtualization410page, they may be presented with a list of profiles associated with various patrollers120, and may then choose any networked devices130from among the profiles. To accomplish this, the cloud server115may send a command to the external proxy110asking it to open a connection with a patroller120. The patroller120may send the cloud server115information on the networked devices130associated with the patroller120. The cloud server115may then send this information to the client105to display all of the profiles to the user. The term “profile” may refer to different customers at different physical locations. These physical locations may have their own patroller120managing the networked devices130for that particular profile. The user may choose networked devices130across the different physical locations and create a virtualization. The user may choose to create different types of virtualizations. As a non-limiting example, the types of virtualizations may be related to some of the modules described in more detail below, such as Firmware Management425, Configuration Management430, Power Control435, Self-Healing440, and/or Schedule Management445. For example, a Firmware Management425module may allow a user to create a firmware management virtualization, through which the user may manage the firmware of the networked devices130in the virtualization. This may include viewing and/or updating the firmware of the devices in the virtualization. However, firmware management may also be performed for all networked devices130, and not just virtualized devices135. On the Profile415page, exemplary modules may include a map module showing the physical location of each profile. The profile may represent a customer, and the customer may have a picture associated with their profile. An example picture may be the picture of the residence or the commercial space where the network is set up. This profile and picture may be incorporated into a marker which is placed in the map module. The marker may have an indicator such as a colored band around it. 
A red band may indicate that the network is down (or offline). An orange band may indicate that at least one device in the network is down. A green band may indicate that all devices in the network are up. The colored bands may be used in interfaces other than the map module. As a non-limiting example, they may also be used in a Network Map460module, or any other module. The Network Map460module is discussed below, and is different than the map module. Various symbols may be used in other modules as well. For example, the Power Control435module that may be located on the Virtualization410page may use different symbols as a port status indicator to indicate whether a port is active/powered or not. On the Administration420page, exemplary modules may include a Permissions455module, and subscription and inventory management modules for managing patroller120licenses and the like. A Firmware Management425module may allow a user to update the firmware of capable devices in the network. Capable devices may be devices from specific manufacturers. The devices do not need to be virtualized for the Firmware Management425module to be usable. In some embodiments, a user may create a category comprising any number of devices located across any number of physical locations. The locations may be separated by large geographic distances and need not be in the same room or building. As an example, the category may be called “Routers” and the user may add any router in the network to this category. The user may then specify when to check firmware, when to download firmware, and/or when to update firmware for the category. The Firmware Management425module may display which router in the category has which version of the firmware, so the user may see which one is up to date and which one is not. It may also display, for example, whether the category as a whole has up-to-date firmware, whether a schedule has been created to monitor the firmware for the category, and the like. At a particular specified time, the Firmware Management425module may push firmware simultaneously to all of the routers in the category. In some embodiments, the patroller120may periodically check the cloud server115for the latest firmware. This could happen once a day or any other interval. The patroller120may signal to the cloud server115asking for the latest firmware version for a particular virtualized device135. If the firmware on the virtualized device135is outdated, the patroller120may send a request to the cloud server115to obtain the latest firmware, and the user may be notified through the client105of the availability of new firmware. A Configuration Management430module may allow a user to manage the configuration of a particular virtualized device135. However, the device does not need to be virtualized for the Configuration Management430module to be usable. An exemplary configuration may be the particular settings associated with a virtualized device135. The user may create a category and add multiple devices from multiple physical locations to the category. The user may then apply one set of configurations to the category. Additionally, the user may back up the configurations of the virtualized devices135in the category at a location such as the cloud server115. The user may also restore configurations to the category as needed, if for example, the devices in the category have been replaced. Alternatively, the user may have different configurations for all of the different virtualized devices135. 
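The periodic firmware check described for the Firmware Management425module might look roughly like the following; the cloud-server client object, its method names, and the equality-based version comparison are assumptions made for this sketch.

```python
# Illustrative sketch of the daily (or other interval) firmware check performed
# by the patroller; the cloud_server methods shown here are hypothetical.

def check_firmware(cloud_server, device):
    """Ask the cloud server for the latest firmware and flag outdated devices."""
    latest = cloud_server.latest_firmware_version(device["model"])
    if device["firmware_version"] != latest:
        # request the new image and let the client notify the user
        cloud_server.request_firmware_download(device["id"], latest)
        return {"device_id": device["id"], "update_available": latest}
    return None

def check_category(cloud_server, devices):
    """Run the check for every device in a firmware-management category."""
    return [result for d in devices if (result := check_firmware(cloud_server, d))]
```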
These configurations may be backed up in the same manner, and may also be restored in case a particular virtualized device135malfunctions and/or needs to be replaced or reset. In some embodiments, to backup the configurations of the virtualized devices135, the external proxy110may signal the patroller120to connect to the virtualized devices135and run a command to download a configuration file. The configuration file may then be sent from the patroller120to the cloud server115, where it may be stored and tagged with an ID to identify which configuration belongs to which device. To restore a configuration, the patroller120may request the configuration file for a specific virtualized device135from the cloud server115. The cloud server115may send the file to the patroller120and then the patroller120may open a connection to the specific virtualized device135and upload the configuration file to the specific virtualized device135. A Power Control435module may allow a user to control features related to power for the virtualized devices135. Exemplary features may include power cycling specific devices or ports. In an exemplary embodiment, a category may be created for power devices such as PDUs or PoE switches. The category may include power devices located at different physical locations. These devices may be virtualized and certain information related to the devices may be displayed. For example, when the user hovers over a port of the device, the information that may be displayed may include power consumption for the port, the device's mapping, and/or other information such as the current and/or voltage across the port. The user may choose which PDU or PoE switch ports to map to which device on the network. In some embodiments, the mapping may be automatic. Once mapped, the user may view the device, for example in a network map described below, and may be presented with an option to reboot the device. When that option is selected, the mapped port on the power device will be power cycled, thereby rebooting the device. A user may select multiple virtualized devices135that are part of the category to be able to view the ports of each virtualized device135on the same screen. The user may then view and control all selected devices from the category in one virtualization. If the user clicks on a port, exemplary options that may be displayed may include turning off the port, power cycling the port, mapping the port to a device, and removing the mapping. Additional features of the Power Control435module may include an option to reboot the virtualized device135if, for example, the virtualized device135is not a power device. As a non-limiting example, if the virtualized device135is a router, it may be rebooted either directly through a signal from the patroller120, or indirectly by mapping of the router to a port of a power device. One or both of these options may be presented to the user in the virtualization. A Self-Healing445module may allow a user to enable and control automatic ping and reboot functionality. In an exemplary embodiment, the Self-Healing445module may present the user with an option to specify any virtualized device135to ping at a particular time or a particular interval. If no response is received from the virtualized device135during a particular waiting period after the virtualized device135is pinged, the user may specify an exemplary instruction to reboot the virtualized device135a number of times up to a reboot limit. 
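A minimal sketch of the configuration backup and restore flow described above is given below, with an in-memory store standing in for the cloud server115; the names and storage format are assumptions made for illustration only.

# Hypothetical sketch of the configuration backup and restore flow: the
# patroller downloads a configuration file from a device and sends it to the
# cloud server, where it is stored tagged with a device ID; restoring reverses
# the flow so the file can be uploaded back to a replaced or reset device.

from typing import Dict


class ConfigStore:
    """Stands in for configuration storage at the cloud server."""

    def __init__(self) -> None:
        self._configs: Dict[str, bytes] = {}

    def backup(self, device_id: str, config_file: bytes) -> None:
        # Tag the configuration with an ID identifying which device it belongs to.
        self._configs[device_id] = config_file

    def restore(self, device_id: str) -> bytes:
        # Returns the stored configuration so the patroller can upload it
        # back to the device.
        return self._configs[device_id]


if __name__ == "__main__":
    store = ConfigStore()
    store.backup("pdu-1", b"outlet1=on\noutlet2=off\n")
    # Later, after the device is replaced, the same file is pushed back.
    print(store.restore("pdu-1").decode())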
The user may also set a particular time frame for the self-healing instructions to be active, and/or may choose the option of receiving alerts related to the self-healing of a particular device. An exemplary alert may be one that is sent out whenever a particular port is power cycled. The Self-Healing445module may help ensure total uptime of the virtualized devices135while simultaneously keeping users aware of any network issues. A Schedule Management445module may allow the user to create schedules for any of the virtualized devices135. The schedules may also be created for categories of devices, as described below. Some non-limiting examples of things that may be scheduled may include firmware updates, configuration updates, power controls, and self-healing timings. For example, the user may create a schedule to download and/or install the latest firmware, to restore or backup device configurations, or to power cycle ports at specific times. An exemplary schedule may be created by specifying a port, a schedule name, a schedule option, a day, and a time. Schedule options may include, but are not limited to, power cycling, updating firmware, and backing up configurations. The power cycling option may be set to reboot a virtualized device135at a particular time, or it may be set to power a device on or off at particular times. The schedules may be displayed by any number of means. In some embodiments, the schedule may be in tabular form with different days as the columns and different devices as the rows. Each cell of the table may be populated with a schedule to perform a specific action at a specific time. Actions may be performed on individual virtualized devices135or on categories of devices. A Categories450module may allow a user to categorize networked devices130and manage them as a group. In an exemplary configuration of the cloud management system100, to create a category, the client105may receive information on the networked devices130from the patroller120, and the user may choose any number of networked devices130to add to the category. Alternatively, the cloud server115may store information for devices from different physical locations. These devices may be managed by different patrollers120. The client105may ask the cloud server115to provide a list of physical locations. Once the list of physical locations is available, the client105may send a command through the cloud server115to the different patrollers120to display all the networked devices130at those locations. The user may then pick which networked device130they want to add to the category. The category will be created and all of the networked devices130that the user selected will be in the category. Alternatively, the devices selected to be added to the category may have already previously been virtualized. In a non-limiting example, the user may pick any number of ports among the different virtualized devices135in the virtualization, group them into categories, and then create schedules for each category. The schedule would apply to all of the devices in the category. As described above, the scheduling may be performed using a Schedule Management445module. In an exemplary embodiment, if a user has Christmas lights connected to different ports on three different PDUs, they may turn ON/OFF the lights simultaneously by using a category created with the Categories450module instead of having to go to each specific port on each PDU to power cycle it individually. 
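The ping-and-reboot behavior of the Self-Healing module described above might look roughly like the following sketch, in which the ping and reboot callables stand in for the patroller's real device interactions and the waiting period doubles as the recovery delay; all parameters are illustrative assumptions.

# Hypothetical sketch of self-healing: a device is pinged, and if no response
# arrives within the waiting period it is rebooted, up to a configured reboot
# limit, with an optional alert for each power cycle.

import time
from typing import Callable


def self_heal(ping: Callable[[], bool],
              reboot: Callable[[], None],
              wait_seconds: float = 5.0,
              reboot_limit: int = 3,
              alert: Callable[[str], None] = print) -> bool:
    """Return True if the device responded, False if the reboot limit was hit."""
    for attempt in range(reboot_limit + 1):
        if ping():
            return True
        if attempt == reboot_limit:
            break
        alert(f"no response within {wait_seconds}s; rebooting (attempt {attempt + 1})")
        reboot()
        time.sleep(wait_seconds)   # give the device time to come back up
    alert("reboot limit reached; device still unresponsive")
    return False


if __name__ == "__main__":
    responses = iter([False, False, True])        # recovers after two reboots
    healed = self_heal(ping=lambda: next(responses),
                       reboot=lambda: None,
                       wait_seconds=0.0)
    print("healed:", healed)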
The cloud management system100may enable different kinds of access for different kinds of users. A customer may be a user who has purchased a patroller120and/or is using the cloud management system100to control networked devices130on their network. A dealer may be a user who sells patrollers120. A network technician may be a user who monitors the networked devices130to make sure they are functioning properly or that the network itself is functioning properly. A Permissions455module may be displayed on the Administration420page, where a user that is a dealer (or in some embodiments, other users) may be allowed to control the permissions given to network technicians. The dealer may create an account for a network technician, and then assign the network technician to a group of customers. The customers may each have their own network of devices that may be managed by the network technician. The dealer may create network technician groups, and may specify detailed permissions for each network technician or network technician group. As a non-limiting example, the dealer may specify that a particular network technician may only power cycle two particular ports for a particular device of a particular customer. Permissions may be assigned in a variety of ways. As non-limiting examples, they may be assigned by groups of technicians, groups of customers, groups of patrollers, and/or may be assigned specifically by device and/or by port. In some embodiments, a group of network technicians may be assigned to a particular patroller120to manage the devices associated with that patroller120. Alternatively, a specific network technician may be selected, so as to enable assignment of his or her permissions specifically. Different types of permissions may be assigned to the specific network technician, including, but not limited to, global permissions, group permissions, patroller permissions, device permissions, and port permissions. The specific network technician may be part of a technician group, assigned to a particular patroller, and/or assigned to a particular device. The dealer may choose to allow or deny the technician permission to perform various tasks, such as rebooting routers, creating network maps, removing or adding devices, and/or controlling alerts and/or notifications. Additionally, the dealer may choose to allow the technician the same permissions as another technician, for example a parent technician. In one embodiment, global permissions may apply to every patroller that a technician is assigned to. In another embodiment, group permissions may also be used. In that case, the permissions would be for the group that the technician belongs to. A Network Map460module may present networked devices130in a network map, such that the user may see in a graphical representation, for example, what power devices are connected to a patroller120, and what other devices are connected to the power devices. Generally, the network map may show the connectivity of the networked devices130. In some embodiments, this may be done with a tree structure. The tree structure may display, for example, multiple devices connected to one switch. Exemplary information that may be displayed in the network map may include, as a non-limiting example, device name, IP, model, and manufacturer. A Device Portal465module may, in some embodiments, enable the user to access the native device GUI of any virtualized device135.
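One plausible way to model the layered permissions described above is sketched below; the permission keys and data layout are assumptions (group permissions are omitted for brevity), not the actual Permissions455module.

# Hypothetical sketch of layered permissions: a dealer grants a technician
# global, patroller, device, or port level permissions, and an action is
# allowed only if some applicable layer grants it.

from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class TechnicianPermissions:
    global_perms: Set[str] = field(default_factory=set)
    patroller_perms: Dict[str, Set[str]] = field(default_factory=dict)
    device_perms: Dict[str, Set[str]] = field(default_factory=dict)
    port_perms: Dict[str, Set[str]] = field(default_factory=dict)   # key: "device:port"

    def allows(self, action: str, patroller: str, device: str, port: str = "") -> bool:
        # A global grant applies everywhere; otherwise check narrower layers.
        if action in self.global_perms:
            return True
        if action in self.patroller_perms.get(patroller, set()):
            return True
        if action in self.device_perms.get(device, set()):
            return True
        return action in self.port_perms.get(f"{device}:{port}", set())


if __name__ == "__main__":
    # The dealer allows this technician to power cycle only two ports of pdu-7.
    tech = TechnicianPermissions(
        port_perms={"pdu-7:1": {"power_cycle"}, "pdu-7:2": {"power_cycle"}})
    print(tech.allows("power_cycle", "patroller-3", "pdu-7", "1"))   # True
    print(tech.allows("power_cycle", "patroller-3", "pdu-7", "5"))   # False
    print(tech.allows("reboot", "patroller-3", "router-1"))          # False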
As a non-limiting example, the Device Portal465module may be presented as an option on the Virtualization410page or on the Profile415page in relation to any virtualized device135listed on these pages. Referring toFIG.5, an exemplary Virtualization410page layout is shown. In the present embodiment, an exemplary page is shown for a power control virtualization enabled by a Power Control435module. Virtualizations using other modules, or even the Power Control435module, may look different in other embodiments. InFIG.5, the virtualization may have a PDU Group505, a PoE Group540, a Router Group555, and an Other Group565. The devices in any of the groups, such as the PDU Group505or PoE Group540, may be located at different physical locations, but may all still be controlled in the same virtualization environment. When a group such as the PDU Group505is selected, the client105may display an expanded view530of the PDU Group505. The expanded view530may display more information and/or options to the user than an unexpanded view535. The PDU Group505may consist of a number of PDU devices added to the PDU virtualization. A selected device510may be one such device. When it is selected, its information may be displayed as part of the PDU Group505. For example, the ports of the PDU may be displayed in the group, so as to enable the user to interact with the particular ports of the selected device510. An unselected device515may also be part of the PDU Group505, but in the present embodiment, the ports of the unselected device515may not be displayed to the user. For the selected device510, various information may be displayed. As a non-limiting example, an active PDU port520may be displayed with a colored graphic, and/or an inactive PDU port525may be displayed with an uncolored graphic. The PoE Group540may consist of a number of PoE switches added to the PoE virtualization. If there are multiple selected devices510, a graphical representation may be shown for each device. In the exemplary embodiment, the virtualization may display a first graphical representation552for a first selected PoE switch, and below it a second graphical representation554for a second selected PoE switch. In this exemplary manner, the user may be presented with a virtualization of multiple devices in the same interface, thus enabling their control and/or management from one interface. The user may control one or more devices from the PDU Group505, one or more devices from the PoE Group540, and/or a number of other devices that may be displayed in the same interface. For the PoE Group540, an active PoE port545may be displayed with a different colored graphic, and/or an inactive PoE port550may be displayed with a different uncolored graphic. The Router Group555may consist of a number of router devices added to the router virtualization. When a router is selected, the user may be presented with an Option560. An exemplary Option560may be an option to reboot the router. The Other Group565may consist of other devices which may be added to the virtualization. An exemplary option for a device in the Other Group565may also be to reboot the device. However, in an unexpanded view535, this option might not be displayed. 
In the Virtualization410page, some of the exemplary options for the virtualized devices135may include power cycling the virtualized devices135, power cycling specific ports of the virtualized devices135, creating schedules to perform various functions at specified times, and/or programming self-healing instructions for the virtualized devices135or specific ports of the virtualized devices135. Exemplary information that may be displayed about the virtualized devices135may include, for example when a port is hovered over, power consumption, current, and/or voltage information. Referring toFIG.6, an exemplary Profile415page layout is shown. This page may list all of the virtualized devices135that are part of a profile, and thus that are associated with a particular patroller120. In the present embodiment, the profile has a PDU1605device, a PoE1620device, a Router1625device, and a PDU2630device. In an exemplary embodiment, when a virtualized device135is hovered over, the user may be shown the status of the virtualized device135. The user may also be shown other information and/or options, such as action that needs to be taken for the device, an option to remove the virtualized device135from the profile, and the like. These devices may be viewed in an expanded view530or in an unexpanded view535. The expanded view530may present more information to the user, such as a graphical representation of all ports of any power device, and/or Information610and/or Options615. In the unexpanded view535, the user might only be presented with Information610. All of this and the following information may also be presented to the user in the Virtualization410page. As in the Virtualization410page, the user may manage and/or control the devices, and may view related information. In an exemplary embodiment, the Information610that may be displayed to the user may include hardware information related to the particular device, such as its IP address, model name, and/or MAC address. It may also include information on whether the device is online or offline. Some of the graphical information that may be presented in the expanded view530may include a particular graphic for each of an active PDU port520, an inactive PDU port525, an active PoE port545, and/or an inactive PoE port550. Additionally, the PoE ports may have some kind of indicator, such as a color indicator, indicating whether the port is passing only data or both data and power. Other information which may be presented may include an indicator showing which VLAN a particular port belongs to. In addition to the port-related options that may be presented, exemplary Options615may include an option to reboot the device, an option to access the native device GUI through a Device Portal465, and a lava tunnel option. Referring toFIG.7, an exemplary Administration420page layout is shown. This page may have an Inventory Management705option and/or a Subscription management710option. These options may be used, for example, to manage licenses for the patrollers120. In an exemplary embodiment, The Administration420page may also incorporate the Permissions455module, which is described above. The Permissions455module may be incorporated into an Accounts and Permissions715option. The Accounts and Permissions715option may further have options related to Group Selection720, Technician Selection725, Permission Type730, Patroller Selection735, Device Selection740, and/or List of Specific Permissions745. 
As in the Permissions455module, the user may make any number of selections and/or specifications using these options, thereby enabling a network technician or other user to perform particular actions on particular devices.
42,004
11861386
DETAILED DESCRIPTION Generally described, aspects of the present disclosure relate to an on-demand code execution system. The on-demand code execution system enables rapid execution of code, which may be supplied by users of the on-demand code execution system. More specifically, embodiments of the present disclosure relate to an on-demand code-execution gateway, which facilitates access to the on-demand code execution system. As described in detail herein, the on-demand code execution system may provide a network-accessible service enabling users to submit or designate computer-executable code to be executed by isolated execution environments on the on-demand code execution system. Each set of code on the on-demand code execution system may define a “task,” and implement specific functionality corresponding to that task when executed on an execution environment, such as a virtual machine instance, of the on-demand code execution system. Individual implementations of the task on the on-demand code execution system may be referred to as an “execution” of the task (or a “task execution”). The on-demand code execution system can further enable users to trigger execution of a task based on a variety of potential events, such as detecting new data at a network-based storage system, transmission of an application programming interface (“API”) call to the on-demand code execution system, or transmission of a specially formatted hypertext transport protocol (“HTTP”) packet to the on-demand code execution system. Thus, users may utilize the on-demand code execution system to execute any specified executable code “on-demand,” without requiring configuration or maintenance of the underlying hardware or infrastructure on which the code is executed. Further, the on-demand code execution system may be configured to execute tasks in a rapid manner (e.g., in under 100 milliseconds [ms]), thus enabling execution of tasks in “real-time” (e.g., with little or no perceptible delay to an end user). The on-demand code execution system may thus allow users to execute code in a “serverless” environment (e.g., one in which the underlying server is not under user control), but may require that user requests to execute code in the environment meet criteria that would not otherwise be applicable. For example, the on-demand code execution system may require that code execution requests be authenticated with a cryptographic signature, submitted in a particular format, submitted via an API, or meet other requirements. In some aspects, satisfying these criteria may require computing resources that a computing device does not have. For example, an “Internet of Things” (“IoT”) device may have limited processing power or memory, and thus may not have sufficient computing resources to generate a cryptographic signature or convert a request to a particular format. Additionally, in some aspects, the on-demand code execution system may provide output in a particular format, and a computing device with limited computing resources may not understand the format or have the resources to translate it. An on-demand code execution gateway may thus provide an interface that allows computing devices to interact with an on-demand code execution system regardless of whether the computing devices are capable of providing input in the format expected by the system or parsing output in the format provided by the system. 
The on-demand code execution gateway may thus allow computing devices to interact with code executing in the serverless on-demand environment as though the code were executing on a conventional server, and may thereby allow the on-demand code execution system to be utilized more efficiently. In some embodiments, computing devices may request a network resource or service, such as access to a web page, web-based application, database, file, image, media content, data stream, or the like. The on-demand code execution gateway may determine whether to fulfill the request by sending it to a server specifically configured to handle the request, or by generating and sending a request for on-demand code execution and then processing the resulting output. The term “serverless environment,” as used herein, is intended to refer to an environment in which responsibility for managing generation, configuration, and state of an underlying execution environment is abstracted away from a user, such that the user need not, for example, create the execution environment, install an operating system within the execution environment, or manage a state of the environment in order to execute desired code in the environment. Similarly, the term “server-based environment” is intended to refer to an environment in which a user is at least partly responsible for managing generation, configuration, or state of an underlying execution environment in addition to executing desired code in the environment. One skilled in the art will thus appreciate that “serverless” and “server-based” may indicate the degree of user control over execution environments in which code is executed, rather than the actual absence or presence of a server. In some embodiments, a user who submits a task to an on-demand code execution system may register the task with the on-demand code execution gateway or otherwise configure the gateway to invoke the on-demand code execution system. For example, the user may provide credentials that the on-demand code execution gateway may use to authenticate itself to the on-demand code execution system and submit a request to execute a task. As a further example, the user may specify one or more uniform resource locators (“URLs”) corresponding to requests that the gateway can fulfill by invoking on-demand code execution of a specified task. The on-demand code execution gateway may thus identify requests that can be fulfilled by invoking on-demand code execution of a user-submitted task. As will be appreciated by one of skill in the art in light of the present disclosure, the embodiments disclosed herein improves the ability of computing systems, such as on-demand code execution systems, to execute code in an efficient manner. Moreover, the presently disclosed embodiments address technical problems inherent within computing systems; specifically, the problem of devices with limited computing resources being unable to utilize on-demand code execution systems due to computationally expensive requirements for providing input and output to these systems. These technical problems are addressed by the various technical solutions described herein, including the provisioning of an on-demand code execution gateway. Thus, the present disclosure represents an improvement on existing data processing systems and computing systems in general. 
As described in more detail below, the on-demand code execution system may include a worker manager configured to receive user code (threads, programs, etc., composed in any of a variety of programming languages) and execute the code in a highly scalable, low latency manner, without requiring user configuration of a virtual machine instance. Specifically, the worker manager can, prior to receiving the user code and prior to receiving any information from a user regarding any particular virtual machine instance configuration, create and configure virtual machine instances according to a predetermined set of configurations, each corresponding to any one or more of a variety of run-time environments. Thereafter, the worker manager receives user-initiated requests to execute code, and identifies a pre-configured virtual machine instance to execute the code based on configuration information associated with the request. The worker manager can further allocate the identified virtual machine instance to execute the user's code at least partly by creating and configuring containers inside the allocated virtual machine instance, and provisioning the containers with code of the task as well as an dependency code objects. Various embodiments for implementing a worker manager and executing user code on virtual machine instances is described in more detail in U.S. Pat. No. 9,323,556, entitled “PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE,” and filed Sep. 30, 2014 (the “'556 Patent”), the entirety of which is hereby incorporated by reference. As used herein, the term “virtual machine instance” is intended to refer to an execution of software or other executable code that emulates hardware to provide an environment or platform on which software may execute (an “execution environment”). Virtual machine instances are generally executed by hardware devices, which may differ from the physical hardware emulated by the virtual machine instance. For example, a virtual machine may emulate a first type of processor and memory while being executed on a second type of processor and memory. Thus, virtual machines can be utilized to execute software intended for a first execution environment (e.g., a first operating system) on a physical device that is executing a second execution environment (e.g., a second operating system). In some instances, hardware emulated by a virtual machine instance may be the same or similar to hardware of an underlying device. For example, a device with a first type of processor may implement a plurality of virtual machine instances, each emulating an instance of that first type of processor. Thus, virtual machine instances can be used to divide a device into a number of logical sub-devices (each referred to as a “virtual machine instance”). While virtual machine instances can generally provide a level of abstraction away from the hardware of an underlying physical device, this abstraction is not required. For example, assume a device implements a plurality of virtual machine instances, each of which emulate hardware identical to that provided by the device. Under such a scenario, each virtual machine instance may allow a software application to execute code on the underlying hardware without translation, while maintaining a logical separation between software applications running on other virtual machine instances. 
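A minimal sketch of the warm-pool behavior described above follows: instances are pre-created per runtime configuration, and a request is matched to one of them, in which a container is provisioned with the task code and its dependency code objects; all names are illustrative assumptions, not the actual worker manager implementation.

# Hypothetical sketch of pre-configured virtual machine instances and
# container provisioning for an incoming execution request.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VirtualMachineInstance:
    runtime: str                       # e.g., "python3", "nodejs"
    containers: List[str] = field(default_factory=list)

    def provision_container(self, task_code: str, dependencies: List[str]) -> str:
        container = f"container-{len(self.containers)}:{task_code}+{','.join(dependencies)}"
        self.containers.append(container)
        return container


class WorkerManager:
    def __init__(self, runtimes: List[str], pool_size: int = 2) -> None:
        # Pre-create instances before any user code or request is received.
        self._pool: Dict[str, List[VirtualMachineInstance]] = {
            runtime: [VirtualMachineInstance(runtime) for _ in range(pool_size)]
            for runtime in runtimes
        }

    def execute(self, runtime: str, task_code: str, dependencies: List[str]) -> str:
        instance = self._pool[runtime][0]          # pick a matching warm instance
        return instance.provision_container(task_code, dependencies)


if __name__ == "__main__":
    manager = WorkerManager(["python3", "nodejs"])
    print(manager.execute("python3", "resize_image.py", ["pillow"]))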
This process, which is generally referred to as “native execution,” may be utilized to increase the speed or performance of virtual machine instances. Other techniques that allow direct utilization of underlying hardware, such as hardware pass-through techniques, may be used as well. While a virtual machine executing an operating system is described herein as one example of an execution environment, other execution environments are also possible. For example, tasks or other processes may be executed within a software “container,” which provides a runtime environment without itself providing virtualization of hardware. Containers may be implemented within virtual machines to provide additional security, or may be run outside of a virtual machine instance. Embodiments of the disclosure will now be described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described. FIG.1is a block diagram of an illustrative operating environment100in which an on-demand code execution gateway170may operate based on communications with an on-demand code execution system110, web servers180, computing devices102, auxiliary services106, and network-based data storage services108. In general, the computing devices102can be any computing device such as a desktop, laptop or tablet computer, personal computer, wearable computer, server, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, set-top box, voice command device, camera, digital media player, and the like. The on-demand code execution gateway170may provide the computing devices102with one or more user interfaces for invoking user-provided code (e.g., submitting a request to execute the user code on the on-demand code execution system110). In some embodiments, the on-demand code execution gateway170may provide the computing devices102with an interface that allows the on-demand code execution gateway170to determine whether requests to execute code will be fulfilled by the on-demand code execution system110or one or more web servers180. For example, the on-demand code execution gateway170may provide an interface that accepts input in a format understood by the web servers180(e.g., an HTTP “POST” method), and may determine whether to pass this input to the web servers180or translate it into a format understood by the on-demand code execution system110. The on-demand code execution gateway170includes a load balancer174, which implements aspects of the present disclosure including, for example, providing an interface to the on-demand code execution system110that allows computing devices102to request execution of code on the system110without performing such actions as authenticating the request, generating the request into a format expected by the system110, buffering and serializing the request, and other actions as described in more detail below. 
The on-demand code execution gateway170further includes a request serializer172, which may serialize input and de-serialize output of the system110to facilitate communication between the system110and the computing devices102. In some embodiments, the request serializer172may manage connections to the on-demand code execution system110. For example, the request serializer172may maintain a connection to a frontend120to reduce the overhead costs associated with setting up and tearing down connections on a per-request basis. In some embodiments, the load balancer174may interact with and distribute requests between a number of web servers180. In further embodiments, as described in more detail below, the load balancer174may distribute requests to the on-demand code execution system110based on the workload of the web servers180or other criteria. The on-demand code execution gateway170may thus receive requests that can be fulfilled by the web servers180, and the load balancer174may determine that the request should instead be fulfilled by the on-demand code execution system110. In some embodiments, the on-demand code execution system110may provide one or more user interfaces, command-line interfaces (CLIs), application programing interfaces (APIs), and/or other programmatic interfaces for generating and uploading user-executable code (e.g., including metadata identifying dependency code objects for the uploaded code), invoking the user-provided code (e.g., submitting a request directly to the on-demand code execution system110, in a format understood by that system, to execute user-submitted code), scheduling event-based jobs or timed jobs, tracking the user-provided code, and/or viewing other logging or monitoring information related to their requests and/or user code. Although one or more embodiments may be described herein as using a user interface, it should be appreciated that such embodiments may, additionally or alternatively, use any CLIs, APIs, or other programmatic interfaces. The illustrative environment100further includes one or more network-based data storage services108, configured to enable the on-demand code execution system110to store and retrieve data from one or more persistent or substantially persistent data sources. Illustratively, the network-based data storage services108may enable the on-demand code execution system110to store information corresponding to a task, such as code or metadata, to store additional code objects representing dependencies of tasks, to retrieve data to be processed during execution of a task, and to store information (e.g., results) regarding that execution. The network-based data storage services108may represent, for example, a relational or non-relational database. In another example, the network-based data storage services108may represent a network-attached storage (NAS), configured to provide access to data arranged as a file system. The network-based data storage services108may further enable the on-demand code execution system110to query for and retrieve information regarding data stored within the on-demand code execution system110, such as by querying for a number of relevant files or records, sizes of those files or records, file or record names, file or record creation times, etc. In some instances, the network-based data storage services108may provide additional functionality, such as the ability to separate data into logical groups (e.g., groups associated with individual accounts, etc.). 
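Connection reuse by the request serializer, as described above, might be sketched with a single persistent HTTP connection; the host, path, and payload format are assumptions about the transport, not the actual frontend120protocol.

# Hypothetical sketch of connection management: one connection to a frontend
# is established and reused for successive requests instead of setting up and
# tearing down a connection per request.

import http.client
import json


class FrontendConnection:
    """Maintains one persistent HTTP connection to an assumed frontend host."""

    def __init__(self, host: str, port: int = 443) -> None:
        # The connection is established once and reused for every invocation.
        self._conn = http.client.HTTPSConnection(host, port)

    def invoke(self, path: str, payload: dict) -> bytes:
        body = json.dumps(payload)
        self._conn.request("POST", path, body=body,
                           headers={"Content-Type": "application/json"})
        response = self._conn.getresponse()
        return response.read()

    def close(self) -> None:
        self._conn.close()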
While shown as distinct from the auxiliary services106, the network-based data storage services108may in some instances also represent a type of auxiliary service106. The computing devices102, auxiliary services106, and network-based data storage services108may communicate with the on-demand code execution gateway170via a network104, which may include any wired network, wireless network, or combination thereof. For example, the network104may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof. As a further example, the network104may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the network104may be a private or semi-private network, such as a corporate or university intranet. The network104may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network104can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks. For example, the protocols used by the network104may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein. In some embodiments, the on-demand code execution gateway170may communicate with the web servers180or the on-demand code execution system110via the network104or another network. The on-demand code execution system110, on-demand code execution gateway170, and web servers180are depicted inFIG.1as operating in a distributed computing environment including several computer systems that are interconnected using one or more computer networks (not shown inFIG.1). The system110, gateway170, and servers180could also operate within a computing environment having more or fewer devices than are illustrated inFIG.1. Additionally, while shown as separate systems, the system110, gateway170, and servers180(or any combination thereof) may in some embodiments be implemented as a single system. Thus, the depictions of the system110, gateway170, and servers180inFIG.1should be taken as illustrative and not limiting to the present disclosure. For example, the on-demand code execution system110, the gateway170, and/or the servers180(or various constituents thereof) could implement various Web services components, hosted or “cloud” computing environments, and/or peer to peer network configurations to implement at least a portion of the processes described herein. Further, the on-demand code execution system110, the on-demand code execution gateway170, and the web servers180may be implemented directly in hardware or software executed by hardware devices and may, for instance, include one or more physical or virtual servers implemented on physical computer hardware configured to execute computer executable instructions for performing various features that will be described herein. 
The one or more servers may be geographically dispersed or geographically co-located, for instance, in one or more data centers. In some instances, the one or more servers may operate as part of a system of rapidly provisioned and released computing resources, often referred to as a “cloud computing environment.” In some embodiments, any of the components within the on-demand code execution system110can communicate with other components of the on-demand code execution system110via the network104. In other embodiments, not all components of the on-demand code execution system110are capable of communicating with other components of the environment100. In one example, only the frontend120(which may in some instances represent multiple frontends120) may be connected to the gateway170or the network104, and other components of the on-demand code execution system110may communicate with other components of the environment100via the frontends120. The on-demand code execution system110includes one or more frontends120, which enable interaction with the on-demand code execution system110. In an illustrative embodiment, the frontends120serve as an interface allowing the on-demand code execution gateway170to request execution of user-submitted code. In some embodiments, the frontends120also serve as a “front door” to other services provided by the on-demand code execution system110, enabling users to, for example provide computer executable code. The frontends120include a variety of components to enable interaction between the on-demand code execution system110and other computing devices. For example, each frontend120may include a request interface providing computing devices102with the ability to upload or otherwise communicate user-specified code to the on-demand code execution system110, and may enable computing devices102that are capable of doing so to request execution of that code without going through the gateway170. In one embodiment, the request interface communicates with external computing devices (e.g., computing devices102, auxiliary services106, etc.) via a graphical user interface (GUI), CLI, or API. The frontends120process the requests and makes sure that the requests are properly authorized. For example, the frontends120may determine whether the user associated with the request is authorized to access the user code specified in the request. In the illustrated embodiment ofFIG.1, the frontends120may determine whether the on-demand code execution gateway170has been authorized to access the user code specified in a request. References to user code as used herein may refer to any program code (e.g., a program, routine, subroutine, thread, etc.) written in a specific program language. In the present disclosure, the terms “code,” “user code,” and “program code,” may be used interchangeably. Such user code may be executed to achieve a specific function, for example, in connection with a particular web application or mobile application developed by the user. As noted above, individual collections of user code (e.g., to achieve a specific function) are referred to herein as “tasks,” while specific executions of that code (including, e.g., compiling code, interpreting code, or otherwise making the code executable) are referred to as “task executions” or simply “executions.” Tasks may be written, by way of non-limiting example, in JavaScript (e.g., node.js), Java, Python, and/or Ruby (and/or another programming language). 
Tasks may be “triggered” for execution on the on-demand code execution system110in a variety of manners. In one embodiment, a user or other computing device may transmit a request to execute a task, which can generally be referred to as a “call” to execute the task. Such calls may include the user code (or the location thereof) to be executed and one or more arguments to be used for executing the user code. For example, a call may provide the user code of a task along with the request to execute the task. In another example, a call may identify a previously uploaded task by its name or an identifier. In yet another example, code corresponding to a task may be included in a call for the task, as well as being uploaded in a separate location (e.g., storage of an auxiliary service106or a storage system internal to the on-demand code execution system110) prior to the request being received by the on-demand code execution system110. As noted above, the code for a task may reference additional code objects maintained at the on-demand code execution system110by use of identifiers of those code objects, such that the code objects are combined with the code of a task in an execution environment prior to execution of the task. The on-demand code execution system110may vary its execution strategy for a task based on where the code of the task is available at the time a call for the task is processed. A request interface of the frontend120may receive calls to execute tasks as Hypertext Transfer Protocol Secure (HTTPS) requests from a user. Also, any information (e.g., headers and parameters) included in the HTTPS request may also be processed and utilized when executing a task. As discussed above, any other protocols, including, for example, HTTP, MQTT, and CoAP, may be used to transfer the message containing a task call to the request interface. To manage requests for code execution, the frontend120can include an execution queue (not shown inFIG.1), which can maintain a record of requested task executions. Illustratively, the number of simultaneous task executions by the on-demand code execution system110is limited, and as such, new task executions initiated at the on-demand code execution system110(e.g., via an API call, via a call from an executed or executing task, etc.) may be placed on the execution queue and processed, e.g., in a first-in-first-out order. In some embodiments, the on-demand code execution system110may include multiple execution queues, such as individual execution queues for each user account. For example, users of the on-demand code execution system110may desire to limit the rate of task executions on the on-demand code execution system110(e.g., for cost reasons). Thus, the on-demand code execution system110may utilize an account-specific execution queue to throttle the rate of simultaneous task executions by a specific user account. In some instances, the on-demand code execution system110may prioritize task executions, such that task executions of specific accounts or of specified priorities bypass or are prioritized within the execution queue. In other instances, the on-demand code execution system110may execute tasks immediately or substantially immediately after receiving a call for that task, and thus, the execution queue may be omitted. The frontend120can further include an output interface (not shown inFIG.1) configured to output information regarding the execution of tasks on the on-demand code execution system110.
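An account-specific, first-in-first-out execution queue with per-account throttling, as described above, might be sketched as follows; the data structures and limits are illustrative assumptions.

# Hypothetical sketch: calls are recorded first-in-first-out per account, and
# an account is throttled once it reaches its limit of simultaneous executions.

from collections import defaultdict, deque
from typing import Deque, Dict, Optional, Tuple


class ExecutionQueue:
    def __init__(self, max_concurrent_per_account: int = 2) -> None:
        self._queues: Dict[str, Deque[str]] = defaultdict(deque)
        self._running: Dict[str, int] = defaultdict(int)
        self._limit = max_concurrent_per_account

    def enqueue(self, account: str, call: str) -> None:
        self._queues[account].append(call)          # first-in-first-out order

    def next_call(self) -> Optional[Tuple[str, str]]:
        # Dispatch the oldest queued call of any account under its limit.
        for account, queue in self._queues.items():
            if queue and self._running[account] < self._limit:
                self._running[account] += 1
                return account, queue.popleft()
        return None

    def finished(self, account: str) -> None:
        self._running[account] -= 1


if __name__ == "__main__":
    q = ExecutionQueue(max_concurrent_per_account=1)
    q.enqueue("alice", "task-a")
    q.enqueue("alice", "task-b")
    print(q.next_call())   # ('alice', 'task-a')
    print(q.next_call())   # None: alice is throttled until task-a finishes
    q.finished("alice")
    print(q.next_call())   # ('alice', 'task-b')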
Illustratively, the output interface may transmit data regarding task executions (e.g., results of a task, errors related to the task execution, or details of the task execution, such as total time required to complete the execution, total data processed via the execution, etc.) to the on-demand code execution gateway170, computing devices102, or to auxiliary services106, which may include, for example, billing or logging services. The output interface may further enable transmission of data, such as service calls, to auxiliary services106. For example, the output interface may be utilized during execution of a task to transmit an API request to an external service106(e.g., to store data generated during execution of the task). To execute tasks, the on-demand code execution system110includes one or more worker managers140that manage the instances used for servicing incoming calls to execute tasks. In the example illustrated inFIG.1, each worker manager140manages an active pool of virtual machine instances154A-B, which are currently assigned to one or more users and are implemented by one or more physical host computing devices150. The physical host computing devices150and the virtual machine instances154A-B may further implement one or more containers158A-C, which may contain and execute one or more user-submitted codes160A-G. Containers are logical units created within a virtual machine instance, or on a host computing device, using the resources available on that instance or device. For example, each worker manager140may, based on information specified in a call to execute a task, create a new container or locate an existing container158A-C and assign the container to handle the execution of the task. The containers156A-C, virtual machine instances154A-B, and host computing devices150may further include language runtimes, code libraries, or other supporting functions (not depicted inFIG.1) that facilitate execution of user-submitted code160A-C. The physical computing devices150and the virtual machine instances154A-B may further include operating systems152and156A-B. In various embodiments, operating systems152and156A-B may be the same operating system, variants of the same operating system, different operating systems, or combinations thereof. Although the virtual machine instances154A-B are described here as being assigned to a particular user, in some embodiments, an instance154A-B may be assigned to a group of users, such that the instance is tied to the group of users and any member of the group can utilize resources on the instance. For example, the users in the same group may belong to the same security group (e.g., based on their security credentials) such that executing one member's task in a container on a particular instance after another member's task has been executed in another container on the same instance does not pose security risks. Similarly, the worker managers140may assign the instances and the containers according to one or more policies that dictate which requests can be executed in which containers and which instances can be assigned to which users. An example policy may specify that instances are assigned to collections of users who share the same account (e.g., account for accessing the services provided by the on-demand code execution system110). In some embodiments, the requests associated with the same user group may share the same containers (e.g., if the user codes associated therewith are identical). 
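Container assignment as described above might be sketched by locating an existing container already provisioned for a task within the caller's security group, or creating one if none exists; the identifiers below are illustrative assumptions rather than the worker manager140interface.

# Hypothetical sketch of assigning a call to a new or existing container
# inside a virtual machine instance tied to the caller's security group.

from typing import Dict


class WorkerAssignment:
    def __init__(self) -> None:
        # Maps (security_group, task_id) to an existing container identifier.
        self._containers: Dict[tuple, str] = {}
        self._counter = 0

    def assign(self, security_group: str, task_id: str) -> str:
        key = (security_group, task_id)
        if key not in self._containers:
            # No reusable container for this task: create one in an instance
            # assigned to the caller's security group.
            self._counter += 1
            self._containers[key] = f"{security_group}/instance-1/container-{self._counter}"
        return self._containers[key]


if __name__ == "__main__":
    workers = WorkerAssignment()
    print(workers.assign("team-a", "resize_image"))   # new container
    print(workers.assign("team-a", "resize_image"))   # reuses the same container
    print(workers.assign("team-b", "resize_image"))   # separate group, new container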
In some embodiments, a task does not differentiate between the different users of the group and simply indicates the group to which the users associated with the task belong. Once a triggering event to execute a task has been successfully processed by a frontend120, the frontend120passes a request to a worker manager140to execute the task. In one embodiment, each frontend120may be associated with a corresponding worker manager140(e.g., a worker manager140co-located or geographically nearby to the frontend120) and thus the frontend120may pass most or all requests to that worker manager140. In another embodiment, a frontend120may include a location selector configured to determine a worker manager140to which to pass the execution request. In one embodiment, the location selector may determine the worker manager140to receive a call based on hashing the call, and distributing the call to a worker manager140selected based on the hashed value (e.g., via a hash ring). Various other mechanisms for distributing calls between worker managers140will be apparent to one of skill in the art. In accordance with embodiments of the present disclosure, the worker manager140can determine a host computing device150or a virtual machine instance154A-B for executing a task. As shown inFIG.1, various combinations and configurations of host computing devices150, virtual machine instances154A-B, and containers158A-C may be used to facilitate execution of user submitted code160A-C. In the illustrated example, the host computing device150implements two virtual machine instances154A and154B. Virtual machine instance154A, in turn, implements two containers158A and158B, which contain user-submitted code160A and160B respectively. Virtual machine instance154B implements a single container158C, which contains user-submitted code160C. It will be understood that these embodiments are illustrated for purposes of example, and that many other embodiments are within the scope of the present disclosure. While some functionalities are generally described herein with reference to an individual component of the on-demand code execution system110, other components or a combination of components may additionally or alternatively implement such functionalities. For example, a worker manager140may operate to provide functionality associated with execution of user-submitted code as described herein with reference to an on-demand code execution gateway170. FIG.2depicts a general architecture of a computing system (referenced as on-demand code execution gateway170) that operates to provide an interface to the on-demand code execution system110. The general architecture of the on-demand code execution gateway170depicted inFIG.2includes an arrangement of computer hardware and software modules that may be used to implement aspects of the present disclosure. The hardware modules may be implemented with physical electronic devices, as discussed in greater detail below. The on-demand code execution gateway170may include many more (or fewer) elements than those shown inFIG.2. It is not necessary, however, that all of these generally conventional elements be shown in order to provide an enabling disclosure. Additionally, the general architecture illustrated inFIG.2may be used to implement one or more of the other components illustrated inFIG.1. 
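The hash-ring distribution performed by the location selector described above might be sketched as follows; the hashing scheme and ring layout are illustrative assumptions.

# Hypothetical sketch: the call is hashed and mapped onto a ring of worker
# managers, so the same call characteristics consistently reach the same
# worker manager.

import bisect
import hashlib
from typing import List


class HashRing:
    def __init__(self, worker_managers: List[str]) -> None:
        # Place each worker manager at a position on the ring.
        self._ring = sorted(
            (self._hash(name), name) for name in worker_managers
        )
        self._positions = [position for position, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.sha256(value.encode("utf-8")).hexdigest(), 16)

    def select(self, call_key: str) -> str:
        # Walk clockwise from the call's position to the next worker manager.
        index = bisect.bisect(self._positions, self._hash(call_key)) % len(self._ring)
        return self._ring[index][1]


if __name__ == "__main__":
    ring = HashRing(["worker-manager-1", "worker-manager-2", "worker-manager-3"])
    for call in ("account-7:resize_image", "account-9:transcode"):
        print(call, "->", ring.select(call))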
As illustrated, the on-demand code execution gateway170includes a processor202, input/output device interfaces204, a network interface206, and a data store208, all of which may communicate with one another by way of a communication bus. The network interface206may provide connectivity to one or more networks or computing systems. The processor202may thus receive information and instructions from other computing systems or services via the network104. The processor202may also communicate to and from a memory220and further provide output information for an optional display (not shown) via the input/output device interfaces204. The input/output device interfaces204may also accept input from an optional input device (not shown). The memory220may contain computer program instructions (grouped as modules in some embodiments) that the processor202executes in order to implement one or more aspects of the present disclosure. The memory220generally includes random access memory (RAM), read only memory (ROM) and/or other persistent, auxiliary or non-transitory computer readable media. The memory220may store an operating system222that provides computer program instructions for use by the processor202in the general administration and operation of the on-demand code execution gateway170. The memory220may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in one embodiment, the memory220includes a user interface module224that generates interfaces (and/or instructions therefor) that enable access to the on-demand code execution server110. In addition, the memory220may include and/or communicate with one or more data repositories (not shown), for example, to access user program codes and/or libraries. In addition to and/or in combination with the user interface module224, the memory220may include a request serializer172and a load balancer174that may be executed by the processor202. In one embodiment, the request serializer172and load balancer174individually or collectively implement various aspects of the present disclosure, e.g., processing request for network resources and serializing them into a format understood by an on-demand code execution server110, as described further below. While the request serializer172and load balancer174are shown inFIG.2as part of the on-demand code execution gateway170, in other embodiments, all or a portion of the request serializer172and load balancer174may be implemented by other components of the on-demand code execution system110and/or another computing device. For example, in certain embodiments of the present disclosure, another computing device in communication with the on-demand code execution system110may include several modules or components that operate similarly to the modules and components illustrated as part of the on-demand code execution gateway170. The memory220may further include user requests226, which may be loaded into memory in conjunction with a user-submitted request that can be fulfilled by executing a task on the on-demand code execution system110. The memory220may further include execution output228, which may be received from the on-demand code execution system110after a task has been executed. In some embodiments, the on-demand code execution gateway170may further include components other than those illustrated inFIG.2. 
For example, the memory220may further include information regarding various user-submitted codes that are available for execution, authentication information for accessing various user-submitted codes, or metadata or other information that was submitted with the request.FIG.2is thus understood to be illustrative but not limiting. FIGS.3A and3Bdepict illustrative interactions for fulfilling requests for computing resources, such as requests to access a web page or a web-based application, via an on-demand code execution gateway. With reference now toFIG.3A, at (1), a computing device102requests a network resource. Illustratively, the request may be in the form of a Uniform Resource Locator (“URL”), which may be transmitted by the computing device to the load balancer174. At (2), in some embodiments, the load balancer174assesses the current workloads of the servers it balances (which are not depicted inFIG.3A) to determine whether any of these servers have capacity to fulfill the request. In some embodiments, the load balancer174may obtain server load information from the servers in the form of processor utilization metrics, memory usage, and other such measurements. In other embodiments, the load balancer174may determine server load based on the volume and frequency of requests that it has assigned. In some embodiments, the load balancer174determines that one of its servers has sufficient capacity to fulfill the request, and assigns the request to the server. In other embodiments, at (3), the load balancer174determines that none of its servers currently have sufficient capacity to fulfill the request, and thus determines to fulfill the request using on-demand execution. In some embodiments, the load balancer174may determine to use on-demand execution for reasons other than server load. For example, the load balancer174may determine that on-demand code execution will make better use of computing resources, will provide better performance (e.g., faster results), provide lower latency for certain requests, or apply other criteria to make the determination to use on-demand execution. Having made such a determination, at (4), the load balancer174then passes the request to the request serializer172. In some embodiments, the load balancer174may act as a firewall that prevents malformed or malicious requests from reaching an on-demand code execution system and/or other servers. For example, the load balancer174may authenticate a request it receives by, e.g., exchanging tokens or otherwise verifying the source of the request. In further embodiments, the load balancer174may throttle requests to the on-demand code execution system or otherwise protect the integrity of the on-demand code execution system. In some embodiments, the load balancer174may determine that the number of servers in its server pool should be increased based on the number of requests that the servers are unable to fulfill due to load, or may determine that the number of servers may be decreased if few or no requests are being fulfilled via on-demand code execution. The load balancer174may analyze the quantity and timing of the requests it receives, and may assess the cost-benefit tradeoff of instantiating additional servers. For example, the load balancer174may determine that it is experiencing a temporary “spike” or increase in traffic, and that the spike will be over before it can bring additional servers online.
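The routing decision ofFIG.3A described above might be sketched as a simple capacity check that falls back to on-demand execution; the capacity metric and threshold are illustrative assumptions.

# Hypothetical sketch: the load balancer checks whether any server in its pool
# has capacity, and if none does (or another criterion favors it), the request
# is handed to the request serializer for on-demand execution.

from typing import Dict


def route_request(server_loads: Dict[str, float],
                  capacity_threshold: float = 0.8,
                  prefer_on_demand: bool = False) -> str:
    """Return the name of the server to use, or 'on-demand' if none has capacity."""
    if not prefer_on_demand:
        for server, load in sorted(server_loads.items(), key=lambda item: item[1]):
            if load < capacity_threshold:
                return server     # a server has capacity; assign it the request
    # Step (3): no server has capacity (or other criteria apply), so the
    # request will be fulfilled via on-demand code execution.
    return "on-demand"


if __name__ == "__main__":
    print(route_request({"web-1": 0.55, "web-2": 0.92}))            # web-1
    print(route_request({"web-1": 0.95, "web-2": 0.92}))            # on-demand
    print(route_request({"web-1": 0.10}, prefer_on_demand=True))    # on-demand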
As a further example, the load balancer174may determine that few or no requests are being fulfilled via on-demand code execution, and server workloads are such that the number of servers can be reduced. In some embodiments, the number of servers may be reduced to zero (e.g., a determination may be made that all requests should be fulfilled via on-demand code execution). In some embodiments, the load balancer174or another component of the on-demand code execution gateway170may perform a cost-benefit analysis of adding or removing a server, and may consider factors such as request response times, idle capacity, costs associated with on-demand code execution, costs associated with maintaining a server, and other factors. At (4), the load balancer174may pass the request for a network resource to the request serializer172, which may encode the request into a format accepted by an on-demand code execution system. Illustratively, the on-demand code execution system may require that requests be in a particular format. For example, the system may require that a request include certain headers or other metadata in a particular format, or that the body of the request be formatted as a base64-encoded JavaScript Object Notation (“JSON”) string or blob. At (5), the request serializer172serializes the request. Illustratively, the request may be serialized by converting it to a format that is accepted by an on-demand code execution system, or by generating a “blank” request in an accepted format and populating it with information from the originally received request. In some embodiments, the request serializer172may generate a hash key, signature, token, or other identifier to allow the on-demand code execution system to authenticate the request. The request serializer172may also provide other information that is absent from the originally received request but required by the on-demand code execution system, such as information identifying the particular task or user-submitted code that may be executed to fulfill the request. In some embodiments, the request serializer172or the load balancer174may determine the appropriate task to execute based on characteristics of the request, such as an originating IP address, destination IP address, information contained in a URL string or in HTTP headers, or other characteristics. In some embodiments, as described above, the request for a network resource may not be received all at once. For example, the request may be to process an image, data file, or other binary object, and the body of the request may include the object and may be distributed across multiple packets or messages. The request serializer172may thus buffer portions of the request until a complete request has been received, so that the entire request can be signed and provided to the on-demand code execution system. At (6), the serialized request, which may also be referred to herein as an “encoded input,” is transmitted to a frontend120of an on-demand code execution system. The frontend120processes the serialized request, identifies a suitable worker manager140, and at (7) requests that the worker manager140assign a worker to execute the requested code. At (8), the worker manager140identifies a host computing device150that can instantiate a “worker” execution environment (e.g., a virtual machine instance or a container within a virtual machine instance) to execute the task, and assigns the task to the execution environment on the host computing device150. 
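By way of a non-limiting illustration, the following minimal Python sketch shows one way a request serializer might encode an incoming HTTP request into a base64-encoded JSON payload and sign it. The field names, the task identifier, and the HMAC-based signature are assumptions chosen for illustration only; they are not asserted to be the format required by any particular on-demand code execution system.

import base64
import hashlib
import hmac
import json


def serialize_request(method, path, headers, body, task_name, signing_key):
    # Encode the original HTTP request as a JSON-friendly payload (step (5)).
    encoded_body = base64.b64encode(body).decode("ascii")
    payload = {
        "task": task_name,            # identifies the user-submitted code to execute (hypothetical field)
        "httpMethod": method,
        "path": path,
        "headers": headers,
        "isBase64Encoded": True,
        "body": encoded_body,
        # A signature lets the on-demand code execution system authenticate the request.
        "signature": hmac.new(signing_key.encode(), encoded_body.encode(),
                              hashlib.sha256).hexdigest(),
    }
    return json.dumps(payload)


encoded_input = serialize_request("POST", "/render", {"Content-Type": "image/png"},
                                  b"\x89PNG...", "resize-image", "shared-secret")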
In some embodiments, the worker manager140may identify an existing execution environment to execute the task and assign the task accordingly. At (9), the execution environment on the host computing device150executes the task. In some embodiments, the load balancer174or the request serializer172may interact with multiple frontends120or multiple on-demand code execution systems, and may assign requests to different frontends, different on-demand code execution systems, or different tasks within an on-demand code execution system. For example, the load balancer174may assign requests to be fulfilled by a high-performance task that consumes more computing resources when load on the on-demand code execution system is low, and may assign requests to be fulfilled by a task that consumes fewer resources but still produces acceptable results when load is high. The load balancer174or the request serializer172may, in some embodiments, perform a periodic or demand-driven health check on the frontends120, on-demand code execution systems, or executing tasks, and may fail over to a different frontend120, on-demand code execution system, or task if the health check indicates a problem with task execution. With reference now toFIG.3B, at (10), the host computing device150provides the output of executing the task to the worker manager140, which at (11) reports the output to the frontend120. At (12), the frontend120provides the output to the request serializer172. In some embodiments, the host computing device150or the worker manager140may communicate directly with the request serializer172, and some or all of the interactions at (10), (11), and (12) may be combined. In some embodiments, the output may be encoded or serialized. For example, the output may be in a format that corresponds to the encoded input, such as a response to an API call, or may have headers or metadata that correspond to headers or metadata in the encoded input. At (13), the request serializer172de-serializes the output. Illustratively, de-serializing the output may convert the output to a format expected by the computing device102, such as an HTTP response that corresponds to the original request. In some embodiments, the request serializer172may remove or convert metadata associated with the output. For example, the request serializer172may move metadata into optional HTTP headers, or may make the output similar or identical to the output that a server-based application would have generated. In some embodiments, the output may include status messages or error messages that are specific to the on-demand code execution system, which may be translated or converted into status or error messages in another format (e.g., into the equivalent message that would have been generated by a server-based application), or may be retained in the converted output as indications that the request was fulfilled by an on-demand code execution system. At (14), the request serializer172provides the decoded or de-serialized output to the load balancer174, which at (15) provides the output to the requesting computing device102as a response to the original request. In some embodiments, the ordering and implementation of operations described above may be modified, or these interactions may be carried out by additional or alternative elements of the on-demand code execution gateway170. For example, in some embodiments, the interactions at (2) and (3) may be omitted and the load balancer174may fulfill all requests by utilizing the on-demand code execution system. 
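The de-serialization at (13) may be sketched in a similar hedged fashion. The statusCode, headers, body, and isBase64Encoded fields below mirror the hypothetical encoded-input format from the earlier sketch, and the X-Fulfilled-By header is only one illustrative way of retaining an indication that on-demand execution produced the response.

import base64


def deserialize_output(encoded_output):
    # Convert the serialized task output back into (status, headers, body) for the client.
    status = int(encoded_output.get("statusCode", 200))
    headers = dict(encoded_output.get("headers", {}))
    body = encoded_output.get("body", "")
    if encoded_output.get("isBase64Encoded"):
        body = base64.b64decode(body)
    else:
        body = body.encode("utf-8")
    # Optionally mark that the response was produced by on-demand code execution (step (13)).
    headers.setdefault("X-Fulfilled-By", "on-demand-code-execution")
    return status, headers, body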
As a further example, in some embodiments, the request serializer172may bypass the frontend120and communicate directly with the worker manager140. The interactions depicted inFIGS.3A-3Bare thus understood to be illustrative and not limiting. FIG.4is a flow diagram of an illustrative routine400for processing requests for computing resources by using an on-demand code execution gateway. The routine400may be carried out, for example, by the on-demand code execution gateway170depicted inFIG.1or various components thereof. At block402, at least part of a request for a network resource may be obtained. Illustratively, the request may be to access web content, interact with an application, read or write to a storage volume, or access other resources. In various embodiments, as described above, the request may be received in its entirety or received in stages or portions. At decision block404, a determination may be made as to whether a complete request has been obtained. If not, then the routine400branches to block406, where the portions of the request that have been obtained thus far are stored in a memory buffer until the rest of the request is received. The routine400then returns to block402and awaits further portions of the request. In some embodiments, the routine400may process multiple requests in parallel, and may determine which request is associated with the portion received at block402and whether the portion completes that request. Additionally, in some embodiments, the size of a complete request may exceed the size of the memory buffer for storing requests. If so, then in various embodiments the routine400may reject the request, truncate the request, assign the request to a web server (e.g., the web server(s)180as depicted inFIG.1), stream all or part of the request to an on-demand code execution system, divide the request into smaller requests, or otherwise process the request. If the determination at decision block404is that a complete request has been obtained, then the routine400branches to block408, where the complete request may be serialized. As described above, all or part of the request may be serialized by converting or encoding the request into a format accepted by the on-demand code execution system. The request may, for example, be converted into a JSON object or objects, an HTTP method, an API call, or otherwise encoded into another format or notation. At block410, the serialized request may be provided as encoded input to an on-demand code execution system, and at block412the resulting encoded output may be obtained in a serialized format. Illustratively, as described above, the routine400may include authentication information as part of the request, or in some embodiments may authenticate separately from submitting a request to execute the task. For example, the routine400may provide credentials that confirm the user who submitted the code has authorized access via an on-demand code execution gateway. In other embodiments, the routine400may authenticate the request itself by signing the request or including a hash key as part of the request. At block414, the output from executing the task may be de-serialized and converted into a format understood by the requesting computing device. For example, the output may be converted from a JSON object or objects into an HTTP response, or all or part of the output may be converted from base64 notation into a binary notation. 
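The buffering behavior of blocks 402-406 might be organized as in the following sketch. The class name, the six-megabyte limit, and the declared_length parameter are illustrative assumptions; as noted above, an oversized request could alternatively be truncated, divided, streamed, or handed to a conventional web server rather than rejected.

class RequestBuffer:
    # Accumulate request portions until a complete request is available (blocks 402-406).

    def __init__(self, max_bytes=6 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.parts = {}

    def add_portion(self, request_id, chunk, declared_length):
        buf = self.parts.setdefault(request_id, bytearray())
        buf.extend(chunk)
        if len(buf) > self.max_bytes:
            # An oversized request could instead be truncated, streamed, divided, or
            # handed to a conventional web server, as described above.
            raise ValueError("request exceeds buffer size")
        if len(buf) >= declared_length:
            return bytes(self.parts.pop(request_id))  # complete request; ready to serialize
        return None                                   # still waiting for more portions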
In some embodiments, the output may be made similar or identical to output that a server would provide if executing the task or an analogous task. In other embodiments, information indicating that the request was fulfilled by an on-demand code execution system may be included in the output. At block416, the de-serialized output may be provided in response to the original request. FIG.5is a flow diagram of an illustrative routine500for utilizing an on-demand code execution system to provide load balancing. Offloading requests to an on-demand code execution system may allow more efficient management of computing resources. For example, on-demand code execution may be utilized to address portions of a workload that exceed the capacity of a pool of servers, but that are not sufficient to justify increasing the size of the pool. As further examples, on-demand code execution may be utilized to allow underused servers to be removed from the pool, or to handle a sudden increase in requests when servers cannot be added to the pool quickly enough. At block502, a request for a computing resource may be obtained as described above. At block504, information regarding the current load of servers that can fulfill the request may be obtained. In some embodiments, as described above, server load information may be obtained from the servers themselves as metrics representing resource utilization or consumption. In other embodiments, server load may be determined or estimated based on the number and rate at which previous iterations of the routine500have assigned requests to the servers. At decision block506, a determination may be made as to whether a server is available to fulfill the request. In some embodiments, the incremental workload that the request represents may be determined, and the determination may be as to whether a server can accept the incremental workload and still meet performance targets (e.g., response times or resource utilization targets). In other embodiments, the capacity of each server may be determined or predetermined as a number of requests that can be processed in parallel, and this threshold may be compared to the number of requests that a server is currently processing. If the determination is that a server is available, then at block508the request is assigned to the server. At block510, server load information may be compared to historical server load information, and an assessment may be made as to whether the servers are underutilized or whether it would be more efficient to fulfill more requests using on-demand code execution. At decision block512, a determination may be made as to whether server workloads and utilization of on-demand code execution are such that the number of servers should be reduced. If so, then at block514one or more servers may be released. Illustratively, a server may be released by deactivating a virtual machine instance, de-allocating physical hardware that has been allocated to a resource pool, or otherwise removing computing resources from the server pool. If the determination is that the number of servers should not be reduced, then the routine500ends without taking further action. In some embodiments, the routine500is executed continuously and instead branches to block502to await further requests for resources. If the determination at decision block506is that no web server has sufficient available capacity to fulfill the request, then the routine500branches to block516. 
In some embodiments, the determination at decision block506may be based on criteria other than the available capacity of the servers. For example, a determination may be made that on-demand code execution is likely to provide a faster response given the current server workloads, or that the characteristics of a particular request make on-demand code execution preferable. For example, on-demand code execution may be faster (or may provide acceptable performance) for certain types of requests, while other types of requests may require server resources. In further embodiments, the code that is executed by the on-demand code execution service may differ from the code executed by the servers, and may provide different results under certain conditions. The determination at decision block506may therefore be as to whether the conditions are met. At block516, the request may be assigned to an on-demand code execution server and fulfilled by executing user-submitted code, as described above. At block518, usage of the on-demand code execution server may be analyzed relative to historical usage of on-demand code execution or usage of the servers. In some embodiments, the server load information obtained at block504may be analyzed. At decision block520, a determination may be made as to whether the usage of the on-demand code execution server is such that the size of the server pool should be increased. Illustratively, the determination may be that the usage exceeds a threshold for a specified time interval, or that the usage trend is increasing at a threshold rate. If the determination is that adding a server or server(s) to the pool is justified, then at block522one or more servers are added. If not, then the routine500ends (or in some embodiments returns to block502and awaits further input). The blocks of the routines described above may vary in embodiments of the present disclosure. For example, in some embodiments of the routine400, block414may be omitted and the output of the on-demand code execution may be provided in a serialized format. As a further example, blocks510-514and516-522of the routine500may be carried out independently of obtaining a request for a computing resource. For example, these blocks may be carried out periodically or in response to detecting that server loads are above or below a threshold. The routines may further include additional blocks, or the blocks of the routines may be rearranged or combined, according to various embodiments. In further embodiments, all or part of the routines may be combined. It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein. All of the processes described herein may be embodied in, and fully automated via, software code modules, including one or more specific computer-executable instructions, that are executed by a computing system. The computing system may include one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware. 
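A compact sketch of the routing and pool-scaling logic of routine500is shown below, under the simplifying assumptions that server load is tracked as a count of active requests and that scaling decisions use fixed illustrative thresholds; real deployments would use the richer load metrics and cost-benefit analysis described above.

from dataclasses import dataclass, field


@dataclass
class PooledServer:
    capacity: int = 10
    active_requests: int = 0

    def handle(self, request):
        self.active_requests += 1       # in practice decremented when the request completes


@dataclass
class Router:
    # Sketch of routine 500: prefer a pooled server, otherwise offload to on-demand execution.
    servers: list = field(default_factory=list)
    on_demand_count: int = 0
    scale_up_threshold: int = 20        # hypothetical tuning knobs
    scale_down_threshold: int = 2

    def route(self, request):
        target = next((s for s in self.servers
                       if s.active_requests < s.capacity), None)    # decision block 506
        if target is not None:
            target.handle(request)                                   # block 508
            if len(self.servers) > 1 and all(
                    s.active_requests <= self.scale_down_threshold for s in self.servers):
                self.servers.pop()                                    # blocks 512-514: release a server
            return "server"
        self.on_demand_count += 1                                     # block 516
        if self.on_demand_count > self.scale_up_threshold:
            self.servers.append(PooledServer())                       # blocks 520-522: grow the pool
        return "on-demand"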
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together. The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few. Conditional language such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, are otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. 
Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B, and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
63,594
11861387
DETAILED DESCRIPTION OF EMBODIMENTS In the following description, for purposes of explanation, specific details are set forth in order to provide an understanding of the disclosure. It will be apparent, however, to one skilled in the art that the disclosure can be practiced without these details. Furthermore, one skilled in the art will recognize that embodiments of the present disclosure, described below, may be implemented in a variety of ways, such as a process, an apparatus, a system/device, or a method on a tangible computer-readable medium. Components, or modules, shown in diagrams are illustrative of exemplary embodiments of the disclosure and are meant to avoid obscuring the disclosure. It shall also be understood that throughout this discussion that components may be described as separate functional units, which may comprise sub-units, but those skilled in the art will recognize that various components, or portions thereof, may be divided into separate components or may be integrated together, including, for example, being in a single system or component. It should be noted that functions or operations discussed herein may be implemented as components. Components may be implemented in software, hardware, or a combination thereof. Furthermore, connections between components or systems within the figures are not intended to be limited to direct connections. Rather, data between these components may be modified, re-formatted, or otherwise changed by intermediary components. Also, additional or fewer connections may be used. It shall also be noted that the terms “coupled,” “connected,” “communicatively coupled,” “interfacing,” “interface,” or any of their derivatives shall be understood to include direct connections, indirect connections through one or more intermediary devices, and wireless connections. It shall also be noted that any communication, such as a signal, response, reply, acknowledgement, message, query, etc., may comprise one or more exchanges of information. Reference in the specification to “one or more embodiments,” “preferred embodiment,” “an embodiment,” “embodiments,” or the like means that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the disclosure and may be in more than one embodiment. Also, the appearances of the above-noted phrases in various places in the specification are not necessarily all referring to the same embodiment or embodiments. The use of certain terms in various places in the specification is for illustration and should not be construed as limiting. The terms “include,” “including,” “comprise,” and “comprising” shall be understood to be open terms and any examples are provided by way of illustration and shall not be used to limit the scope of this disclosure. A service, function, or resource is not limited to a single service, function, or resource; usage of these terms may refer to a grouping of related services, functions, or resources, which may be distributed or aggregated. The use of memory, database, information base, data store, tables, hardware, cache, and the like may be used herein to refer to system component or components into which information may be entered or otherwise recorded. The terms “data,” “information,” along with similar terms, may be replaced by other terminologies referring to a group of one or more bits, and may be used interchangeably. 
The terms “packet” or “frame” shall be understood to mean a group of one or more bits. The term “frame” shall not be interpreted as limiting embodiments of the present invention to Layer2networks; and, the term “packet” shall not be interpreted as limiting embodiments of the present invention to Layer3networks. The terms “packet,” “frame,” “data,” or “data traffic” may be replaced by other terminologies referring to a group of bits, such as “datagram” or “cell.” The words “optimal,” “optimize,” “optimization,” and the like refer to an improvement of an outcome or a process and do not require that the specified outcome or process has achieved an “optimal” or peak state. It shall be noted that: (1) certain steps may optionally be performed; (2) steps may not be limited to the specific order set forth herein; (3) certain steps may be performed in different orders; and (4) certain steps may be done concurrently. Any headings used herein are for organizational purposes only and shall not be used to limit the scope of the description or the claims. Each reference/document mentioned in this patent document is incorporated by reference herein in its entirety. In one or more embodiments, a stop condition may include: (1) a set number of iterations have been performed; (2) an amount of processing time has been reached; (3) convergence (e.g., the difference between consecutive iterations is less than a first threshold value); (4) divergence (e.g., the performance deteriorates); and (5) an acceptable outcome has been reached. A. General Overview and System Embodiments As noted above, multi-cloud environments have become very pervasive—especially with differing cloud environments being built on different hypervisors. Because operating systems are primarily designed to operate on actual hardware—as opposed to virtual hardware in the case of virtual machines—each hypervisor environment usually includes a suite of utilities to enhance guest OS performance in a virtualized environment. These tools help configure settings to improve performance of the guest OS in the virtualized environment. For example, special configuration may be required for a myriad of issues such as time synchronization, ability to communicate with the hypervisor, connections to physical ports, and ability to run scripts—just to name a few. However, as noted above, because each vendor's hypervisor environment differs and because each vendor's guest OS utility tools vary, it is a difficult, if not practically impossible, process that is potentially error-prone to migrate the guest OS optimization tool settings of a virtual machine when migrating the virtual machine from one hypervisor environment to another hypervisor environment. Preferably, the migrated virtual machine should function the same (or have the same functionality) as before migration. The migration of guest OS optimization tool/configuration tool settings is far from a straightforward process because there is no standardization; thus, there are no direct correlations between settings for different hypervisors and guest OS optimization utility tools. As a practical matter, guest OS optimization tool/configuration tool settings migration cannot efficiently be performed manually since the inventory of settings is not the same between different providers/vendors, and there is usually no obvious logical correlation that would allow for manual mapping between feature settings. 
Given that the task is extremely challenging to perform manually, there are some tools that exist for virtual machine migration between different hypervisors. Examples include VMware vCenter Converter, VMware HCX, and StarWind V2V Converter. These tools attempt to deal with the guest OS optimization utility suites and do so in different ways, but their abilities are limited. For example, the VMware tools allow customers to install VMware tools with default settings but have no ability to comprehend existing settings in other hypervisor environments. And, the StarWind tool by StarWind of Beverly, MA does not deal with guest OS optimization utility suites. None of these offerings involves intelligent automation or offers analytics-based migration. Accordingly, embodiments herein facilitate analytics-based migration of utility suite settings during an inter-hypervisor migration. In one or more embodiments, correlations between configuration settings for the guest OS optimization utility suites associated with different hypervisors are detected. Where correlation exists, in one or more embodiments, the relevant setting on the destination utility suite may be automatically configured to the required value(s) as part of the inter-hypervisor migration. Thus, embodiments herein provide systems and methods to intelligently automate the migration of settings between the guest OS optimization utility suites for different hypervisors; no other marketplace workload migration utilities attempt to deal with such scenarios. FIG.1depicts a system architecture that facilitates migration of guest OS optimization tool settings for a virtual machine migration in a multi-hypervisor data center environment, according to embodiments of the present disclosure. The system100may be used to provide an analytics-based approach to translate settings used by the guest OS optimization tool on one cloud/hypervisor105to settings used by the guest OS optimization tool on the other cloud/hypervisor110. Depicted inFIG.1, a virtual machine (VM) or virtual desktop102-soperates on a source cloud/hypervisor environment105. The virtual machine102-sis to be migrated145to a different hypervisor110as virtual machine102-d. As part of the migration process, settings for the virtual machine102will need to be configured so that it will operate properly on the destination cloud/hypervisor environment110. Also depicted inFIG.1is a guest OS optimization tool settings migration system local component or system115and a guest OS optimization tool settings migration system centralized component or system135. In one or more embodiments, a guest OS optimization tool settings migration system local component115comprises three main components. It should be noted that embodiments of the present disclosure may operate in various settings, including cloud settings; therefore, it shall be noted that the use of the term “local” indicates that the settings are specific to a certain client, organization, deployment, virtual machine instance, etc. It does not require that the local component or system be physically local to the virtual machine instance. Rather, it may be located on one or more remote computing systems. As shown in the depicted embodiment, the local system115includes a local repository of correlation data130. In one or more embodiments, the local repository130comprises migration data related to a specific client, organization, deployment, virtual machine instance, etc. 
In one or more embodiments, the data may include historical data related to previous and/or current virtual machine migrations. As will be discussed in more detail below, the historical data may be used to identify correlations between configuration settings for the guest OS optimization utility suites associated with different hypervisors. In one or more embodiments, the local repository130also comprises, based at least in part on the correlations, the relevant mappings between the configuration settings for the virtual machine on the source hypervisor and the configuration settings for the virtual machine on the destination hypervisor for an inter-hypervisor migration from the source hypervisor to the destination hypervisor. In one or more embodiments, the local repository130is communicatively coupled155to a central external repository140, which may be part of the guest OS optimization tool settings migration system centralized system135, and exchanges data with the central external repository140. In one or more embodiments, the exchanged data155may be the historical data, the correlation data, rules, or a combination thereof. In one or more embodiments, the local system115comprises a rules table or repository125, which stores one or more user-definable sets of rules. The user-definable rules may be used to override and/or supplement correlation models defined in the local and/or central correlation repositories. In one or more embodiments, the local system115comprises a migration tool120that uses one or more application program interfaces (APIs) or other functions or scripts to obtain150information about the hypervisor and/or virtual machine configurations that may be used to help determine migration settings, to implement150a desired set of settings, or both. Consider, by way of illustration, the following example. Assume that the source cloud/hypervisor environment105is a Microsoft Hyper-V environment. Hyper-V integration services contains a list of settings that may be set by a user/administrator to activate different features of the integration services. Some examples of services include Data Exchange, Heartbeat, and Time Synchronization. Also assume that the destination cloud/hypervisor environment110is vSphere, a VMware-based hypervisor environment. Like Hyper-V, VMware's tools also contain a list of similar—but not identical—settings, such as VMCI (Virtual Machine Communication Interface) driver, Drive Sync (Filesystem Sync driver), Host Time Synchronization and Mouse (VMware Mouse Driver). In one or more embodiments, historical correlation may be obtained or detected using monitoring tools, such as perfmon (a performance monitoring tool), to collect data from existing cloud environments before and after inter-hypervisor migration. In one or more embodiments, for correlation purposes, the specific direction of migration is noted with the data collection. Because there is no standardization related to these tools and settings, there may be instances where settings do not correlate one-to-one. Thus, in one or more embodiments, correlations of settings are considered migration direction specific (e.g., Hyper-V to vSphere correlation would not necessarily imply correlation for vSphere to Hyper-V). For example, a single setting in Hyper-V may be correlated to many settings in vSphere (thereby being a one-to-many correlation), but if the virtual machine was being migrated from a vSphere hypervisor environment to a Hyper-V hypervisor environment, it may be a many-to-one correlation. 
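For illustration only, a direction-specific correlation table might be represented as in the following Python sketch. The specific setting names and mappings shown (e.g., Data Exchange mapping to the VMCI driver and Drive Sync) are hypothetical examples chosen to show one-to-many and many-to-one shapes; they are not asserted to be correct correlations for any actual product.

# Keyed by (source_hypervisor, destination_hypervisor) so correlations stay direction specific.
SETTING_CORRELATIONS = {
    ("hyper-v", "vsphere"): {
        # One-to-one (illustrative): time synchronization maps directly.
        "TimeSynchronization": ["HostTimeSync"],
        # One-to-many (illustrative): a single integration service may influence several tools settings.
        "DataExchange": ["VMCIDriver", "DriveSync"],
    },
    ("vsphere", "hyper-v"): {
        # The reverse direction may collapse several settings into one (many-to-one).
        "HostTimeSync": ["TimeSynchronization"],
    },
}


def map_settings(source_hv, dest_hv, source_settings):
    # Translate enabled source settings into candidate destination settings.
    table = SETTING_CORRELATIONS.get((source_hv, dest_hv), {})
    destination = {}
    for name, enabled in source_settings.items():
        for target in table.get(name, []):
            destination[target] = enabled
    return destination


print(map_settings("hyper-v", "vsphere", {"DataExchange": True, "TimeSynchronization": True}))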
In one or more embodiments, the migration-direction-specific information is collected and used to provide guidance on correlated parameters in the destination environment—that is, based on what correlations exist in the operational environment after initial configuration has been completed. It should be noted that, in one or more embodiments, correlation may also be detected on an ongoing basis during usage in the environment. As well as historical correlation from the various environments, in one or more embodiments, internal migration decisions may be weighted to allow local configuration setting migration requirements to override correlation detected from prior or centralized historical datasets. For example, the rules table125may have one or more rules related to one or more settings that have precedence over any local or global correlation rules. In one or more embodiments, one or more rules may supplement the set of correlation settings. Given a final set of configuration settings, the guest OS optimization tool settings migration system115may implement or configure the values of the settings for the migrated virtual machine102-d. In one or more embodiments, the correlated settings may be implemented via a destination guest OS tool. B. Methodology Embodiments 1. Correlation Method Embodiments FIG.2depicts a methodology for generating correlations between different guest OS optimization tool settings, according to embodiments of the present disclosure. A core of the methodology depicted inFIG.2is the collection of data related to migrated virtual machines. In one or more embodiments, the data collection may be local (e.g., specific to a particular entity/organization), may be global (e.g., collected across a number of different entities/organizations), or both. It should be noted that there are a number of benefits of collecting data globally but noting the source of the data, which can allow for local correlations, comparisons of correlations across different data sources, and the like. In so doing, more data is collected, which tends to provide for better correlation predictions. Another benefit of global collection is that it affords potential correlation information for an entity even if that entity has not performed that specific migration. For example, if entity X has never migrated to a particular hypervisor type but others have, entity X can utilize the correlation results from data collected from others who have done that type of migration. And, because the results are at the correlation level, it anonymizes the underlying data—no company-specific information is shared. Yet another benefit of capturing the data source information of multiple entities/organizations is that more unique or distinct correlations can be generated. For example, correlations may be predicted for a specific entity/organization, for a set of similarly situated entities/organizations, and/or globally. As illustrated inFIG.2, in one or more embodiments, both the settings used by a guest OS optimization tool on the source cloud/hypervisor for a virtual machine being migrated or that has been migrated and the settings used by the guest OS optimization tool on the destination cloud/hypervisor are recorded (205). As noted above, because the different guest OS tools have different features and differently configured or offered features, there are no readily apparent correlations for some features. 
Furthermore, the feature correlations may be complex—one-to-one correlations, one-to-many correlations, many-to-one correlations, no correlations, partial correlations (e.g., one-to-one partial correlation, one-to-many partial correlation, many-to-one partial correlation), etc. In addition, the parameter(s) associated with the correlation can vary and may add further complexity. For example, some features may be represented by an activated/inactivated parameter value, while others may require an entry, such as a numerical value or a setting command. In the case of a numerical value, a value within a numerical range may be acceptable; and thus, in one or more embodiments, the correlation process may determine the range, and alternatively or additionally, provide an estimated preferred value. Thus, in one or more embodiments, historical data related to migrated virtual machines is collected. In one or more embodiments, settings-related information is also collected (210). That is, in addition to the guest OS tool settings that are collected, hypervisor setting information may also be collected. While it may be that at least one or more of these hypervisor settings may not be adjusted, at least by the guest OS tool, the hypervisor setting information may still be useful in automating the migration correlation process as these underlying features or settings may affect what features are offered or enabled at the guest OS tool level. Because different cloud providers may enable or disable different features of the hypervisors or configure the hypervisors in specific ways, the information may be used to create more accurate and nuanced correlations related to the guest OS tool level features. In one or more embodiments, the data collection processes are continued until sufficient data has been collected (215). It shall be noted that even after sufficient data has been collected, in one or more embodiments, additional data may be collected to update correlation models, improve correlation models, add or remove correlation models to reflect changes to the underlying technology, and the like. In one or more embodiments, one or more correlation methods may be used to determine correlations and implementation-related settings for direction-specific migrations. In one or more embodiments, deep learning methodologies may be employed to develop correlations for direction-specific migrations. For example, the settings for the guest OS tool and the source hypervisor settings may be used as input features into a neural network model, such as a fully connected neural network, a recurrent neural network, a convolutional neural network, or a combination thereof, which uses the corresponding guest OS tools settings for the migrated virtual machine and the destination hypervisor settings as ground truth data to train the model to determine the correct correlations. In one or more embodiments, other machine learning techniques may also additionally or alternatively be employed. For example, regressions, classification methods, clustering methods, reinforcement learning methods, decision trees, random forests, support vector machines, and Pearson correlation coefficient methods may be employed to develop correlation models. In one or more embodiments, other unsupervised methodologies, such as inference prediction, may be used to generate correlation predictions/models. Consider the following example, which is provided by way of illustration. 
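As one concrete possibility among the techniques listed above, a Pearson-correlation-based sketch is shown below. The record format (lists of (source_settings, destination_settings) dictionaries with 0/1 values, collected for a single migration direction) and the 0.8 threshold are illustrative assumptions, not requirements of any embodiment.

from itertools import product
from statistics import mean


def pearson(xs, ys):
    # Standard Pearson correlation coefficient over paired samples.
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0


def correlate_settings(records, threshold=0.8):
    # records: list of (source_settings, dest_settings) dicts of 0/1 values for one migration direction.
    # Returns source/destination setting pairs whose historical values are strongly correlated.
    src_names = sorted({k for src, _ in records for k in src})
    dst_names = sorted({k for _, dst in records for k in dst})
    pairs = {}
    for s, d in product(src_names, dst_names):
        xs = [src.get(s, 0) for src, _ in records]
        ys = [dst.get(d, 0) for _, dst in records]
        r = pearson(xs, ys)
        if abs(r) >= threshold:
            pairs[(s, d)] = r
    return pairs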
Assume that multiple integration services, such as Hyper-V Data Exchange Service, Hyper-V Guest Service Interface, and Hyper-V PowerShell Direct Service, have been disabled at the source hypervisor. These features being disabled indicate a very conservative or secure posture; thus, an equally conservative security posture should be adopted in the destination/target hypervisor environment. In the case of VMware tools, this may result in the disablement of the capability for the guest OS to monitor certain performance/resource utilization parameters on the host, the exclusion of all filesystems from the quiesced snapshots list, etc. It shall be noted that by applying an analytics approach, there may be correlations detected which are intuitively not obvious. For example, it may be determined by the data that disconnecting a network adapter on the VMware side and allowing direct management with PowerShell without a network connection on the HyperV side may be appropriate correlation. In one or more embodiments, the data collection processes ofFIG.2may include collection of data related to migrations that used one or more correlation models. Differences in settings between what was suggested in a correlation model and the actual deployment may be used to refine the model. Related to step220and step215, in one or more embodiments, the sufficiency of the correlations may be used to determine whether sufficient data has been collected, in which the collected data is used as ground truth data to verify accuracy of predicted correlations. Having developed a set of one or more migration-direction-specific correlation models, the guest OS tool correlations and hypervisor implementation-related settings for specific directional migrations may be output/stored (225). In one or more embodiments, the correlations may be stored locally (e.g., local repository130ofFIG.1), globally/centrally (e.g., central repository140ofFIG.1), or both. In one or more embodiments, embodiments of the data collection and correlation modeling may be performed by the migration tool120. In one or more embodiments, the centralized system135may additionally or alternatively include a migration tool to perform correlation modeling. 2. Analytics-Based Migration Method Embodiments FIG.3depicts a methodology for analytics-based migrating of guest OS optimization tool settings in a multi-hypervisor data center environment, according to embodiments of the present disclosure. In one or more embodiments, a virtual machine operating on a first hypervisor environment from a first vendor is selected (305) to be migrated to a second hypervisor environment from a second vendor. In one or more embodiments, the analytics-based migration system gathers (310) data regarding the source guest OS tool settings and hypervisor environment settings-related data for the source hypervisor environment, the destination hypervisor environment, or both. A check is made (315) whether the correlation repository that is to be used is to be from a local repository or from a central repository. Depending upon the selection, the analytics-based migration system obtains (320/325) correlation models and implementation-related settings from the selected repository for the direction-specific migration. 
In one or more embodiments, given the specific data gathered related to the migration and the selected correlations repository, the analytics-based migration system determines (330) an appropriate set of correlated settings and implementation-related settings for the direction-specific virtual machine migration. In one or more embodiments, the analytics-based migration system may receive (335) input related to the set of correlated settings and implementation-related settings for the direction-specific virtual machine migration. For example, the analytics-based migration system may check a rules dataset to determine whether any of the set of correlated settings and implementation-related settings for the direction-specific virtual machine migration should be adjusted based upon one or more rules. For example, the rules dataset may overrule certain aspects of a correlation as defined by an applied rule or rules. Whether there are any applicable rules or not, a finalized set of correlated settings and implementation-related settings for the direction-specific virtual machine migration may then be applied (340) at the destination hypervisor for the virtual machine migration. In one or more embodiments, the system may use a remote call to an interface, such as a command line utility like VMwareToolboxCmd.exe, PowerShell, etc., to set the relevant parameter or parameters to the required value or values. It shall be noted that, with the exception of the selection of the virtual machine for migration, the embodiments ofFIG.3may be applied programmatically (i.e., with limited or no user input) once the correlation repository and rules have been set. However, it shall be noted that one or more of the steps ofFIG.3may include prompting a user for input.FIG.4depicts example embodiments in which user input is provided as part of the process. It shall be noted that in the embodiments depicted related toFIG.3, implementation-related settings about the source hypervisor implementation, the destination hypervisor implementation or both were obtained and considered as part of the migration. In one or more embodiments, gathering data about the source hypervisor implementation, the destination hypervisor implementation or both and/or obtaining implementation settings for the destination hypervisor may be optionally performed. FIG.4depicts another methodology for analytics-based migration of guest operating system (OS) optimization tool settings in a multi-hypervisor data center environment, according to embodiments of the present disclosure. In one or more embodiments, given the specific data gathered related to the migration and the applicable correlation models in the repository, a number of correlations may be appropriate. Thus, in one or more embodiments, the analytics-based migration system presents (405) a plurality of correlations options (and, in embodiments, potentially implementation-related settings options) for the direction-specific virtual machine migration to a user for input. In addition to there being more than one correlation option available, one or more of the correlations may accept a range of values. Thus, the input may also include requesting that a value for a parameter be supplied (although a default value may be set if none is supplied). Accordingly, in one or more embodiments, the analytics-based migration system may request that the user provide input. 
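A minimal sketch of steps 330-340, assuming that user-defined rules simply take precedence over correlated settings and that settings are pushed through a command-line interface, is shown below. The "config set" subcommand used with VMwareToolboxCmd.exe is illustrative only and is not asserted to be the documented syntax of that utility; PowerShell or another interface could be substituted.

import subprocess


def finalize_settings(correlated, rules):
    # Apply user-defined rules on top of correlated settings; rules take precedence (step 335).
    final = dict(correlated)
    final.update(rules)
    return final


def apply_setting(name, value, dry_run=True):
    # Build (and optionally run) a command to set one parameter on the destination guest OS tool (step 340).
    cmd = ["VMwareToolboxCmd.exe", "config", "set", name, str(value)]  # illustrative invocation only
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd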
Accordingly, in one or more embodiments, a user may provide (410) additional data to help narrow the options, to supply values for numerical ranges, to supply data that was not available or that the system was unable to gather (e.g., certain hypervisor implementation-related data), or some combination thereof. Given the inputted data, the number of correlations that are available may be reduced. If the available correlation data and implementation-related settings are sufficiently reduced (415) to be implementable, the process may apply (420) the finalized set of correlated settings and implementation-related settings to the direction-specific virtual machine migration. Otherwise, the process may present (405) the reduced set of correlated settings and request further input (410). While not depicted inFIG.4, it should be noted that, like the embodiments discussed in reference toFIG.3, any aspect of the finalized correlated data settings and hypervisor implementation settings may be overridden by a user, either directly or via a set of predefined rules. It shall be noted that in the embodiments depicted related toFIG.4, implementation-related settings about the source hypervisor implementation, the destination hypervisor implementation or both were obtained and considered as part of the migration. In one or more embodiments, gathering data about the source hypervisor implementation, the destination hypervisor implementation or both and/or obtaining implementation-related settings for the destination hypervisor may be optionally performed. C. System Embodiments In one or more embodiments, aspects of the present patent document may be directed to, may include, or may be implemented on one or more information handling systems (or computing systems). An information handling system/computing system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, route, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data. For example, a computing system may be or may include a personal computer (e.g., laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA), smart phone, phablet, tablet, etc.), smart watch, server (e.g., blade server or rack server), a network storage device, camera, or any other suitable device and may vary in size, shape, performance, functionality, and price. The computing system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, read only memory (ROM), and/or other types of memory. Additional components of the computing system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, mouse, stylus, touchscreen, and/or video display. The computing system may also include one or more buses operable to transmit communications between the various hardware components. FIG.5depicts a simplified block diagram of an information handling system (or computing system), according to embodiments of the present disclosure. 
It will be understood that the functionalities shown for system500may operate to support various embodiments of a computing system—although it shall be understood that a computing system may be differently configured and include different components, including having fewer or more components as depicted inFIG.5. As illustrated inFIG.5, the computing system500includes one or more central processing units (CPU)501that provides computing resources and controls the computer. CPU501may be implemented with a microprocessor or the like, and may also include one or more graphics processing units (GPU)519and/or a floating-point coprocessor for mathematical computations. In one or more embodiments, one or more GPUs519may be incorporated within the display controller509, such as part of a graphics card or cards. The system500may also include a system memory502, which may comprise RAM, ROM, or both. A number of controllers and peripheral devices may also be provided, as shown inFIG.5. An input controller503represents an interface to various input device(s)504, such as a keyboard, mouse, touchscreen, and/or stylus. The computing system500may also include a storage controller507for interfacing with one or more storage devices508each of which includes a storage medium such as magnetic tape or disk, or an optical medium that might be used to record programs of instructions for operating systems, utilities, and applications, which may include embodiments of programs that implement various aspects of the present disclosure. Storage device(s)508may also be used to store processed data or data to be processed in accordance with the disclosure. The system500may also include a display controller509for providing an interface to a display device511, which may be a cathode ray tube (CRT) display, a thin film transistor (TFT) display, organic light-emitting diode, electroluminescent panel, plasma panel, or any other type of display. The computing system500may also include one or more peripheral controllers or interfaces505for one or more peripherals506. Examples of peripherals may include one or more printers, scanners, input devices, output devices, sensors, and the like. A communications controller514may interface with one or more communication devices515, which enables the system500to connect to remote devices through any of a variety of networks including the Internet, a cloud resource (e.g., an Ethernet cloud, a Fiber Channel over Ethernet (FCoE)/Data Center Bridging (DCB) cloud, etc.), a local area network (LAN), a wide area network (WAN), a storage area network (SAN) or through any suitable electromagnetic carrier signals including infrared signals. In the illustrated system, all major system components may connect to a bus516, which may represent more than one physical bus. However, various system components may or may not be in physical proximity to one another. For example, input data and/or output data may be remotely transmitted from one physical location to another. In addition, programs that implement various aspects of the disclosure may be accessed from a remote location (e.g., a server) over a network. 
Such data and/or programs may be conveyed through any of a variety of machine-readable medium including, for example: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store or to store and execute program code, such as application specific integrated circuits (ASICs), programmable logic devices (PLDs), flash memory devices, other non-volatile memory (NVM) devices (such as 3D XPoint-based devices), and ROM and RAM devices. FIG.6depicts an alternative block diagram of an information handling system, according to embodiments of the present disclosure. It will be understood that the functionalities shown for system600may operate to support various embodiments of the present disclosure—although it shall be understood that such system may be differently configured and include different components, additional components, or fewer components. The information handling system600may include a plurality of I/O ports605, a network processing unit (NPU)615, one or more tables620, and a central processing unit (CPU)625. The system includes a power supply (not shown) and may also include other components, which are not shown for sake of simplicity. In one or more embodiments, the I/O ports605may be connected via one or more cables to one or more other network devices or clients. The network processing unit615may use information included in the network data received at the node600, as well as information stored in the tables620, to identify a next device for the network data, among other possible activities. In one or more embodiments, a switching fabric may then schedule the network data for propagation through the node to an egress port for transmission to the next destination. Aspects of the present disclosure may be encoded upon one or more non-transitory computer-readable media with instructions for one or more processors or processing units to cause steps to be performed. It shall be noted that the one or more non-transitory computer-readable media shall include volatile and/or non-volatile memory. It shall be noted that alternative implementations are possible, including a hardware implementation or a software/hardware implementation. Hardware-implemented functions may be realized using ASIC(s), programmable arrays, digital signal processing circuitry, or the like. Accordingly, the “means” terms in any claims are intended to cover both software and hardware implementations. Similarly, the term “computer-readable medium or media” as used herein includes software and/or hardware having a program of instructions embodied thereon, or a combination thereof. With these implementation alternatives in mind, it is to be understood that the figures and accompanying description provide the functional information one skilled in the art would require to write program code (i.e., software) and/or to fabricate circuits (i.e., hardware) to perform the processing required. It shall be noted that embodiments of the present disclosure may further relate to computer products with a non-transitory, tangible computer-readable medium that have computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present disclosure, or they may be of the kind known or available to those having skill in the relevant arts. 
Examples of tangible computer-readable media include, for example: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store or to store and execute program code, such as application specific integrated circuits (ASICs), programmable logic devices (PLDs), flash memory devices, other non-volatile memory (NVM) devices (such as 3D XPoint-based devices), and ROM and RAM devices. Examples of computer code include machine code, such as produced by a compiler, and files containing higher level code that are executed by a computer using an interpreter. Embodiments of the present disclosure may be implemented in whole or in part as machine-executable instructions that may be in program modules that are executed by a processing device. Examples of program modules include libraries, programs, routines, objects, components, and data structures. In distributed computing environments, program modules may be physically located in settings that are local, remote, or both. One skilled in the art will recognize no computing system or programming language is critical to the practice of the present disclosure. One skilled in the art will also recognize that a number of the elements described above may be physically and/or functionally separated into modules and/or sub-modules or combined together. It will be appreciated to those skilled in the art that the preceding examples and embodiments are exemplary and not limiting to the scope of the present disclosure. It is intended that all permutations, enhancements, equivalents, combinations, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It shall also be noted that elements of any claims may be arranged differently including having multiple dependencies, configurations, and combinations.
39,642
11861388
The foregoing and other features of the present disclosure will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings. DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure. In a non-persistent (e.g., stateless) desktop account of a virtual desktop infrastructure (VDI), sessions are stateless. That is, changes made to a profile, such as application settings, desktop preferences and/or login credentials, of a virtual machine (VM) instance are deleted from the VM instance after the session is closed. The VM instance can then be returned to a pool where it waits to be served to the next user. Users desire to preserve the profile in a VDI account. Conventional technologies that attempt personalization of non-persistent desktops do not work out-of-the-box for non-domain joined instance (non-DJI) VMs, which are VMs that lack group policy for management and domain trust for authentication/authorization. Disclosed herein are embodiments of a system, method, and computer readable media that allow personalization of a user profile while still delivering virtual desktops from a master image on non-DJI virtual machines. In some embodiments, the system collects user profile data on session end and saves the user profile data to a secure disk tied to a specific non-persistent desktop user. In some embodiments, when the user logs into a stateless session, their profile disk is automatically attached to the virtual machine and made available to their session. Advantageously, the system provides seamless per-user customization for non-DJI instances without losing the management benefits of stateless instances. Moreover, the system decouples the user profile from the operating system (OS), making user profiles universally portable. In some embodiments, the end user logs into a platform and performs an input/output (I/O) action (e.g., clicks) to start their Desktop or Application. The I/O action can instruct a backplane to look for a disk associated with a profile of the end user and attach it to an incoming session. In some embodiments, once the physical disk is successfully attached, the user is logged in to a session on a VM, and the user profile is mounted in an operating system of the VM. Once the session is closed, the user session can be logged off, any changes can be persisted in the user profile, and the profile disk can be detached. In some embodiments, administrators can backup/restore user profile disks.
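As a non-limiting illustration of this flow, the short Python sketch below shows the order of operations around a stateless session: the profile disk is attached before login and detached after logoff, so the VM itself retains no per-user state. Every object and method name in the sketch (platform, infra, vm, and their calls) is a hypothetical stand-in rather than an actual product API.

def run_stateless_session(platform, infra, user, vm):
    disk = platform.ensure_profile_disk(infra, user)   # find the user's disk, or create it on a first session
    infra.attach_disk(disk, vm)                        # profile disk made available to the incoming session
    vm.log_in(user)                                    # user profile mounted from the attached disk
    vm.wait_for_session_end()
    vm.log_off(user)                                   # changes persisted to the user profile on the disk
    infra.detach_disk(disk, vm)                        # VM returns to the pool with no per-user state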
Virtualization Technology and Environment Referring now toFIG.1, a virtual computing system100is shown, in accordance with some embodiments of the present disclosure. The virtual computing system100includes a plurality of nodes, such as a first node105, a second node110, and a third node115. Each of the first node105, the second node110, and the third node115may also be referred to as a “host” or “host machine.” The first node105includes user virtual machines (“user VMs”)120A and120B (collectively referred to herein as “user VMs120”), a hypervisor125configured to create and run the user VMs, and a controller VM130configured to manage, route, and otherwise handle workflow requests between the various nodes of the virtual computing system100. Similarly, the second node110includes user VMs135A and135B (collectively referred to herein as “user VMs135”), a hypervisor140, and a controller VM145, and the third node115includes user VMs150A and150B (collectively referred to herein as “user VMs150”), a hypervisor155, and a controller VM160. The controller VM130, the controller VM145, and the controller VM160are all connected to a network165to facilitate communication between the first node105, the second node110, and the third node115. Although not shown, in some embodiments, the hypervisor125, the hypervisor140, and the hypervisor155may also be connected to the network165. The virtual computing system100also includes a storage pool170. The storage pool170may include network-attached storage (NAS)175and direct-attached storage (DAS)180A,180B, and180C (collectively referred to herein as DAS180). The NAS175is accessible via the network165and, in some embodiments, may include cloud storage185, as well as local storage area network190(also referred to as networked storage190). In contrast to the NAS175, which is accessible via the network165, the DAS180includes storage components that are provided internally within each of the first node105, the second node110, and the third node115, respectively, such that each of the first, second, and third nodes may access its respective DAS without having to access the network165. It is to be understood that only certain components of the virtual computing system100are shown inFIG.1. Nevertheless, several other components that are needed or desired in the virtual computing system100to perform the functions described herein are contemplated and considered within the scope of the present disclosure. Although three of the plurality of nodes (e.g., the first node105, the second node110, and the third node115) are shown in the virtual computing system100, in other embodiments, greater than or fewer than three nodes may be used. Likewise, although only two of the user VMs (e.g., the user VMs120, the user VMs135, and the user VMs150) are shown on each of the respective first node105, the second node110, and the third node115, in other embodiments, the number of the user VMs on each of the first, second, and third nodes may vary to include either a single user VM or more than two user VMs. Further, the first node105, the second node110, and the third node115need not always have the same number of the user VMs (e.g., the user VMs120, the user VMs135, and the user VMs150). In some embodiments, each of the first node105, the second node110, and the third node115may be a hardware device, such as a server. 
For example, in some embodiments, one or more of the first node105, the second node110, and the third node115may be an NX-1000 server, NX-3000 server, NX-6000 server, NX-8000 server, etc. provided by Nutanix, Inc. or server computers from Dell, Inc., Lenovo Group Ltd. or Lenovo PC International, Cisco Systems, Inc., etc. In other embodiments, one or more of the first node105, the second node110, or the third node115may be another type of hardware device, such as a personal computer, an input/output or peripheral unit such as a printer, or any type of device that is suitable for use as a node within the virtual computing system100. In some embodiments, the virtual computing system100may be part of a data center. Each of the first node105, the second node110, and the third node115may also be configured to communicate and share resources with each other via the network165. For example, in some embodiments, the first node105, the second node110, and the third node115may communicate and share resources with each other via the controller VM130, the controller VM145, and the controller VM160, and/or the hypervisor125, the hypervisor140, and the hypervisor155. One or more of the first node105, the second node110, and the third node115may be organized in a variety of network topologies. Also, the first node105may include one or more processing units192A, the second node110may include one or more processing units192B, and the third node115may include one or more processing units192C. The processing units192A,192B, and192C are collectively referred to herein as the processing units192. The processing units192may be configured to execute instructions. The instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits of the first node105, the second node110, and the third node115. The processing units192may be implemented in hardware, firmware, software, or any combination thereof. The term “execution” is, for example, the process of running an application or the carrying out of the operation called for by an instruction. The instructions may be written using one or more programming language, scripting language, assembly language, etc. The processing units192, thus, execute an instruction, meaning that they perform the operations called for by that instruction. The processing units192may be operably coupled to the storage pool170, as well as with other elements of the first node105, the second node110, and the third node115to receive, send, and process information, and to control the operations of the underlying first, second, or third node. The processing units192may retrieve a set of instructions from the storage pool170, such as, from a permanent memory device like a read only memory (“ROM”) device and copy the instructions in an executable form to a temporary memory device that is generally some form of random access memory (“RAM”). The ROM and RAM may both be part of the storage pool170, or in some embodiments, may be separately provisioned from the storage pool. The RAM may be stand-alone hardware such as RAM chips or modules. Further, each of the processing units192may include a single stand-alone processing unit, or a plurality of processing units that use the same or different processing technology. With respect to the storage pool170and particularly with respect to the DAS180, each of the DAS180may include a variety of types of memory devices. 
For example, in some embodiments, one or more of the DAS180may include, but is not limited to, any type of RAM, ROM, flash memory, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., compact disk (“CD”), digital versatile disk (“DVD”), etc.), smart cards, solid state devices, etc. Likewise, the NAS175may include any of a variety of network accessible storage (e.g., the cloud storage185, the local storage area network190, etc.) that is suitable for use within the virtual computing system100and accessible via the network165. The storage pool170, including the NAS175and the DAS180, together form a distributed storage system configured to be accessed by each of the first node105, the second node110, and the third node115via the network165, the controller VM130, the controller VM145, the controller VM160, and/or the hypervisor125, the hypervisor140, and the hypervisor155. In some embodiments, the various storage components in the storage pool170may be configured as virtual disks for access by the user VMs120, the user VMs135, and the user VMs150. Each of the user VMs120, the user VMs135, and the user VMs150is a software-based implementation of a computing machine in the virtual computing system100. The user VMs120, the user VMs135, and the user VMs150emulate the functionality of a physical computer. Specifically, the hardware resources, such as processing unit, memory, storage, etc., of the underlying computer (e.g., the first node105, the second node110, and the third node115) are virtualized or transformed by the respective hypervisor125, the hypervisor140, and the hypervisor155, into the underlying support for each of the user VMs120, the user VMs135, and the user VMs150that may run its own operating system and applications on the underlying physical resources just like a real computer. By encapsulating an entire machine, including CPU, memory, operating system, storage devices, and network devices, the user VMs120, the user VMs135, and the user VMs150are compatible with most standard operating systems (e.g. Windows, Linux, etc.), applications, and device drivers. Thus, each of the hypervisor125, the hypervisor140, and the hypervisor155is a virtual machine monitor that allows a single physical server computer (e.g., the first node105, the second node110, third node115) to run multiple instances of the user VMs120, the user VMs135, and the user VMs150, with each user VM sharing the resources of that one physical server computer, potentially across multiple environments. By running the user VMs120, the user VMs135, and the user VMs150on each of the first node105, the second node110, and the third node115, respectively, multiple workloads and multiple operating systems may be run on a single piece of underlying hardware computer (e.g., the first node, the second node, and the third node) to increase resource utilization and manage workflow. The user VMs120, the user VMs135, and the user VMs150are controlled and managed by their respective instance of the controller VM130, the controller VM145, and the controller VM160. The controller VM130, the controller VM145, and the controller VM160are configured to communicate with each other via the network165to form a distributed system195. Each of the controller VM130, the controller VM145, and the controller VM160may also include a local management system configured to manage various tasks and operations within the virtual computing system100. 
For example, in some embodiments, the local management system may perform various management related tasks on the user VMs120, the user VMs135, and the user VMs150. The hypervisor125, the hypervisor140, and the hypervisor155of the first node105, the second node110, and the third node115, respectively, may be configured to run virtualization software, such as, ESXi from VMWare, AHV from Nutanix, Inc., XenServer from Citrix Systems, Inc., etc. The virtualization software on the hypervisor125, the hypervisor140, and the hypervisor155may be configured for running the user VMs120, the user VMs135, and the user VMs150, respectively, and for managing the interactions between those user VMs and the underlying hardware of the first node105, the second node110, and the third node115. Each of the controller VM130, the controller VM145, the controller VM160, the hypervisor125, the hypervisor140, and the hypervisor155may be configured as suitable for use within the virtual computing system100. The network165may include any of a variety of wired or wireless network channels that may be suitable for use within the virtual computing system100. For example, in some embodiments, the network165may include wired connections, such as an Ethernet connection, one or more twisted pair wires, coaxial cables, fiber optic cables, etc. In other embodiments, the network165may include wireless connections, such as microwaves, infrared waves, radio waves, spread spectrum technologies, satellites, etc. The network165may also be configured to communicate with another device using cellular networks, local area networks, wide area networks, the Internet, etc. In some embodiments, the network165may include a combination of wired and wireless communications. Referring still toFIG.1, in some embodiments, one of the first node105, the second node110, or the third node115may be configured as a leader node. The leader node may be configured to monitor and handle requests from other nodes in the virtual computing system100. For example, a particular user VM (e.g., the user VMs120, the user VMs135, or the user VMs150) may direct an input/output request to the controller VM (e.g., the controller VM130, the controller VM145, or the controller VM160, respectively) on the underlying node (e.g., the first node105, the second node110, or the third node115, respectively). Upon receiving the input/output request, that controller VM may direct the input/output request to the controller VM (e.g., one of the controller VM130, the controller VM145, or the controller VM160) of the leader node. In some cases, the controller VM that receives the input/output request may itself be on the leader node, in which case, the controller VM does not transfer the request, but rather handles the request itself. The controller VM of the leader node may fulfil the input/output request (and/or request another component within the virtual computing system100to fulfil that request). Upon fulfilling the input/output request, the controller VM of the leader node may send a response back to the controller VM of the node from which the request was received, which in turn may pass the response to the user VM that initiated the request. In a similar manner, the leader node may also be configured to receive and handle requests (e.g., user requests) from outside of the virtual computing system100. If the leader node fails, another leader node may be designated. 
Furthermore, one or more of the first node105, the second node110, and the third node115may be combined together to form a cluster (e.g., storage cluster, physical cluster, cluster of nodes, cluster of nodes in a network, etc.). Generally speaking, all of the nodes (e.g., the first node105, the second node110, and the third node115) in the virtual computing system100may be divided into one or more clusters. One or more components of the storage pool170or the processing units192may be part of the cluster as well. For example, the virtual computing system100as shown inFIG.1may form one cluster in some embodiments. Multiple clusters may exist within a given virtual computing system (e.g., the virtual computing system100). The user VMs120, the user VMs135, and the user VMs150that are part of a cluster are configured to share resources with each other. In some embodiments, multiple clusters may share resources with one another. Additionally, in some embodiments, the virtual computing system100includes a central management system197that is configured to manage and control the operation of the various clusters in the virtual computing system. In some embodiments, the central management system197may be configured to communicate with the local management systems on each of the controller VM130, the controller VM145, and the controller VM160for controlling the various clusters. Again, it is to be understood that only certain components and features of the virtual computing system100are shown and described herein. Nevertheless, other components and features that may be needed or desired to perform the functions described herein are contemplated and considered within the scope of the present disclosure. It is also to be understood that the configuration of the various components of the virtual computing system100described above is only an example and is not intended to be limiting in any way. Rather, the configuration of those components may vary to perform the functions described herein. User Profile Management for Non-domain Joined Instance Virtual Machines Referring now toFIG.2, an example block diagram of a non-domain joined instance (non-DJI) system200is shown, in accordance with some embodiments of the present disclosure. The non-DJI system200includes a user (e.g., a computing device of the user, a client)201, a platform202in communication with the user201, and a plurality of virtual machines206. The platform202(e.g., server) includes a control panel (e.g., a launchpad, a dashboard, a front-end of the platform)203, a workload manager (e.g., a backplane, a back-end of the platform, a profile disk manager, an infrastructure as a service (IaaS) orchestrator, a gateway to the infrastructure provider)204, and a broker205. In some embodiments, the control panel203has programmed instructions to receive one or more inputs from a user to initiate a session. The workload manager204has programmed instructions to communicate with infrastructure providers by, e.g., making calls to the infrastructure provider application programming interface (API). The workload manager204has programmed instructions to (e.g., makes an API call to the infrastructure provider API to) attach a disk (e.g., a profile disk, a physical disk) for storing a profile to a reserved VM at a session start and detach a disk at a session end.
The profile includes registry and file system artifacts/data (e.g., views, shortcuts, wallpapers, screen savers, color schemes, supplementary files, dictionaries, signatures, auto-complete files, MRU lists, cookies, history, toolbars, auto text, connection settings, etc.) that are associated with the individual user's session and shape an OS/application environment within that. The workload manager204has programmed instructions to backup and restore a user's profile disk, as well as adjust a size of the user's profile disk. The plurality of virtual machines206includes a virtual machine (VM) image (e.g., master image, gold image)207and a plurality of workload VMs (e.g., production VMs, pool of VMs, etc.)208. Each workload VM208(e.g., a workload VM208A) includes an operating system (OS)209and applications such as a guest agent210, a credential provider211, and a disk service212(e.g., a profile disk service212). In some embodiments, the workload manager204has programmed instructions to create (e.g., clone, generate) the plurality of workload VMs208from the VM image207. The credential provider211has programmed instructions to initiate a login for a user (e.g., a OS209user) logged into an account (e.g., a OS account) of the OS209. The disk service212has programmed instructions to, in some embodiments, suspend the OS209login sequence to perform (e.g., automatically, transparently) a mapping of an external profile stored on a profile disk using the file system filter driver and registry filter driver. The guest agent210has programmed instructions to orchestrate tasks on managed VMs including the calls to credential provider211or disk service212. The credential provider211enables a remote, programmatic interactive session logon with supplied local or domain user credentials. In some embodiments, profile management for non-domain joined instance (non-DJI) VMs uses local (e.g., to the platform) credentials. In some embodiments, credentials are generated (e.g., randomly) on a session start by the guest agent210(e.g., not passed remotely). In some embodiments, credentials are discarded (e.g., not persisted) on a session end. With reference toFIG.2, a first connection/communication channel (e.g., a web-socket connection) is established between the user201and the platform202(e.g., the control panel203). The user201logs into the control panel203. In some embodiments, the user201is an individual (e.g., an employee, a user without administrative rights, an administrator) or another service/application. In some embodiments, the control panel203is a web application (e.g., accessible in a browser). The control panel203has programmed instructions to generate a request based on interaction from the user201or receive a request from the user201to start an application or desktop session. In some embodiments, the control panel203has programmed instructions to generate the request by receiving a single input (e.g., single click) from the user201. In some embodiments, to generate/receive the request, the user201directs a cursor to an icon or other visual element and clicks on it using an input device (e.g., a mouse). In some embodiments, to generate/receive the request, the request is generated/received automatically responsive to the user201logging into the control panel203. The request includes information about the user201such as a user name, whether the non-DJI feature is enabled (e.g., whether the user201is part of a workgroup instead of a domain), and the like. 
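As a non-limiting illustration of how such request information might be inspected, the Python sketch below flags a request as non-DJI when it indicates workgroup membership or when the user name carries no domain suffix. The field names in the request dictionary are hypothetical and used only for illustration.

def is_non_dji_request(request):
    # A workgroup user, or a local user name without an "@domain" suffix, is
    # treated as a non-domain joined (non-DJI) user.
    user_name = request.get("user_name", "")
    return bool(request.get("workgroup")) or "@" not in user_name


print(is_non_dji_request({"user_name": "alice", "workgroup": True}))   # True
print(is_non_dji_request({"user_name": "alice@corp.example.com"}))     # False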
The control panel203forwards the request or a subset of the information from the request to the workload manager204. In some embodiments, the workload manager204, the broker205, or some other component of the platform202, has programmed instructions to assign (e.g., select, reserve) one of the powered-up workload VMs208(e.g., VM208A) to the user. If none of the workload VMs208is on, the workload manager204has programmed instructions to (e.g., makes an API call to the infrastructure provider API to) power up (e.g., turn on, boot up, enable, activate, make available, etc.) one of the workload VMs208(e.g., VM208A). In some embodiments, the workload VM208A notifies (e.g., indicates to, specifies to, alerts, informs) the platform202that the workload VM208A is powered-up. Once powered-up, the platform202(e.g., the workload manager204) sends a set of tasks to the workload VM208A (e.g., the guest agent210) using, in some embodiments, a command-and-control protocol (e.g., a hypertext transfer protocol secure (https) protocol over the port tcp/8112) over a second connection/communication channel. The workload VM208A (e.g., the guest agent210) sends a response using a response protocol (e.g., an https protocol over the port tcp/443) over a third connection/communication channel. In some embodiments, the communication of tasks is done asynchronously. In some embodiments, the workload manager204sends a first task, the guest agent210executes the task, the guest agent210responds to confirm the task was completed, the workload manager204sends a second task, and so on. In some embodiments, the set of tasks include logging off all users currently logged into the OS209, attaching a disk, logging on a non-DJI user, and mounting a profile to the VM. In some embodiments, the set of tasks are given in a predetermined order. The platform202(e.g., the control panel203or the workload manager204) has programmed instructions to determine that the user201has a non-DJI feature enabled (e.g., that the user201is a non-DJI user, that the user201has requested a session on a non-DJI VM, etc.). In some embodiments, a user having the non-DJI feature enabled can gain access to a VM even though the user is a non-DJI user (e.g., the user or an account of the user is not part of an enterprise/organization domain/directory managed by a directory service (e.g., Active Directory)). In some embodiments, a non-DJI VM is a standalone VM and not part of an organization's directory, at least with respect to the non-DJI user. A non-DJI VM lacks group policy for management and domain trust for authentication/authorization. A non-DJI VM does not have access to network/corporate resources (e.g., servers, processors, memory, storage). All the resources of a non-DJI VM are local resources. The workload manager204has programmed instructions to determine that the user has the non-DJI feature enabled based on the information, or subset of information, in the user session request that is received from the control panel203. For example, the information may include that the user is part of a workgroup, as opposed to a domain, or the information may include that the user name/identity is a local name/identity, e.g., does not include a domain or enterprise name (or an “@” or other special character, followed by the domain or enterprise name). In some embodiments, the platform202(e.g., the workload manager204) has programmed instructions to notify the workload VM208A (e.g., the guest agent210) of a new user session.
Notification of the new user session includes that the user session is pending and that the user has the non-DJI feature enabled. The workload manager204sends/forwards the session request, or a subset of information from the session request, to the guest agent210. The guest agent210has programmed instructions to determine that the user has the non-DJI feature enabled. The guest agent210has programmed instructions to determine that the user has the non-DJI feature enabled based on the information, or subset of information, in the user session request that is received from the workload manager204. For example, the information may include that the user is part of a workgroup, as opposed to a domain, or the information may include that the user name/identity is a local name/identity, e.g., does not include a domain or enterprise name (or an “@” or other special character, followed by the domain or enterprise name). In some embodiments, the user/session information is sent as part of the first task (e.g., to log off all users). In some embodiments, the workload manager204has programmed instructions to send a request/command to the guest agent210to log off all users. In some embodiments, the guest agent210has programmed instructions to notify the platform202that the guest agent210has logged off all users currently logged into the OS209and/or that the platform202is to attach a profile disk. The platform202(e.g., the workload manager204) has programmed instructions to check if the user201has a profile disk (e.g., a disk assigned to the user). The platform202has programmed instructions to determine if the session is a first session or a repeat (e.g., second) session. If the platform202determines that the session is a first session (e.g., chronologically first session, first time, an earliest session, the disk is not yet created, etc.), the platform202makes a call to the infrastructure provider API in order to create a new disk for the user. If the platform202determines that the session is a repeat session (e.g., the disk is already created), the platform202proceeds to the next step. The platform202(e.g., the workload manager204) makes a call to the infrastructure provider API to attach the disk to the workload VM208A. In some embodiments, the workload manager204has programmed instructions to send a request/command to the guest agent210to check if the user201has a profile disk, the guest agent210responds, and based on the response, the workload manager204decides whether to create a new disk or to attach the pre-existing disk. Once the disk is successfully attached, the workload manager204has programmed instructions to notify the guest agent210that the disk is attached to (e.g., is included in, is present in, or is otherwise associated with) the VM208A and to validate the disk (e.g., check whether the disk is formatted). The guest agent210has programmed instructions to determine that the disk has been attached by, e.g., receiving the notification from the workload manager204. The guest agent210has programmed instructions to check a file system on the attached disk. The guest agent210has programmed instructions to determine if the session is a first session or a repeat (e.g., second) session. Responsive to determining that the session is a first session, the guest agent210has programmed instructions to format the disk to an appropriate format, such as new technology file system (NTFS), create a volume identifier/label (e.g., “ProfileDisk”) and set a drive letter (e.g., to drive U:).
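As a non-limiting illustration of the disk handling just described, the Python sketch below combines the two decisions made above: the workload manager either creates a new profile disk (first session) or attaches the pre-existing one (repeat session), and the guest agent then formats and labels the disk only when it is brand new. All object, method, and field names here (infra, find_profile_disk, format_ntfs, and so on) are hypothetical stand-ins rather than an actual provider or OS API.

def create_or_attach_profile_disk(infra, user_id, vm_id):
    # Workload manager side: first session -> create the disk; repeat session
    # -> reuse the disk that already exists for this user.
    disk = infra.find_profile_disk(user_id)
    if disk is None:
        disk = infra.create_disk(user_id)
    infra.attach_disk(disk, vm_id)
    return disk


def validate_profile_disk(os_api, disk):
    # Guest agent side: a freshly created disk has no file system yet, so it
    # is formatted (e.g., NTFS), labeled "ProfileDisk", and given a drive
    # letter (e.g., U:). A previously used disk is left untouched.
    if not os_api.has_filesystem(disk):
        os_api.format_ntfs(disk)
        os_api.set_volume_label(disk, "ProfileDisk")
        os_api.assign_drive_letter(disk, "U:")
    return disk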
Responsive to determining that the session is a repeat session, the guest agent210proceeds to the next step. Once the disk is successfully validated and configured by the guest agent210, in some embodiments, the workload manager204has programmed instructions to send a request/command to the guest agent210to initiate login of a non-DJI user. Once the disk is successfully validated and configured by the guest agent210, the guest agent210has programmed instructions to enable the disk service212. In some embodiments, the disk service212is disabled by default in the master image207. In some embodiments, the guest agent210has programmed instructions to assign any non-DJI user the same user account (locally created in the OS209) that is, in some embodiments, exclusively for the non-DJI feature. Responsive to determining that the session is a first session, the guest agent210has programmed instructions to (e.g., randomly) generate, or request generation of, a password (e.g., a random string). The generated password is linked to the OS209account. In some embodiments, a name identifying the OS209account is predetermined. In some embodiments, the name is configurable. Responsive to determining that the session is a repeat session, the guest agent210has programmed instructions to identify the password (e.g., fetch it from the appropriate memory or storage). After generating or identifying the password, the guest agent210has programmed instructions to set (e.g., reset) the password for the OS209account. The guest agent210has programmed instructions to pass credentials (e.g., name and the reset password) to the credential provider211. The credential provider211has programmed instructions to initiate/perform a user login for the OS209account. The disk service212has programmed instructions to detect a user login process and check if a profile associated with the OS209account is located on the profile disk. The disk service212has programmed instructions to determine if the session is a first session or a repeat (e.g., second) session. Responsive to determining that the session is a first session (e.g., a profile has not been created), the disk service212has programmed instructions to redirect, to the external profile disk, writes intended for the profile (e.g., decouple the profile from the OS209drive). Responsive to determining that the session is a repeat session or after redirecting writes, the disk service212has programmed instructions to mount (e.g., load, make accessible) the profile from the profile disk to the OS209account. The guest agent210has programmed instructions to detect that the disk service212has finished the profile mount process and notify the platform202(e.g., the workload manager204) that a machine (e.g., the workload VM208A, a machine/hardware associated with the workload VM208A) is ready for the session (e.g., that login is complete). In some embodiments, the workload manager204instructs the control panel203that the VM208A is ready. In some embodiments, the control panel203loads a terminal into a browser of the computing device of the user201. The terminal establishes a fourth connection/communication channel (web socket connection) between the user201and the VM208A using, for example, a remoting (e.g., streaming) protocol. The remoting protocol can be https or a proprietary protocol. Traffic is bidirectional on the fourth connection.
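As a non-limiting illustration of the credential handling described above, the Python sketch below generates a random password on a first session (or retrieves the previously stored one on a repeat session), resets the local OS account password, and hands the credentials to the credential provider. The account name, the session_store, and the credential_provider interface are hypothetical details chosen only for illustration.

import secrets

NON_DJI_ACCOUNT = "nondji-session-user"   # hypothetical predetermined account name


def prepare_non_dji_login(session_store, credential_provider, first_session):
    if first_session:
        # First session: generate a random string and link it to the OS account.
        password = secrets.token_urlsafe(24)
        session_store[NON_DJI_ACCOUNT] = password
    else:
        # Repeat session: identify the previously generated password.
        password = session_store[NON_DJI_ACCOUNT]
    # Set (e.g., reset) the password for the local account, then pass the
    # credentials to the credential provider to initiate the login.
    credential_provider.set_password(NON_DJI_ACCOUNT, password)
    credential_provider.logon(NON_DJI_ACCOUNT, password)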
Over this fourth connection, for example, the OS209or a predetermined subset of applications running on the OS209are rendered on the terminal of user201, and user inputs in the terminal are sent to the workload VM208A. Whether the OS209or a predetermined subset of applications running on the OS209are rendered, or otherwise made accessible to the user201, can be predetermined by an administrator. Once the session is closed, the guest agent210has programmed instructions to log off of the OS209account, and the changes to the profile are saved. Once the interactive session is closed, the guest agent210has programmed instructions to inform the platform202that a session is terminated (e.g., doesn't exist) and request for the disk to be detached from the workload VM208A. Responsive to the request, the platform202has programmed instructions to detach the profile disk. Referring now toFIG.3, a flowchart of an example method300for starting a session is illustrated, in accordance with some embodiments of the present disclosure. The method300may be implemented using, or performed by, non-DJI system200, one or more components of the non-DJI system200(e.g., the user201, the platform202, the workload manager204, the broker205, the workload VM208A, the OS209, the guest agent210, the credential provider211, and the disk service212), or a processor associated with the non-DJI system200or the one or more components of the non-DJI system200. Additional, fewer, or different operations may be performed in the method300depending on the embodiment. At operation302, the control panel203requests a session. In some embodiments, the control panel203requests, or receives a request for, the session responsive to a single input (e.g., click) from a user (e.g., a computing device of the user)201. In some embodiments, the remaining operations execute automatically responsive to the session request. At operation304, the workload manager204, running on the platform202, reserves the workload VM208A for (e.g., assigns to) a user201. At operation306, the guest agent210, running on the OS209of the workload VM208A, determines that the user201is associated with a non-DJI user. At operation308, the guest agent210logs off all users from the OS209of the workload VM208A. In some embodiments, the guest agent210notifies the workload manager204that the guest agent210is ready for the session (e.g., ready for attaching a disk, that all users are logged off from the OS209of the workload VM208A). In some embodiments, the notification includes a request to attach a disk and/or to check if the user201has a profile. At operation310, the workload manager204checks if the user201has a profile. If the workload manager204determines that the user201does not have the profile, at operation312, the workload manager204causes (e.g., sends an API call to) an infrastructure provider API to create a profile (e.g., assign space on a disk to the user). If the workload manager204determines that the user201has the profile, or the workload manager204completes operation312, then, at operation314, the workload manager204causes (e.g., sends an API call to) an infrastructure provider API to attach the disk. At operation316, the guest agent210assigns a non-DJI account to the user. At operation318, the guest agent210sets/resets a session password for the non-DJI account. In some embodiments, the guest agent210sends the session password to the credential provider211of the OS209. At operation320, the guest agent210causes the credential provider211to initiate the user login.
In some embodiments, the guest agent210sends a notification to the disk service212of the OS209that the profile is ready to be mounted. At operation322, the guest agent210causes the disk service212to mount a profile onto the VM208A. In some embodiments, the disk service212intercepts writes intended for the OS209related to the profile, and redirects them to the disk. At operation324, the workload manager204notifies the control panel203that the session is ready. At operation326, the control panel203loads a terminal within a browser of the user201. The terminal establishes communication with the workload VM208A (e.g., the guest agent210and/or the OS209). In some embodiments, further communications are supported over the connection. For example, the OS209and the applications of the workload VM208A are rendered on the computing device of the user201and the VM208A receives any inputs from the computing device of the user201. In some embodiments, the platform202is one or more instances of a user VM120with respect to the virtual computing system100ofFIG.1. In some embodiments, the workload VM208A is an instance of a user VM120with respect to the virtual computing system100ofFIG.1. In some embodiments, the workload VM208A is on the same cluster (e.g., datacenter, region, public cloud, private cloud, etc.) as, or a different cluster than, the cluster on which the platform202resides. In some embodiments, the profile disk is part of the storage pool170ofFIG.1. In some embodiments, instead of virtual machines (e.g.,208A), containers or other computing elements can be used. In some embodiments, the platform202is hosted on a cloud or datacenter by an enterprise or an infrastructure provider. In some embodiments, the cloud or datacenter may have multiple availability zones. Each of the components (e.g., elements, entities) of the virtual computing system100and the non-DJI system200(e.g., the user201, the platform202, the workload manager204, the broker205, the workload VM208A, the OS209, the guest agent210, the credential provider211, and the disk service212), is implemented using hardware, software, or a combination of hardware and software, in one or more embodiments. The components of the virtual computing system100and the non-DJI system200can include any application, program, library, script, task, service, process or any type and form of executable instructions executed by one or more processors (e.g., the processing unit192A), in one or more embodiments. Each of the one or more processors is hardware. The instructions may be stored on one or more computer readable and/or executable storage media including non-transitory storage media such as non-transitory storage media in the storage pool170with respect toFIG.1. It is to be understood that any examples used herein are simply for purposes of explanation and are not intended to be limiting in any way. The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved.
Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable,” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components. With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to disclosures containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). 
In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Further, unless otherwise noted, the use of the words “approximate,” “about,” “around,” “substantially,” etc., mean plus or minus ten percent. The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the disclosure be defined by the claims appended hereto and their equivalents.
46,466
11861389
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS During the course of an application lifecycle management, it is often difficult to switch from one virtual application to another virtual application to perform different functions and/or phases of the application lifecycle management. Often, these virtual applications support different languages and/or structures. Thus, conversion and/or transfer of software, programs, code and/or other specified instructions from one virtual application to another virtual application can be difficult and tedious. Various embodiments of the present disclosure describe novel and nonobvious systems and methods for compiling a specified instruction from a first virtual application to a second virtual application. For example, KUBERNETES is a container-orchestration system for automating computer application deployment, scaling, and management. KUBERNETES often relies on one or more virtual applications, such as HELM and ANSIBLE. While HELM is often used to provide the early phases of application lifecycle development, HELM operators may not be as effective or desirable in performing later phases of the application lifecycle development. Often, users desire operators with robust functionality in order to take care of the later phases. A virtual application like ANSIBLE may have such robust operators. There is a desire and need for a tool to automate the conversion from operators used by HELM to operators used by ANSIBLE, as the ANSIBLE operators are more suitable for the later phases of application lifecycle development. However, while HELM operators are written using the GOLANG template language, ANSIBLE operators are written using JINJA2 and YAML. Helm ANSIBLE template exporter is a set of utilities that may be used to automate conversion of HELM operators (e.g., HELM Charts) into ANSIBLE operators. However, the conversion of HELM operators (e.g., HELM Charts) into ANSIBLE operators is not straightforward. The presently disclosed systems and methods provide novel and nonobvious systems and methods for converting HELM operators into ANSIBLE operators. Furthermore, the disclosed process of compiling the specified instruction from the first virtual application (e.g., HELM) to the second virtual application (e.g., ANSIBLE) may involve the generation of new syntax features (e.g., “second syntax features”) that are based on a determination of more precise definitions of variables and properties previously defined by current syntax features used by the first virtual application (e.g., “first syntax features”). Thus, generating the second syntax features may result in a smaller allocation of dynamic memory when executing the specified instruction in the second virtual application, as the processor may more quickly determine the precise definition of a variable or property defined by a syntax feature. The execution of a specified instruction using the new syntax features may result in shorter runtime, increased accuracy of the intended result of the specified instruction, and a more robust software development. For example, certain variables, and/or their definitions, may be deemed to not be as useful or relevant, e.g., in the application lifecycle management operations performed by the second virtual application. Such variables and/or their definitions or properties may be eliminated. In some aspects, the elimination may involve analyzing the specified instructions, generating an Abstract Syntax Tree (AST), and then performing a branch pruning. 
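As a non-limiting illustration of the branch-pruning idea, the Python sketch below parses a small specified instruction into an abstract syntax tree and removes assignments to variables that have been judged irrelevant. The use of Python's ast module and the contents of the IRRELEVANT set are assumptions made purely for this illustration; the operators actually at issue are written in template languages rather than Python.

import ast

SOURCE = """
replica_count = 3
debug_banner = "unused in later phases"
print(replica_count)
"""

IRRELEVANT = {"debug_banner"}   # hypothetical set of variables deemed not useful


class PruneIrrelevant(ast.NodeTransformer):
    def visit_Assign(self, node):
        targets = {t.id for t in node.targets if isinstance(t, ast.Name)}
        if targets & IRRELEVANT:
            return None            # prune this branch of the tree
        return node


tree = ast.parse(SOURCE)
pruned = ast.fix_missing_locations(PruneIrrelevant().visit(tree))
print(ast.unparse(pruned))         # the pruned instruction, without debug_banner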
In some aspects, the execution of the specified instruction using the new syntax features generated using the systems and methods described herein may increase processor efficiency, e.g., by requiring fewer CPU cycles. For example, a first syntax feature (e.g., a “for” loop, such as “for x in range(5) print (x)”) may take a very long time to execute in the second virtual application. The first syntax feature may be replaced with a substitute syntax feature (e.g., a plurality of “print” functions, such as “print(1)print(2)print(3)print(4)print(5)”), which may achieve the same result but may take less time. FIG.1illustrates a block diagram of an example computer network environment for compiling a specified instruction from a first virtual application to a second virtual application according to an example embodiment of the present disclosure. The network environment100may include a computing device102, a first virtual application server130A, and a second virtual application server130B. One or more of these components may be able to communicate with one another over a communication network150. As will be described, these components may be used to compile, cross-compile, convert, and/or execute a specified instruction from a first virtual application to a second virtual application according to an example embodiment of the present disclosure. For example, at a high level, a user may have developed software to a certain stage using an application development platform ("first application development platform") provided by a first virtual application server. The user may wish to use a second application development platform provided by the second virtual application server for a number of reasons, as will be explained in relation toFIG.4. While the user may thus want to transfer the software developed using the first application development platform to the second application development platform, the software may have been developed in a programming language and structure used by the first application development platform but not the second application development platform. The computing device may involve subsystems that coordinate with subsystems of the first virtual application server130A and second virtual application server130B to transfer the developed software to the second application development platform, e.g., so that the user can continue software development operations on the second application development platform. The computing device102may comprise a portable computing device (e.g., a mobile device, personal digital assistant, laptop, tablet computer, smart camera, etc.) having one or more of the subcomponents described herein for compiling a specified instruction from a first virtual application to a second virtual application. The computing device102may include, for example, a cross compiler104, a processor114, memory116, peripherals120, a network interface122, a first virtual application124, and a second virtual application126. The cross compiler104may comprise subcomponents involved with receiving a specified instruction in a first language and syntax being used in the first virtual application124, and converting the specified instruction into a second language and syntax for use in the second virtual application126. The specified instruction, in either or both languages, may be stored as source code118in memory116. In some aspects, the cross compiler104may comprise a user interface (UI)105, a debugger106, a runtime system108, a parser110, and a lexer112.
The UI may allow the user of the computing device102to input commands and view results or status of a compilation. The debugger106may comprise a software component used to test and debug a target program, such as the specified instruction upon a partial and/or complete conversion to a language or syntax supported by the second virtual application126. The runtime system108may comprise a software component used to provide the user with the result of an execution of a specified instruction (e.g., after it has been converted, compiled, and/or cross-compiled into the second virtual application126). The parser110may comprise a software component that takes input data (e.g., the specified instruction from the first virtual application) and builds a data structure (e.g., a parse tree, an abstract syntax tree, a hierarchical structure, etc.). The parser110may be used to provide a structural representation of the input while checking for syntax. The lexer112may comprise a software component that can convert a sequence of characters (e.g., source code for the specified instructions) into a sequence of tokens. The converted tokens may have meaning that may be identifiable by the computing device102, the first virtual application124and/or first virtual application server130A, or the second virtual application126and/or second virtual application server130B. The processor114may comprise any one or more types of digital circuit configured to perform operations on a data stream, including functions described in the present disclosure. The memory116may comprise any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored. The memory may store instructions that, when executed by the processor114, can cause the computing device102to perform one or more methods discussed herein. The peripherals120may comprise auxiliary devices (e.g., keyboard, mouse, monitor, display, graphic user interface, touch-sensitive display, etc.) used to enter input signals and/or view outputted information (e.g., specified instructions at a stage of compilation). The first virtual application124and the second virtual application126may be examples of software development tools, which may be installed in the computing device102, or may otherwise be accessible by the computing device102(e.g., via a browser enablement). The first virtual application124and the second virtual application126may be hosted, managed, and/or otherwise implemented via first virtual application server130A and second virtual application server130B, respectively. Furthermore, the user may desire that a specified instruction (e.g., a software or application that is subjected to application lifecycle management or a software development operation) be migrated from the first virtual application124to the second virtual application126. However, the first virtual application124and the second virtual application126may support different languages and structures for the specified instruction. For example, while HELM may be an example of the first virtual application124, ANSIBLE may be an example of the second virtual application126. While operators used by HELM (e.g., HELM charts) are typically written using the GOLANG template language, operators used in ANSIBLE are typically written using the JINJA2 and YAML languages.
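As a non-limiting illustration of the difference in template languages, the short Python sketch below rewrites a single HELM-style GOLANG template expression into a JINJA2-style expression of the kind used in ANSIBLE content. The regular expression and the camelCase-to-snake_case variable mapping are simplifying assumptions made only for this example; a real exporter would handle far more of the template syntax.

import re

HELM_SNIPPET = "replicas: {{ .Values.replicaCount }}"


def to_snake_case(name):
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()


def golang_values_to_jinja2(text):
    # {{ .Values.someName }}  ->  {{ some_name }}
    return re.sub(
        r"\{\{\s*\.Values\.([A-Za-z0-9_]+)\s*\}\}",
        lambda m: "{{ " + to_snake_case(m.group(1)) + " }}",
        text,
    )


print(golang_values_to_jinja2(HELM_SNIPPET))   # replicas: {{ replica_count }}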
Systems and methods are disclosed herein for compiling the specified instruction from the first virtual application124to the second virtual application126. The first virtual application server130A and the second virtual application server130B may comprise a local or a remote computing system for providing an interface for the first virtual application124and the second virtual application126, respectively, processing and storing information received from the respective virtual applications, enabling access to databases, libraries, and other tools provided by the respective virtual applications, and facilitating the compilation of the specified instruction from the first virtual application124to the second virtual application126. For example, the first virtual application server130A may include one or more subcomponents that help to facilitate the use of the first virtual application124in performing functions of application lifecycle management, and to facilitate providing sufficient information to the computing device102(specifically, for example, the cross compiler104) to convert and/or compile a specified instruction from the first virtual application124to the second virtual application126. For example, the first virtual application server130A may include one or more databases132A, which may store, for example, a template directory134A storing one or more templates136A. The templates136A may comprise, for example, files of various code in the language or structure supported by the first virtual application124. For example, the templates136A may include GOLANG files to support one or more HELM-based operators. The user may, via the first virtual application124, create, read, update, or delete templates. The first virtual application server130A may further comprise an API138A, a library140A, a network interface142A, and one or more tools144A. The API138A can manage interactions with the first virtual application124installed and/or accessed by computing device102and other computing devices, including providing access to other subcomponents of first virtual application server130A (e.g., databases132A, library140A, tools144A). The library140A may store non-volatile resources for use in the first virtual application124for performing one or more functions in the application lifecycle and/or software development. For example, the library140A may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values and/or type specifications. The tools144A may include programs, plug-ins, and/or other operators (e.g., HELM Charts) for use in software development operations performed by the first virtual application124. The second virtual application server130B may include one or more subcomponents that are similar to, or cognate with, the subcomponents of the first virtual application server130A. For example, the second virtual application server130B may include one or more databases132B. As will be described herein, a target data structure133may be created within the databases132B. The target data structure133may include a template directory134B for storing one or more templates136B. The target data structure133may store the specified instructions when or as they are converted and/or compiled to the language and format supported by the second virtual application126. 
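The following Python fragment is a hypothetical sketch of how a target data structure like the one just described might be initialized and populated when the first virtual application is HELM and the second is ANSIBLE. The chart and role paths are assumptions; the ansible-galaxy command and the role layout it creates (including a templates directory and a defaults directory) come from standard ANSIBLE tooling, and running the sketch assumes that tooling is installed.

    # Hypothetical sketch: initialize an ANSIBLE role as the target data structure
    # and copy a HELM chart's templates into it. Chart and role paths are assumptions.
    import shutil
    import subprocess
    from pathlib import Path

    chart_dir = Path("mychart")          # assumed HELM chart location
    role_name = "mychart_role"           # assumed name for the new ANSIBLE role

    # Initialize the target role skeleton (templates/, defaults/, tasks/, ...).
    subprocess.run(["ansible-galaxy", "init", role_name], check=True)

    role_dir = Path(role_name)

    # Copy the chart's GOLANG templates into the role's templates directory.
    shutil.copytree(chart_dir / "templates", role_dir / "templates", dirs_exist_ok=True)

    # Copy templated properties (values.yaml) into the role's defaults.
    shutil.copyfile(chart_dir / "values.yaml", role_dir / "defaults" / "main.yml")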
The second virtual application server130B may further include an application program interface (API)138B for hosting or managing the second virtual application126, a library140B, a network interface142B, and one or more tools144B. The communication network150comprises wired and wireless networks. Examples of the wired networks may include a wide area network (WAN) or a local area network (LAN), a client-server network, a peer-to-peer network, and so forth. Examples of the wireless networks comprise Wi-Fi, a global system for mobile communications (GSM) network, a general packet radio service (GPRS) network, an enhanced data GSM environment (EDGE) network, 802.5 communication networks, code division multiple access (CDMA) networks, Bluetooth networks, a long term evolution (LTE) network, an LTE-advanced (LTE-A) network, or a 5th generation (5G) network. One or more devices of the computer network environment may each comprise a network interface (e.g., network interface122, network interface142A, and network interface142B) to allow the respective device to communicate with the communication network150. For example, the respective network interface may comprise a wired interface (e.g., electrical, RF (via coax), optical (via fiber)), a wireless interface, a modem, etc. FIG.2illustrates a flowchart of an example process200for compiling a specified instruction from a first virtual application to a second virtual application according to an example embodiment of the present disclosure. The process200may be performed by one or more processors of a computing device used to receive a specified instruction, access libraries, create and update data structures, and compile (e.g., as in processor114of computing device102). Although the example process200is described with reference to the flow diagram illustrated inFIG.2, it will be appreciated that many other methods of performing the acts associated with the process200may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described may be optional. Process200may begin with the computing device receiving a request to execute a specified instruction in a second virtual application (block202). In some aspects, the request may be received via user input (e.g., via UI105of the computing device102), and the specified instruction may be retrieved from the first virtual application. For example, a user that is performing application lifecycle testing or a software development operation on a specific application or program (“specified instruction”) using a certain virtual application (e.g., first virtual application124) may wish to use a different virtual application (e.g., second virtual application126), e.g., for the next phase of the application lifecycle or software development. The user may indicate such a request in the cross compiler104of the computing device102via UI105. The cross compiler104of the computing device102may thus receive, from the first virtual application124, the specified instructions for which the user would like to continue the application lifecycle or software development in the second virtual application126. Also or alternatively, the specified instruction may be stored as source code118in memory116, and may be retrieved from the memory116by the cross compiler104. The computing device may create a target data structure within the second virtual application (block204). 
For example, cross compiler104of the computing device102may automatically enter instructions for the creation of target data structure133. When the instructions are executed, e.g., by runtime system108, target data structure133may be created. In some aspects, the target data structure may be created using a library of the second virtual application (e.g., library140B from the second virtual application server130B). For example, a user may want to convert specified instructions from a first virtual application, such as HELM, to a second virtual application, such as ANSIBLE. In such an example, the cross compiler104may utilize a library such as ANSIBLE-GALAXY to initialize a new role for ANSIBLE, the second virtual application. The new role for ANSIBLE, e.g., via a target data structure, may serve as a target destination for translated files of the specified instruction. The computing device may store, to a template directory in the target data structure, one or more templates identified from the specified instructions (block206). For example, cross compiler104of the computing device102may scan the specified instructions from the first virtual application124for possible template candidates. For example, parser110may scan the source code of the specified instruction to recognize, e.g., through lexer112, tokens that identify templates. The cross compiler104may automatically enter instructions to store the identified templates into the target data structure133. When the instructions are executed, e.g., by runtime system108, the second virtual application126and/or its server130B may include (e.g., within database132B) the identified templates136B. In some aspects (e.g., where the first virtual application124is HELM and the second virtual application126is ANSIBLE), the cross compiler104may copy templates from the template directory of HELM (e.g., the HELM Chart directory) into the target structure of ANSIBLE (e.g., via the ANSIBLE Role's templates directory). The computing device may identify a plurality of syntax features used by the first virtual application to write the specified instructions (e.g., “first syntax features”). Each first syntax feature may define a respective variable. Syntax features, including the first syntax features, may include, but are not limited to, conditional statements, branch instructions, function statements, and the like. The cross compiler104of the computing device102may scan the specified instructions from the first virtual application124to identify the plurality of syntax features. For example, lexer112may scan the linear sequence of characters of the specified instruction into a linear sequence of tokens. The parser110may turn the linear sequence of tokens into a hierarchical syntax tree (e.g., an abstract syntax tree (AST)). The cross compiler104may resolve names and check types from the specified instruction, e.g., using the hierarchical syntax tree, to identify the plurality of syntax features. The computing device may derive, for each first syntax feature, a more precise definition for the respective variable (block210). In some aspects, the more precise definition for the respective variable may be derived by generating an abstract syntax tree. For example, the parser110of the cross compiler104may use the sequence of tokens identified from the specified instructions to generate an abstract syntax tree. 
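The following Python fragment is a minimal sketch of the lexing step just described: it splits a GOLANG-style template into literal text and “action” tokens and classifies each action by its leading keyword. The delimiter grammar and the token names are simplifications assumed for illustration; a real lexer for GOLANG templates would handle many more cases.

    # Minimal sketch of lexing a GOLANG-style template into classified tokens.
    import re

    ACTION_RE = re.compile(r"\{\{-?\s*(.*?)\s*-?\}\}", re.DOTALL)

    def lex(template: str):
        tokens, pos = [], 0
        for match in ACTION_RE.finditer(template):
            if match.start() > pos:
                tokens.append(("TEXT", template[pos:match.start()]))
            body = match.group(1)
            keyword = body.split()[0] if body.split() else ""
            kind = keyword.upper() if keyword in ("if", "range", "with", "end", "else") else "EXPR"
            tokens.append((kind, body))
            pos = match.end()
        if pos < len(template):
            tokens.append(("TEXT", template[pos:]))
        return tokens

    sample = "{{ if .Values.enabled }}enabled{{ end }}"
    print(lex(sample))
    # [('IF', 'if .Values.enabled'), ('TEXT', 'enabled'), ('END', 'end')]

A parser would then consume such a token sequence to build the hierarchical (abstract syntax tree) structure referred to above.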
The abstract syntax tree (or any other hierarchical framework) may be used to identify the respective variable being defined by a syntax feature (e.g., conditional statement, branch instruction, function statement, etc.). In some aspects, the cross compiler104or an associated tool (e.g., parser110, lexer112, etc.), after identifying a syntax feature, may scan the remainder of the specified instruction to determine how a respective variable is being defined. For example, the first syntax feature may define the respective variable ambiguously, such that, without understanding the context presented by the remainder of the specified instruction, the first syntax feature could lead to more than one definition of the respective variable. Based on the determination of how the respective variable is defined, the cross compiler104may use a library (e.g., library140B) of the second virtual application126to determine if the respective variable can be defined more precisely.FIG.3presents an example of deriving the more precise definitions in more detail. The computing device may generate second syntax features that define the respective variables more precisely than the first syntax features (block212). As previously discussed, after the cross compiler104identifies a first syntax feature defining a respective variable, the cross compiler104may scan the specified instruction to understand the context behind the syntax feature defining the respective variable, e.g., to ascertain a more precise definition of the respective variable. The first syntax feature, being coded in a programming language or structure of the first virtual application124, may not be as precise in defining the respective variable as a syntax feature of the programming language or structure used by the second virtual application (“second syntax feature”) in defining the respective variable. The cross compiler104may thus access a library (e.g., library140B) of the second virtual application126to determine if the respective variable can be defined more precisely using a second syntax feature (e.g., a second conditional statement, a second branch instruction, a second function statement, etc.). If so, the cross compiler104may, for each identified first syntax feature, generate a second syntax feature that defines the respective variable more precisely than the first syntax feature. Thus, the cross compiler104of the computing device may rewrite the specified instruction in the language or structure supported by the second virtual application, e.g., by translating the first syntax features into second syntax features that define respective variables more precisely.FIG.3presents an example of generating the second syntax features in more detail. As an example, if the first virtual application is HELM and if the second virtual application is ANSIBLE, deriving more precise definitions of the respective variable may involve determining more precise definitions of the respective variables involved in branch instructions. The branch instructions may include, but are not limited to, conditional statements, loops, “with” clauses, “end” clauses, “range” functions, and “if” statements. For example, deriving a more precise definition of an “end” clause used in the specified instruction that is received in GOLANG, the language used by HELM, may involve deriving the meaning of the “end” clause. In GOLANG templates, the “end” clause may often be overloaded, and may be used for the “if,” “range,” and “with” subtrees. 
Since these subtrees can be, and often are, heavily nested, an abstract syntax tree may be used to determine if the “end” keyword can be translated to an “endif” or an “endfor” clause used in JINJA2, a language supported by ANSIBLE, the second virtual application. Thus, the computing device may generate the second syntax feature “endif” or the second syntax feature “endfor,” depending on the more precise definitions of the variables involved, which may be determined via the abstract syntax tree. In another example involving HELM and ANSIBLE as the first and second virtual applications, the computing device may derive a more precise definition for a “range” function used in GOLANG, the language supported by HELM. The “range” function may be converted to a second syntax feature, such as a “for-in” or a “for-each” function in the JINJA2 language used in ANSIBLE. In order to determine whether to convert the “range” function to a “for-in” or a “for-each” function, the cross compiler104of the computing device102may analyze the specified instruction to determine whether the “range” function performs iteration over a GOLANG map or a GOLANG slice. A type resolution may be used to determine whether to use a “for-in” versus a “for-each” function. In another example involving HELM and ANSIBLE as the respective first and second virtual applications, the computing device may derive a more precise definition for an “if” statement used in GOLANG, the language used in HELM. GOLANG may often overload the “if” keyword such that “if x” can mean either “if x is true” or “if x is defined.” Thus, “if” keywords used in the specified instruction in the GOLANG language may be ambiguous. Since HELM charts may often store definitions of variables and other properties in templates (e.g., in a template such as “values.yaml” stored in database132A), the computing device may resolve this ambiguity by locating stored definitions of properties and variables in such templates (e.g., using a translation heuristic). If the HELM chart does not define the respective variable (e.g., “x”) in a template, such as “values.yaml,” then an “if” statement such as “if x” can be interpreted as “if x is defined.” If x is defined in the templates, such as in the “values.yaml” template, then the type can be deduced by obtaining the value of x from the template. If x is true or false, then the GOLANG conditional may be rendered, using the conditional logic, as “if x.” The computing device may render the specified instruction into the one or more templates, e.g., using the second virtual application (block214). For example, the cross compiler104may store the specified instruction, with first syntax features converted to the second syntax features, to one or more templates136B and/or target data structure133. The specified instruction may be expressed via the second syntax features defining the respective variables more precisely than the first syntax features. In some aspects (e.g., where the first virtual application124is HELM and the second virtual application126is ANSIBLE), the computing device102may render the specified instruction into the one or more templates by installing an ANSIBLE task capable of rendering the translated templates. 
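The following Python fragment is a sketch of the “end” disambiguation described above, assuming a simplified token stream such as the one produced by the lexing sketch earlier: a stack of open blocks determines whether each “end” becomes “endif” or “endfor.” Treating “with” blocks like “if” blocks here is a simplifying assumption made only for the example.

    # Sketch: resolve each overloaded "end" token using a stack of open blocks.
    def translate_ends(tokens):
        out, stack = [], []
        for kind, body in tokens:
            if kind in ("IF", "WITH"):          # "with" handling simplified to endif
                stack.append("endif")
                out.append((kind, body))
            elif kind == "RANGE":
                stack.append("endfor")
                out.append((kind, body))
            elif kind == "END":
                out.append(("END", stack.pop() if stack else "end"))
            else:
                out.append((kind, body))
        return out

    tokens = [("IF", "if .Values.enabled"), ("RANGE", "range .Values.items"),
              ("END", "end"), ("END", "end")]
    print(translate_ends(tokens))
    # [('IF', ...), ('RANGE', ...), ('END', 'endfor'), ('END', 'endif')]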
FIG.3illustrates a flow diagram of another example process300for compiling a specified instruction from a first virtual application to a second virtual application according to an example embodiment of the present disclosure. The process300may be performed by one or more processors of a computing device used to receive a specified instruction, access libraries, create and update data structures, and compile (e.g., as in processor114of computing device102). Although the example process300is described with reference to the flow diagram illustrated inFIG.3, it will be appreciated that many other methods of performing the acts associated with the process300may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described may be optional. For simplicity, examples of implementing one or more blocks of process300may be explained in relation to having HELM as the first virtual application and ANSIBLE as the second virtual application. Process300may begin by utilizing a library of the second virtual application to initialize a target destination for translated files (block302). For example, the cross compiler104of computing device102may utilize a library of ANSIBLE, such as ANSIBLE-GALAXY, to initialize a new role for ANSIBLE as a target destination for translated files of the specified instruction. The computing device may then copy templates from a template directory in the first virtual application to the template directory in the second virtual application (block304). For example, cross compiler104may copy GOLANG templates as-is from a directory associated with the HELM Chart (e.g., template directory134A) into the template directory associated with ANSIBLE (e.g., template directory134B). The computing device may copy templated properties from the first virtual application to the second virtual application (block306). As used herein, a templated property may involve variables that are defined in the specified instruction in one or more templates, e.g., in the programming language used by a virtual application. For example, the computing device may copy the contents of a HELM Chart's “values.yaml” template to ANSIBLE's “defaults/main.yaml” template. The HELM Chart's “values.yaml” template may define the templated properties for a HELM Chart, while “defaults/main.yaml” may be used to define properties for an ANSIBLE role. For each of the copied templated properties, the computing device may determine whether a templated property is a self-reference (block308). As used herein, a “self-reference” may refer to a feature used in HELM Charts, which allows users to define a property in the template “values.yaml” and then reference the property in a subsequent configuration. ANSIBLE may not include an equivalent feature, and thus the computing device102may perform one or more steps described herein to resolve issues of “self-reference” in the specified instruction when converting the specified instruction to a form acceptable by the second virtual application126. If a given templated property is a self-reference, the computing device may determine if the value of the templated property can be deduced (block310). For example, the value of the templated property can be deduced by determining the relation of the templated property to other properties or variables in the specified instruction that may be defined more precisely. If the value can be deduced, the computing device may deduce the value of the templated property (block312), and may enter the value of the templated property (block318). 
If the value of the templated property cannot be deduced, the computing device may prompt for user input (block314), and check if user input has been received (block316). If the user has input a value for the templated property, the computing device may enter the value of the templated property (block318). If the user input has not been received, the computing device may remove the self-reference (block320). After each self-referenced templated property is processed through blocks310through320, or if the computing device determines that the templated properties do not include any self-references, the computing device may begin translation of syntax features from templates of the first virtual application into the templates of the second virtual application (block322). For example, the computing device may translate one or more branch instructions in the specified instructions (block324), e.g., to a second syntax feature. The branch instructions may include, but are not limited to, conditional statements, loops, “with” clauses, “end” clauses, “range” functions, and “if” statements. In some aspects, the cross compiler104of the computing device102may convert the syntax of branch bodies to utilize the notations “{% . . . %}” instead of “{{ . . . }}” in the respective source code of the specified instruction. Translating an “end” clause to a second syntax feature may involve deriving a more precise meaning of the “end” clause from the specified instruction. In GOLANG templates, the “end” clause may often be overloaded, and may be used for the “if,” “range,” and “with” subtrees. Since these subtrees can be, and often are, heavily nested, an abstract syntax tree may be used to determine if the “end” keyword can be translated to an “endif” or an “endfor” clause used in JINJA2, a language supported by ANSIBLE, the second virtual application. Thus, the computing device may generate the second syntax feature “endif” or the second syntax feature “endfor,” depending on the more precise definitions of the variables involved, which may be determined via the abstract syntax tree. The computing device may derive more precise meanings of one or more conditional statements in the specified instruction (block326), and then translate the conditional statements (block328). For example, the computing device may derive a more precise definition for a “range” function used in GOLANG, the language used in HELM. The “range” function may be converted to a second syntax feature, such as a “for-in” or a “for-each” function in the JINJA2 language used in ANSIBLE. In order to determine whether to convert the “range” function to a “for-in” or a “for-each” function, the cross compiler104of the computing device102may analyze the specified instruction to determine whether the “range” function performs iteration over a GOLANG map or a GOLANG slice. A type resolution may be used to determine whether to use a “for-in” versus a “for-each” function. Furthermore, the computing device may derive a more precise definition for an “if” statement used in GOLANG, the language used in HELM. GOLANG may often overload the “if” keyword such that “if x” can mean either “if x is true” or “if x is defined.” Thus, “if” keywords used in the specified instruction in the GOLANG language may be ambiguous. 
Since HELM charts may often store definitions of variables and other properties in templates (e.g., in a template such as “values.yaml” stored in database132A), the computing device may resolve this ambiguity by locating stored definitions of properties and variables in such templates (e.g., using a translation heuristic). If the HELM chart does not define the respective variable (e.g., “x”) in a template, such as “values.yaml,” then an “if” statement such as “if x” can be interpreted as “if x is defined.” If x is defined in the templates, such as in the “values.yaml” template, then the type can be deduced by obtaining the value of x from the template. If x is true or false, then the GOLANG conditional may be rendered, using the conditional logic, as “if x.” The computing device may identify and unwrap Boolean compositions in the specified instructions (block330). HELM templates written in GOLANG may often use prefix syntax for “and” and “or”. For example, a sample GOLANG code in a HELM Template using Boolean compositions may recite “{{if and condition1 condition2}}”. However, JINJA2, the language used by ANSIBLE, may use Boolean compositions in an ordered syntax. Thus, the above recited statement may be rewritten in JINJA2 as “{% if condition1 and condition2 %}”. Additionally, Boolean compositions in ANSIBLE may be heavily nested. The computing device may thus identify and unwrap Boolean compositions by generating Abstract Tree Nodes from the specified instructions received from the first virtual application and reorganizing the Abstract Tree Nodes to adhere to JINJA2 syntax. The computing device may create filters for function statements (block332). For example, the cross compiler104of computing device102may create ANSIBLE Filter replacements for HELM's function statements written in GOLANG. In some aspects, filters may be created for function statements that have string outputs by using an implementation of a generic ANSIBLE Filter that can invoke the cross compiler104at runtime, using the name of the GOLANG template having the function statement and any arguments. For example, cross compiler104may use the invocation “{{ “nginx.imagePullSecrets” | filter(“indent”, 6) }}”. The rendered result may be returned and may be identical to a “text/template” function call. Also or alternatively, non-string objects may be returned via the function statement, e.g., using a “toMap” function. The computing device may determine whether any of the remaining syntax features from the specified instruction are untranslatable (block334). If the syntax features have been successfully translated, and/or if there are no remaining syntax features that are untranslatable, the computing device may render the execution of the translated templates (block338). If there are remaining syntax features that are untranslatable, the computing device102may prompt and receive user input (block336), e.g., to translate the syntax features so that the specified instruction is in a language and/or structure that can be supported by the second virtual application126. Afterwards, the computing device may render the execution of the translated templates (block338). 
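The following Python fragment is a sketch of the translation heuristic described above for the overloaded GOLANG “if x”: the chart's “values.yaml” is consulted to decide between a definedness test and a plain Boolean test. It assumes the PyYAML package is available, and the rule applied to non-Boolean values is a simplification adopted only for the example.

    # Sketch of disambiguating "if x" using the definitions stored in values.yaml.
    import yaml

    def translate_if(variable: str, values_yaml_text: str) -> str:
        values = yaml.safe_load(values_yaml_text) or {}
        if variable not in values:
            # Not defined in values.yaml: interpret "if x" as "if x is defined".
            return f"{{% if {variable} is defined %}}"
        if isinstance(values[variable], bool):
            # Defined as a Boolean: keep the plain conditional.
            return f"{{% if {variable} %}}"
        # Defined with some other type: fall back to a definedness test (simplified rule).
        return f"{{% if {variable} is defined %}}"

    values_yaml = "enabled: true\nreplicas: 3\n"
    print(translate_if("enabled", values_yaml))   # {% if enabled %}
    print(translate_if("missing", values_yaml))   # {% if missing is defined %}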
FIG.4illustrates a flow diagram showing a method400for compiling a specified instruction from a first virtual application to a second virtual application, incorporated in an example software development operation, according to an example embodiment of the present disclosure. Software development operations may typically comprise several phases having several functions and/or sub-operations. The software development operations may be performed and/or facilitated through one or more virtual applications. As shown inFIG.4, phase I may involve automated application provisioning and configuration management (block402). Phase II may include performing patching and minor version upgrades to the software being developed (block404). Phase III may include performance of application lifecycles and storage lifecycles (block406). Phase IV may involve an analysis of metrics, alerts, log processing, and workload associated with the application (block408). Phase V may involve horizontal and/or vertical scaling, auto configuration tuning, abnormality detection, and scheduling tuning (block410). A user may desire to use one virtual application (e.g., the first virtual application124) for certain phases of the software development operation but may desire to use another virtual application (e.g., the second virtual application126) for other phases of the software development. For example, while the first virtual application may be better at basic installation (e.g., automated application provisioning and configuration management) and seamless upgrades (e.g., performing patching and minor version upgrades), the second virtual application may be better at performing full lifecycle analysis (e.g., application lifecycle, storage lifecycle, etc.), providing deep insights about an application (e.g., metrics, alerts, log processing, and workload analysis), and performing autopilot functions (e.g., horizontal and/or vertical scaling, auto configuration tuning, abnormality detection, and scheduling tuning). Also or alternatively, the second virtual application may support languages or structures that are more conducive to the performance of certain functions of the software development operation. Often, different virtual applications (e.g., HELM, ANSIBLE, etc.), even if they may be supported by the same container-orchestration or software development platform (e.g., KUBERNETES), may not support the same programming languages or structures. For example, a first virtual application412, such as HELM, may use a first language (e.g., GOLANG) to perform software development operations in phases I and II, whereas a second virtual application416, such as ANSIBLE, may use a second language (e.g., JINJA2 and YAML) to conduct various software development operations. A cross compiler system414may facilitate the transition of software being developed (e.g., the specified instruction) from having the first virtual application perform software development operations to having the second virtual application perform and/or continue the next phase of software development operations. The cross compiler system414may include subcomponents of, and perform the functions of, cross compiler104, as previously described. FIG.5illustrates a block diagram of an example computer system500for compiling a specified instruction from a first virtual application to a second virtual application, according to an example embodiment of the present disclosure. The example computer system500may include a computing device502; a first server522hosting a first virtual application510(e.g., which may be running on the computing device502); and a second server536hosting a second virtual application512(e.g., which may be running on the computing device502). 
The computing device502may include memory506, and a processor504in communication with the memory506. In some aspects, the computing device502, first server522, and second server536may share similar subcomponents and perform similar functions as computing device102, first virtual application server130A, and second virtual application server130B, respectively. The memory506may store instructions508that, when executed by the processor504, may cause the processor504to receive, from the first server522, a request516to compile a specified instruction518(e.g., a software, application, or program that may be undergoing one or more application lifecycle operations) for the second virtual application512. The request516may include an identifier520of the second server536. The processor504may use the identifier520to cause the second server536to create a target data structure538within the second virtual application512(e.g., hosted and/or stored at the second server536). One or more template(s)528identified from the specified instruction518may be stored to a template directory540in the target data structure538. Furthermore, the instructions508, when executed, may cause the processor504to identify, from the specified instruction518, a plurality of first syntax features530. Each first syntax feature530may define a respective variable532. The instructions508, when executed, may cause the processor504to determine, using the specified instruction518and for each first syntax feature530, a modified definition548for the respective variable532. Based on the modified definitions548, second syntax features544may be generated (e.g., using the library550of the second virtual application512hosted or managed at the second server536). The second syntax features544may define the respective variables532more precisely. The instructions508, when executed, may cause the processor504to render the specified instruction518into the one or more templates542(e.g., using the second virtual application512). The specified instruction518may thus be expressed via the second syntax features544and their respective variables532. In some aspects, the first virtual application510and the second virtual application512may be associated with a first programming language and a second programming language, respectively. It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components. These components may be provided as a series of computer instructions on any conventional computer readable medium or machine-readable medium, including volatile or non-volatile memory, such as RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be provided as software or firmware, and/or may be implemented in whole or in part in hardware components such as ASICs, FPGAs, DSPs or any other similar devices. The instructions may be configured to be executed by one or more processors, which when executing the series of computer instructions, performs or facilitates the performance of all or part of the disclosed methods and procedures. It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. 
It is therefore intended that such changes and modifications be covered by the appended claims.
45,011
11861390
DETAILED DESCRIPTION Embodiments are described for transparent disk caching for virtual machines and applications. In one embodiment, a hard disk drive (HDD), solid state drive (SSD), or a similar storage device is connected to a host computer system. This storage device or group of storage devices may be referred to herein generally as a “disk.” The disk may be connected to the host computer system in a variety of different ways, including, for example, a hardware interface, such as a universal serial bus (USB) interface, over a network, such as a storage area network (SAN) or the Internet, or through some other connection. The host computer system may include host applications (applications managed by the host operating system) and/or virtual machines (guest operating systems and guest applications) managed by a hypervisor that may be part of the host operating system, run on top of the host operating system, or run instead of the host operating system. Host applications or virtual machines of the host computer system may write data to the disk and read data from the disk during their normal course of operations. In one embodiment, the host operating system on the host computer system may control writes of host applications to the disk. In another embodiment, in which the host computer system includes virtual machines, the virtual machines do not have direct access to the disk, but rather the hypervisor provides disk virtualization. In this embodiment, a virtual disk, which is represented on a physical disk by a file, a linked set of files, or a similar structure, is presented to the virtual machine. In the event that a physical disk becomes disconnected or otherwise unavailable while data is being written to the disk, such as if the disk becomes unplugged, the disk runs out of available storage space, or the network connection goes down, the data being written may be lost. Accordingly, in one embodiment, the host operating system or the hypervisor maintains a cache which buffers writes from host applications or virtual machines before the data is written to the disk. The host operating system or the hypervisor can manage the cache so that cache cleaning is delayed relative to the corresponding data transfer from the cache to the disk. Various data structures can be used to implement the cache, such as, for example, a circular data buffer, ring buffer or other first-in, first-out structure. In one embodiment, upon receiving a write instruction, the host operating system or the hypervisor stores the received data in the cache, and subsequently writes the data to the disk. The data is only cleared from the buffer after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache. In the event of a failure in writing to the disk, or if the disk becomes disconnected or otherwise inaccessible, for example, the host computer system or the hypervisor can detect such an occurrence. When the disk is disconnected from the system, either immediately or after a delay period determined by the network protocol, the host operating system or the hypervisor may generate a failure signal. In one embodiment, the maximum delay period is determined by the size of the circular buffer so that there is always a guarantee of preservation of data which has not yet been recorded to the disk. 
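The following Python fragment is a minimal sketch, not the claimed implementation, of the fixed-size circular write cache described above: when the buffer is full, the newest pending write overwrites the oldest slot, which bounds how long un-flushed data can be preserved. The class and method names are assumptions for illustration.

    # Minimal sketch of a fixed-size circular cache of pending writes.
    from collections import deque

    class CircularWriteCache:
        def __init__(self, capacity: int):
            # A deque with maxlen drops the oldest entry when a new one is appended.
            self._entries = deque(maxlen=capacity)

        def buffer_write(self, offset: int, data: bytes) -> None:
            self._entries.append((offset, data))

        def pending(self):
            """Writes still held in the cache, oldest first."""
            return list(self._entries)

    cache = CircularWriteCache(capacity=3)
    for i in range(5):
        cache.buffer_write(i * 512, b"block-%d" % i)
    print(cache.pending())  # only the three most recent writes remain buffered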
In one embodiment, in response to the failure signal, the host operating system or the hypervisor suspends host applications or virtual machines running on the host operating system and prompts the user to recover the disk. During this time, the data stored in the cache is preserved, so that any uncompleted write operations can be completed once disk accessibility is restored. After the restoration of the disk functionality, the host operating system or the hypervisor may use the data in the cache to assess the scale of the crash, append data from the cache to the disk to complete the write operations, and attempt to resume the host applications or the virtual machines. After the host applications or the virtual machines are resumed, the host applications or the virtual machines may continue operations to utilize the data on disk. Accordingly, aspects of the present disclosure prevent the permanent and irreparable loss of data not written to the disk at the time the disk was disabled, which would occur in a conventional system not utilizing the transparent disk caching techniques described herein. For example, when the disk contains a file system to store files created by a host application, without a disk cache, there may be no other way to restore the status of the file system at the moment of disk failure. Similarly, when a virtual machine is started from a disk and the disk is disconnected, the virtual machine will go down because the virtual machine data on the disk is lost and there may be no way to restore the functionality of the virtual machine without the disk cache. Additional details of the transparent disk caching process are described below. FIG.1is a block diagram illustrating a virtualized computing environment100in which embodiments of the present disclosure may be implemented. In one embodiment, host computer system110may include one or more interconnected nodes. A “node” as used herein refers to a group of components120including one or more processors122and one or more associated memory devices124locally accessible by the processors in the group. In one embodiment, the memory devices124serve as a separate hardware cache. The physical processor122may be further communicatively coupled to other memory devices and/or input/output (I/O) devices of the host computer system110. A “physical processor,” “processor,” or “processing device” herein refers to a device capable of executing instructions encoding arithmetic, logical, or I/O operations. In one embodiment, processor122may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers. Furthermore, processor122may be a single core processor which is typically capable of executing one instruction at a time (or process a single pipeline of instructions), or a multi-core processor which may simultaneously execute multiple instructions. In one embodiment, processor122may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). A processor may also be referred to as a central processing unit (CPU). “Memory device” herein refers to a volatile or non-volatile memory device, such as RAM, ROM, EEPROM, or any other device capable of storing data. 
“I/O device” herein refers to a device capable of providing an interface between one or more processor pins and an external device capable of inputting and/or outputting binary data. In one embodiment, host computer system110may run multiple virtual machines140,142by executing a software layer, often referred to as a “hypervisor”132above the hardware120and below the virtual machines140,142, as schematically shown inFIG.1. In one embodiment, the hypervisor132may be a component of a host operating system130executed by the host computer system110. Alternatively, the hypervisor132may be provided by an application running under the host operating system130or may run directly on the host computer system110without an operating system beneath it. The hypervisor132may abstract the physical layer, including processors, memory, and I/O devices, and present this abstraction to virtual machines140,142as virtual devices, including virtual processors, virtual memory, and virtual I/O devices. In one embodiment, the hypervisor132may include transparent disk caching manager133configured to control cache134. In one embodiment, cache134buffers all writes from virtual machines140or142before data is written to one of underlying storage domains152or154. Transparent disk caching manager133can manage the cache134so that cache cleaning is delayed relative to the corresponding data transfer from the cache134to the disk. Various data structures can be used to implement the cache134, such as for example, a circular data buffer, ring buffer or other first-in, first-out structure. In one embodiment, upon receiving a write instruction, the transparent disk caching manager133stores the received data in the cache134, and subsequently writes the data to one of storage domains152or154. The data is only cleared from cache134after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache134. Each of virtual machines140,142may execute a guest operating system which may utilize the underlying virtual devices, each of which may map to a device of the host computer system110(e.g., a network interface device, a CD-ROM drive, etc.). One or more applications may be running on a virtual machine140,142under the guest operating system. Each of virtual machines140,142may be associated with one or more virtual processors. Processor virtualization may be implemented by the hypervisor132scheduling time slots on physical processor122such that from the perspective of the guest operating system those time slots are scheduled on a virtual processor. Memory virtualization may be implemented by a page table (PT) which is a memory structure translating virtual memory addresses to physical memory addresses. In one embodiment, host computer system110is coupled to one or more storage domains152,154. Each of the storage domains152,154may store virtual machine image data153,155for virtual machines140,142. In one embodiment, one or both of storage domains152,154may employ file-based storage, in which case the disk images may be provided by respective files. In another embodiment, one or both of storage domains152,154may employ block-based storage, in which case the disk images may be provided by respective logical volumes. In one embodiment, storage domain152is directly connected to host computer system110over a hardware interface162, such as a universal serial bus (USB) interface. 
In one embodiment, storage domain154is connected to host computer system110over a network164. The network164may include, for example, the Internet, intranets, extranets, wide area networks (WANs), local area networks (LANs), wired networks, wireless networks, or other suitable networks, etc., or any combination of two or more such networks. For example, such networks may comprise satellite networks, cable networks, Ethernet networks, and other types of networks. Either or both of storage domains152,154may be embodied on one or more mass storage devices which can include, for example, flash memory, solid state drives (SSDs), magnetic or optical disks, or tape drives; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); or any other type of storage medium. FIG.2is a block diagram illustrating a computing environment200in which embodiments of the present disclosure may be implemented. In one embodiment, host computer system210may include one or more interconnected nodes including hardware components220made up of one or more processors222and one or more associated memory devices224locally accessible by the processor222. The physical processor222may be further communicatively coupled to other memory devices and/or input/output (I/O) devices of the host computer system210. In one embodiment, host computer system210may include an operating system (host operating system)230and may run one or more applications (host applications)240,242. Operating system230may include a set of programs that manage hardware components220of host computer system210and provide common services for applications, such as applications240,242running on computer system210. In one embodiment, operating system230may include a kernel to control low-level processes, such as how memory is read and written, the order in which processes are executed, how information is received and sent by host computer system210, how to control any peripheral devices (e.g., monitor, keyboard, mouse, touch screen, scanner, etc.), and how to interpret information received over networks, such as network264. Operating system230may additionally include a user interface to interact with a user of host computer system210, allowing the user to control and use applications240,242, for example. In addition, operating system230may include application programming interfaces (APIs) to provide services and code libraries that let application developers write modular code reusing defined programming sequences in user space libraries or in the operating system230itself. In one embodiment, the operating system230may include transparent disk caching manager233configured to control cache234. In one embodiment, cache234buffers all writes from applications240or242before the data is written to one of storage domains252or254. Transparent disk caching manager233can manage the cache234so that cache cleaning is delayed relative to the corresponding data transfer from the cache234to disk. Various data structures can be used to implement the cache234, such as, for example, a circular data buffer, ring buffer or other first-in, first-out structure. In one embodiment, upon receiving a write instruction, the transparent disk caching manager233stores the received data in the cache234, and subsequently writes the data to one of storage domains252or254. 
The data is only cleared from cache234after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache234. In one embodiment, host computer system210is coupled to one or more storage domains252,254. Each of the storage domains252,254may store corresponding application data253and255on behalf of applications240and242. In one embodiment, one or both of storage domains252,254may employ file-based storage, in which case the disk images may be provided by respective files. In another embodiment, one or both of storage domains252,254may employ block-based storage, in which case the disk images may be provided by respective logical volumes. In one embodiment, storage domain252is directly connected to host computer system210over a hardware interface262, such as a universal serial bus (USB) interface. In one embodiment, storage domain254is connected to host computer system210over a network264. The network264may include, for example, the Internet, intranets, extranets, wide area networks (WANs), local area networks (LANs), wired networks, wireless networks, or other suitable networks, etc., or any combination of two or more such networks. For example, such networks may comprise satellite networks, cable networks, Ethernet networks, and other types of networks. Either or both of storage domains252,254may be embodied on one or more mass storage devices which can include, for example, flash memory, solid state drives (SSDs), magnetic or optical disks, or tape drives; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); or any other type of storage medium. FIG.3is a block diagram illustrating a transparent disk caching manager, according to an embodiment. In one embodiment, transparent disk caching manager133,233includes virtual machine/application interface372, cache manager374, storage device interface376, and user interface module378. This arrangement of modules and components may be a logical separation, and in other embodiments, these modules or other components can be combined together or separated into further components. In one embodiment, disk cache134,234is connected to transparent disk caching manager133,233and includes a circular data buffer. In one embodiment, host computer system110,210may include both transparent disk caching manager133,233and cache134,234. In another embodiment, cache134,234may be external to host computer system110,210and may be connected to host computer system110,210over a network or other connection. In other embodiments, transparent disk caching manager133,233may include different and/or additional components which are not shown to simplify the description. In one embodiment, virtual machine/application interface372is responsible for communication and interaction with either virtual machines140,142or applications240,242on host computer system110,210. For example, virtual machine/application interface372may receive an instruction to write data to a storage device (e.g., part of storage domains152,154,252,254) from one of virtual machines140,142or applications240,242. The instruction may be received during the normal course of operation of virtual machines140,142or applications240,242and may relate to user data, system data, virtual machine image data, or other data being committed to the underlying physical storage devices in one of storage domains152,154,252,254. 
Virtual machine/application interface372may further interact with virtual machines140,142or applications240,242to, for example, suspend execution of virtual machines140,142in response to detecting that the storage device is disconnected from host computer system110,210and to resume execution of virtual machines140,142in response to determining that the storage device is reconnected. In one embodiment, cache manager374manages and controls disk cache134,234on host computer system110,210. For example, in response to virtual machine/application interface372receiving the instruction to write data to a storage device, cache manager374may store a copy of the data in cache134,234. In one embodiment, cache manager374buffers all writes from virtual machines140,142and applications240,242in cache134,234before data is written to one of underlying storage domains152,154,252,254. Cache manager374can manage the cache134,234so that the data remains stored in cache134,234until the data has been committed to disk. The data is only cleared from cache134,234after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache134,234. Various data structures can be used to implement the cache134,234, such as, for example, a circular data buffer, ring buffer or other first-in, first-out structure. A circular buffer is useful because it does not need to have its elements shuffled around when one is consumed and is a good implementation strategy for a queue that has a fixed maximum size. When the circular buffer is full (e.g., written with entries A-E) and a subsequent write is performed, cache manager374can overwrite the oldest data (e.g., entry A) and continue in a circular fashion. In one embodiment, after an interruption to a write operation, cache manager374can compare the data committed to the storage device to what is stored in cache134,234to determine where in the data to resume the write operation. In one embodiment, during this comparison, cache manager374can disable the hardware cache in memory124,224, to ensure that the comparison of data in cache134,234is made with the data committed to the storage device and not whatever data is stored in the hardware cache. In one embodiment, storage device interface376is responsible for communication and interaction with the storage devices of storage domains152,154,252,254. For example, once cache manager374stores a copy of the data in cache134,234, storage device interface376may initiate a write operation to write the data from the cache134,234to the storage device. In one embodiment, storage domains152,252are directly connected to host computer system110,210over a hardware interface162,262, such as a universal serial bus (USB) interface. In one embodiment, storage domains154,254are connected to host computer system110,210over a network164,264. During execution of the write operation, there may be a failure in writing the data to the storage device, such as if the USB interface162,262becomes disconnected, the network164,264goes down, or if the storage device becomes otherwise inaccessible. Storage device interface376may detect that the storage device is disconnected from host computer system110,210or is otherwise unavailable and may, for example, instruct virtual machine/application interface372to suspend execution of the virtual machine140,142(if applicable). 
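The resume-point comparison performed by cache manager374, described above, might look like the following Python sketch. It is illustrative only; the function name and the shape of the pending-write entries are assumptions, and a real implementation would additionally account for the hardware cache being disabled during the comparison.

    # Sketch: after an interrupted write, compare each buffered entry with what
    # actually reached the disk and report where the write should resume.
    def find_resume_index(pending, disk_path):
        """pending: list of (offset, data) tuples in the order they were buffered."""
        with open(disk_path, "rb") as f:
            for index, (offset, data) in enumerate(pending):
                f.seek(offset)
                if f.read(len(data)) != data:
                    return index      # first write that was not fully committed
        return len(pending)           # everything in the buffer reached the disk

For example, find_resume_index(cache.pending(), "/mnt/usb/disk.img") would return the position within the buffered writes from which the write operation can safely continue after the storage device is reconnected (the path shown is hypothetical).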
In one embodiment, storage device interface376may notify user interface module378of the disconnect, so that user interface module378can instruct the user to initiate a repair. Storage device interface376may further determine that the storage device is reconnected to host computer system110,210and can resume the write operation to continue writing data from cache134,234to the reconnected storage device. In one embodiment, when the write operation is initiated, storage device interface376creates a file on the storage device, which is assigned a file handle, and begins writing data to disk. If the storage device is disconnected during the write operation, the handle gets lost and when the storage device is reconnected, all of the files are assigned different handles. As a result, the virtual machine or application will not be able to find the right files into which it can continue writing data. In one embodiment, cache manager374maintains an indication of the file handles assigned at the start of the write operation in cache134,234. This mapping of handles to files can be used to identify the corresponding files after the storage device is reconnected by pointing the newly assigned handles to the original file handles assigned pre-failure. In one embodiment, the cache134,234may use a “virtual handle” to which both the old and new handles can be matched. In one embodiment, upon receiving notification from storage device interface376that the storage device has been disconnected, user interface module378may present a notification to the user on a display of the host computer system110,210. The notification may include the phrase “Disk disconnected, re-attach disk to continue writing file” or other similar language. In one embodiment, additional instructions to write data to the storage devices may be received while the storage devices are disconnected from host computer system110,210. When this occurs, transparent disk caching manager133,233may continue receiving the instructions and may store the additional data in disk cache134,234. Transparent disk caching manager133,233may, however, refrain from initiating any additional write operations to the storage devices while the storage devices are disconnected. Instead, the additional data may remain in disk cache134,234until the storage devices are reconnected, at which point, transparent disk caching manager133,233may initiate a new write operation to write the data from cache to disk. FIG.4is a flow diagram illustrating a transparent disk caching method for write requests, according to an embodiment. The method400may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processor to perform hardware simulation), firmware, or a combination thereof. The processing logic is configured to allow a host computer system to cache data received from a virtual machine or application in a disk cache before the data is committed to disk, so that the data can be recovered in the event of an interruption during the write process. In one embodiment, method400may be performed by transparent disk caching manager133,233, as shown inFIGS.1-3. Referring toFIG.4, at block405, method400receives an instruction to write data to a storage device coupled to the host computer system110,210. 
In one embodiment, virtual machine/application interface372may receive an instruction to write data to a storage device (e.g., part of storage domains152,154,252,254) from one of virtual machines140,142or applications240,242. The instruction may be received during the normal course of operation of virtual machines140,142or applications240,242and may relate to user data, system data, virtual machine image data, or other data being committed to the underlying physical storage devices in one of storage domains152,154,252,254. At block410, method400stores a copy of the data in a cache134,234of the host computer system110,210. In one embodiment, cache manager374may store a copy of the data in cache134,234before the data is written to one of underlying storage domains152,154,252,254. Cache manager374can manage the cache134,234so that the data remains stored in cache134,234until the data has been committed to disk. The data is only cleared from cache134,234after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache134,234. At block415, method400writes the data from cache134,234to the storage device. In one embodiment, storage device interface376may initiate a write operation to write the data from the cache134,234to the storage device, in response to receiving the instruction at block405. In one embodiment, the data is only written to disk after it is stored in cache134,234to ensure that the data is not lost in the event of an interruption during the write operation. At block420, method400determines if the write to disk was successful, if the cache134,234is full or if a period of time has passed since the write operation was performed. In one embodiment, storage device interface376receives an acknowledgement message or other confirmation from the storage device to indicate that the data was successfully committed to disk. Upon receiving this acknowledgment, storage device interface376can determine that the data in cache134,234is no longer needed. Since cache134,234may be implemented as a circular buffer, in one embodiment when the buffer becomes full, storage device interface376may evict certain data or overwrite that data with new data. In one embodiment, it is the oldest data in the cache134,234which is evicted, so it is likely that this data was successfully committed to the disk before it is removed from the cache. In another embodiment, storage device interface376uses a timer to measure the age of data in the cache, thereby ensuring that the data is maintained in the cache for at least a minimum period of time before it is evicted. If none of these conditions have been met, at block425, method400maintains the copy of the data in cache134,234. If at block420however, method400determines that at least one of the conditions has been met, at block430, method400clears the copy of the data from cache134,234to make space available to store data corresponding to subsequent write operations. In one embodiment, cache manager374implements a time delay after determining that the data was successfully written to disk and before clearing data from cache134,234. This period of time delay can be used to check data integrity. When data is written to the disk, there still is no guarantee that all of the data was written correctly, so cache manager374may re-read the data and compare it to cached data. This verification may occur during the time delay period. 
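The clearing decision at blocks420-430can be illustrated with a short Python sketch. The sketch is illustrative only; the threshold value and the helper names (may_clear_entry, read_back_from_disk) are hypothetical and the entry layout is an assumption, not the structure used by the described embodiments.

```python
import time

# Hypothetical threshold; the description only refers to "a period of time".
MAX_AGE_SECONDS = 60.0

def may_clear_entry(entry, cache_is_full, read_back_from_disk, now=None):
    """Return True when a cached copy is no longer needed (block 420/430 logic).

    entry: dict with 'data', 'committed' (device acknowledged the write) and
           'written_at' (timestamp when the write was issued).
    read_back_from_disk: callable returning the bytes actually stored on disk,
                         used for the optional integrity check before clearing.
    """
    now = now if now is not None else time.monotonic()
    aged_out = (now - entry["written_at"]) >= MAX_AGE_SECONDS

    if not (entry["committed"] or cache_is_full or aged_out):
        return False  # keep the copy in the cache (block 425)

    if entry["committed"]:
        # Optional verification during the clearing delay: re-read the data
        # from disk and compare it with the cached copy before discarding it.
        return read_back_from_disk() == entry["data"]

    # Cache full or timer expired: the oldest entry is reclaimed regardless.
    return True
```

The re-read comparison models the time-delay verification mentioned above, where the cached copy is only discarded once the on-disk data has been checked against it.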
FIG.5is a flow diagram illustrating a transparent disk caching method for virtual machines, according to an embodiment. The method500may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processor to perform hardware simulation), firmware, or a combination thereof. The processing logic is configured to allow a host computer system to cache data received from a virtual machine in a disk cache before the data is committed to disk, so that the data can be recovered in the event of an interruption during the write process. In one embodiment, method500may be performed by transparent disk caching manager133as shown inFIGS.1and3. Referring toFIG.5, at block505, method500receives an instruction to write data to a storage device coupled to the host computer system110. In one embodiment, virtual machine/application interface372may receive an instruction to write data to a storage device (e.g., part of storage domains152,154) from one of virtual machines140,142. The instruction may be received during the normal course of operation of virtual machines140,142and may relate to user data, system data, virtual machine image data, or other data being committed to the underlying physical storage devices in one of storage domains152,154. At block510, method500stores a copy of the data in a cache134of the host computer system110. In one embodiment, cache manager374may store a copy of the data in cache134before the data is written to one of underlying storage domains152,154. Cache manager374can manage the cache134so that the data remains stored in cache134until the data has been committed to disk. The data is only cleared from cache134after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache134. At block515, method500writes the data from cache134to the storage device. In one embodiment, storage device interface376may initiate a write operation to write the data from the cache134to the storage device, in response to receiving the instruction at block505. In one embodiment, the data is only written to disk after it is stored in cache134to ensure that the data is not lost in the event of an interruption during the write operation. At block520, method500detects that the storage device is disconnected from the host computer system110during execution of the write operation. In one embodiment, storage domain152is directly connected to host computer system110over a USB interface162and storage domain154is connected to host computer system110over a network164. During execution of the write operation, there may be a failure in writing the data to the storage device, such as if the USB interface162becomes disconnected, the network164goes down or if the storage device becomes otherwise inaccessible. In one embodiment, storage device interface376may detect that the storage device is disconnected from host computer system110or is otherwise unavailable. 
For example, a USB driver on host computer system110may detect that the USB cable has been unplugged or that power has been lost to the USB connected storage device and may provide a notification of this event to storage device interface376. In another embodiment, a network driver in host computer system110may monitor the status of a network connection and notify storage device interface376when the connection to network164(and therefore to storage domain154) is lost. At block525, method500pauses the write operation and suspends execution of virtual machines140,142in response to detecting that the storage device is disconnected at block520. In another embodiment, method500may suspend execution of a process, running either on the virtual machine or on the host, which initiated the write operation. In one embodiment, virtual machine/application interface372may suspend execution of the virtual machine140,142. Suspending a virtual machine may be similar to putting a real computer into a sleep mode. In one embodiment, to suspend virtual machines140,142, virtual machine/application interface372may save a current state of the virtual machines140,142(including the state of all applications and processes running in the virtual machine) to a special file in memory124of host computer system110. When the suspended virtual machine is resumed, it may continue operating from the point at which it was suspended. In another embodiment, virtual machine/application interface372may instead pause virtual machines140,142by temporarily releasing the resources, such as memory and processor, currently used by these virtual machines. The released resources can then be used by the host computer system110and its applications or by other virtual machines running on the host computer system110. At block530, method500determines that the storage device is reconnected to the host computer system110. In one embodiment, storage device interface376may determine that the storage device is reconnected to host computer system110. In one embodiment, the USB driver on host computer system110may detect that the USB cable has been plugged back in or that power has been restored to the USB connected storage device and may provide a notification of this event to storage device interface376. In another embodiment, the network driver in host computer system110may monitor the status of the connection to network164and notify storage device interface376when the connection is restored. At block535, method500resumes the write operation to continue writing the data from cache134to the storage device. In one embodiment, cache manager374maintains an indication of the last piece of data that was successfully written to disk before the storage device was disconnected. In this case, cache manager374can resume writing with the next piece of data in sequence. In another embodiment, after an interruption to a write operation, cache manager374can compare the data committed to the storage device to what is stored in cache134to determine what data from cache134is still to be written to the storage device. At block540, method500resumes execution of virtual machines140,142in response to determining that the storage device is reconnected at block530.
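The reconnect handling described above, including the virtual-handle remapping discussed earlier and the determination of where to resume an interrupted write, can be sketched in Python as follows. This is an illustrative sketch only; the names (resume_offset, HandleMap, remap_after_reconnect) are hypothetical and the comparison is shown over in-memory byte strings for clarity.

```python
def resume_offset(committed, cached):
    """Length of the common prefix of what reached the disk and what was cached.

    Writing resumes from this offset; anything beyond it is replayed from the cache.
    """
    limit = min(len(committed), len(cached))
    for i in range(limit):
        if committed[i] != cached[i]:
            return i
    return limit

class HandleMap:
    """Maps a stable 'virtual handle' to whatever OS file handle is currently valid."""

    def __init__(self):
        self._os_handle_for = {}

    def register(self, virtual_handle, os_handle):
        self._os_handle_for[virtual_handle] = os_handle

    def remap_after_reconnect(self, virtual_handle, new_os_handle):
        # The pre-failure OS handle is gone; point the virtual handle at the new one
        # so the interrupted write can continue against the same underlying file.
        self._os_handle_for[virtual_handle] = new_os_handle

    def lookup(self, virtual_handle):
        return self._os_handle_for[virtual_handle]

# Example: 6 bytes were cached but only 4 reached the disk before the disconnect.
print(resume_offset(b"ABCD", b"ABCDEF"))  # -> 4, so bytes 4..5 are rewritten
```

In this reading, the write is resumed at the first position where the committed data and the cached data diverge, using the remapped handle to reach the same file on the reconnected device.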
In one embodiment, virtual machine/application interface372may read the state information of the virtual machines140,142from the special file in memory124of host computer system110and restore the state to that indicated in the file, so that the virtual machines140,142may continue operating at the same point as at the time they were suspended. FIG.6is a flow diagram illustrating a transparent disk caching method for applications, according to an embodiment. The method600may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processor to perform hardware simulation), firmware, or a combination thereof. The processing logic is configured to allow a host computer system to cache data received from an application in a disk cache before the data is committed to disk, so that the data can be recovered in the event of an interruption during the write process. In one embodiment, method600may be performed by transparent disk caching manager233, as shown inFIGS.2and3. Referring toFIG.6, at block605, method600receives an instruction to write data to a storage device coupled to the host computer system210. In one embodiment, virtual machine/application interface372may receive an instruction to write data to a storage device (e.g., part of storage domains252,254) from one of applications240,242. The instruction may be received during the normal course of operation of applications240,242and may relate to user data, system data, or other data being committed to the underlying physical storage devices in one of storage domains252,254. At block610, method600stores a copy of the data in a cache234of the host computer system210. In one embodiment, cache manager374may store a copy of the data in cache234before the data is written to one of underlying storage domains252,254. Cache manager374can manage the cache234so that the data remains stored in cache234until the data has been committed to disk. The data is only cleared from cache234after a period of time, or when additional space is needed, thereby ensuring that the data is successfully committed to the disk before it is removed from the cache234. At block615, method600writes the data from cache234to the storage device. In one embodiment, storage device interface376may initiate a write operation to write the data from the cache234to the storage device, in response to receiving the instruction at block605. In one embodiment, the data is only written to disk after it is stored in cache234to ensure that the data is not lost in the event of an interruption during the write operation. At block620, method600detects that the storage device is disconnected from the host computer system during execution of the write operation. In one embodiment, storage domain252is directly connected to host computer system210over a USB interface262and storage domain254is connected to host computer system210over a network264. During execution of the write operation, there may be a failure in writing the data to the storage device, such as if the USB interface262becomes disconnected, the network264goes down or if the storage device runs out of available space or becomes otherwise inaccessible. In one embodiment, storage device interface376may detect that the storage device is disconnected from host computer system210or is otherwise unavailable.
In one embodiment, transparent disk caching manager233pauses the write operation in response to detecting that the storage device is disconnected from the host computer. At block625, method600determines that the storage device is reconnected to the host computer system210. In one embodiment, storage device interface376may determine that the storage device is reconnected to host computer system210. In one embodiment, the USB driver on host computer system210may detect that the USB cable has been plugged back in or that power has been restored to the USB connected storage device and may provide a notification of this event to storage device interface376. In another embodiment, the network driver in host computer system210may monitor the status of the connection to network264and notify storage device interface376when the connection is restored. At block630, method600resumes the write operation to continue writing the data from cache234to the storage device. In one embodiment, cache manager374maintains an indication of the last piece of data that was successfully written to disk before the storage device was disconnected. In this case, cache manager374can resume writing with the next piece of data in sequence. In another embodiment, after an interruption to a write operation, cache manager374can compare the data committed to the storage device to what is stored in cache234to determine what data from cache234is still to be written to the storage device. FIG.7is a flow diagram illustrating a transparent disk caching method for read requests, according to an embodiment. The method700may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processor to perform hardware simulation), firmware, or a combination thereof. The processing logic is configured to allow a host computer system to cache data read from disk, so that the data can be accessed in the event that the disk becomes disconnected and to prevent the virtual machine or host application running on the host computer system from being suspended unnecessarily. In one embodiment, method700may be performed by transparent disk caching manager133,233, as shown inFIGS.1-3. Referring toFIG.7, at block705, method700receives an instruction to read data from a storage device coupled to the host computer system110,210. In one embodiment, virtual machine/application interface372may receive an instruction to read data from a storage device (e.g., part of storage domains152,154,252,254) from one of virtual machines140,142or applications240,242. The instruction may be received during the normal course of operation of virtual machines140,142or applications240,242and may relate to user data, system data, virtual machine image data, or other data previously committed to the underlying physical storage devices in one of storage domains152,154,252,254. At block710, method700determines whether the storage device is disconnected from the host computer system110. In one embodiment, storage domain152is directly connected to host computer system110over a USB interface162and storage domain154is connected to host computer system110over a network164. Prior to or during execution of the read operation, there may be a failure, such as if the USB interface162becomes disconnected, the network164goes down or if the storage device becomes otherwise inaccessible.
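The read handling of method700(blocks705-735, together with the read-ahead behaviour discussed below) can be condensed into the following Python sketch. It is illustrative only; the device object, its connected flag and read method, and the predicted_requests input are hypothetical placeholders rather than an actual host or hypervisor API.

```python
def read(offset, length, cache, device):
    """Read-through lookup for the transparent disk cache (blocks 710-735).

    cache: dict mapping (offset, length) -> bytes already copied from the device.
    device: object with a 'connected' flag and a 'read(offset, length)' method.
    Returns the requested bytes, or None when the caller should suspend the
    requesting virtual machine or process (miss while the device is unreachable).
    """
    key = (offset, length)

    if device.connected:
        if key not in cache:
            # Copy the data from the storage device into the cache on a miss,
            # then serve the request from the cache.
            cache[key] = device.read(offset, length)
        return cache[key]

    # Device disconnected: a hit can still be served from the cache so the
    # requester keeps running; a miss means execution has to be suspended.
    return cache.get(key)

def prefetch(predicted_requests, cache, device):
    """Simple read-ahead: warm the cache with data the requester is likely to ask for."""
    for offset, length in predicted_requests:
        if device.connected and (offset, length) not in cache:
            cache[(offset, length)] = device.read(offset, length)
```

Serving hits from the cache while the device is disconnected is what allows the virtual machine or host application to keep running instead of being suspended on every read.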
In one embodiment, storage device interface376may detect that the storage device is disconnected from host computer system110or is otherwise unavailable. For example, a USB driver on host computer system110may detect that the USB cable has been unplugged or that power has been lost to the USB connected storage device and may provide a notification of this event to storage device interface376. In another embodiment, a network driver in host computer system110may monitor the status of a network connection and notify storage device interface376when the connection to network164(and therefore to storage domain154) is lost. If the storage device is not disconnected (i.e. is still connected and fully accessible), at block715, method700determines if the requested data is present in cache134,234. If the data is not found in cache134,234, at block720, method700copies the requested data from the storage device to cache134,234. In one embodiment, storage device interface376may initiate a read operation to read the data from the storage device and copy the data to cache134,234. If the data was already present in cache134,234, or after the data is copied to cache134,234, at block725, method700provides the requested data from cache134,234to the requestor on host computer system110. If at block710, method700determines that the storage device is disconnected, at block730, method700determines if the requested data is present in cache134,234. If the data is present in cache134,234, at block725, method700provides the requested data from cache134,234to the requestor on host computer system110. This enables virtual machine140,142or host application240,242to continue normal operation without being suspended or crashing due to a read operation error. If the data is not found in cache134,234, at block735, method700suspends execution of virtual machines140,142. In another embodiment, method700may suspend execution of a process, running either on the virtual machine or on the host, which initiated the read operation. In one embodiment, cache manager133,233may implement read-ahead techniques to prefetch certain data from the storage device and have it available in cache134,234. For example, cache manager133,233may recognize the virtual machine140,142or host application240,242which is currently accessing the storage device (or even an individual process being executed on host computer system110,210), and, based on prior I/O statistics, identify certain pieces of data from the storage device that the process is likely to request. In one embodiment, cache manager133,233or some other component of hypervisor132can monitor activities of these processes to build a profile comprising the I/O statistics. Upon identifying these pieces of data that are likely to be requested, cache manager133,233can prefetch them from the storage device and make them available in cache134,234before they are even requested. In this manner, even if the storage device becomes disconnected at some point, the processes can continue operation without having to be suspended or experiencing a read operation error. This can continue as long as the process requests data that has been stored in the cache, until the storage device can be reconnected. FIG.8illustrates a diagrammatic representation of a machine in the exemplary form of a host computer system800within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
For example, the instructions may cause the machine to perform transparent disk caching for virtual machines. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server machine in client-server network environment. The machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, host computer system800may represent either of host computer systems110or210, as shown inFIGS.1-2. The exemplary host computer system800includes a processing device (processor)802, a main memory804(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory806(e.g., flash memory, static random access memory (SRAM)), and a data storage device818, which communicate with each other via a bus830. Processing device802represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device802may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device802may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device802is configured to execute the processing logic826for performing the operations and steps discussed herein. In one embodiment, processing logic826is representative of transparent disk caching manager133or233. The host computer system800may further include a network interface device808. The host computer system800also may include a video display unit810(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device812(e.g., a keyboard), a cursor control device814(e.g., a mouse), and a signal generation device816(e.g., a speaker). The data storage device818may include a computer-readable medium828on which is stored one or more sets of instructions822(e.g., instructions of transparent disk caching manager133or233) embodying any one or more of the methodologies or functions described herein. The instructions822may also reside, completely or at least partially, within the main memory804and/or within processing logic826of the processing device802during execution thereof by the host computer system800, the main memory804and the processing device802also constituting computer-readable media. The instructions may further be transmitted or received over a network820via the network interface device808. 
While the computer-readable storage medium828is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of several embodiments of the present invention. It will be apparent to one skilled in the art, however, that at least some embodiments of the present invention may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present invention. Thus, the specific details set forth are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the present invention. In the above description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that embodiments of the invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description. Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. 
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining”, “identifying”, “adding”, “selecting” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Embodiments of the invention also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
DETAILED DESCRIPTION Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description. Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art. Affinity and anti-affinity groups are used in the context of cloud-based systems such as the NFV infrastructure (NFVI) to indicate the restrictions on the placement of virtual resources imposed by the way these resources are used at the application/VNF level. For example, virtual machines (VMs), which host redundant entities of an application/VNF are grouped into an anti-affinity group requesting that, at the infrastructure level, these VMs are not placed on the same compute resource (i.e. physical host) so they are not affected simultaneously in case of a host failure. As a result, this grouping is known at the infrastructure level as well as at the application/VNF level. When the infrastructure is upgraded, its hardware or virtualization resources are taken out of service, which impacts the virtual resources hosted on these resources. To speed up the upgrade, it is desirable to upgrade as many infrastructure resources in parallel as possible; however, this has the potential to create outages in the application/VNF level services. Moreover, it is desired that applications/VNFs providing continuous and highly available services be prepared for infrastructure level outages and get notifications. This preparedness may range from simply blocking or switching out traffic from a single impacted VM to reconfiguring or scaling out the application/VNF for the time/duration of the infrastructure upgrade. Since it is known to both the infrastructure and the application/VNF level, the anti-affinity group concept can be used as follows during infrastructure upgrades to address these opposing goals. The infrastructure manager may or may not know the application/VNF level managers that can be associated with an anti-affinity group. It can know it, for example, by knowing the VNF Manager (VNFM) which requested a VM (for a VNF) and its placement in such a group. Alternatively, the infrastructure manager may expose a registration interface through which manager entities (e.g.
the VNF manager(s)185ofFIG.1, and possibly the element manager (EM) such as the EM2106and EM3107ofFIG.1, etc.) can register for notifications about events (e.g. upcoming upgrade) impacting certain anti-affinity groups. The manager entities may indicate the anti-affinity groups/virtual resources for which they wish to receive notifications, and for each group/resource the type of events (e.g. beginning/end of upgrade) they are interested in as well as the lead time (e.g. time needed to prepare the application/VNF for the event) they need for the event type notification. When an infrastructure upgrade is initiated for a list of resources, the infrastructure manager organizes the upgrade of resources according to the hosted anti-affinity groups, i.e. resources hosting virtual resources of the same anti-affinity group are upgraded in sequence forming a series of steps, where in each step a single resource is upgraded. Resources hosting different anti-affinity groups can be upgraded in parallel thus they can be upgraded in the same step. Based on a calculated ordering, the infrastructure manager identifies the managers responsible for applications/VNFs hosted on each anti-affinity group. It sends a start notification to each manager interested in an anti-affinity group to be upgraded in the next series. This notification allows the application/VNF manager to make the preparations necessary to mitigate the impacts of the infrastructure upgrade, for example, it may scale out the application/VNF so that there is more redundancy, or in case of geographical redundancy, it may switch the active role to the not impacted site. The infrastructure manager waits either until the appropriate lead time has expired or until it receives a ready response from each manager it has notified, then proceeds with the upgrade of resources supporting the anti-affinity group, one resource at a time. Once the last resource supporting the hosted anti-affinity group has been upgraded, the infrastructure manager sends a completion notification to the managers to which the start notification was sent, which in turn can perform any action needed to wrap-up the upgrade preparedness at the application/VNF level, e.g. scale in. The infrastructure manager may also send upgrade notification for each step individually within the upgrade series to the virtualized resources impacted in each particular step. FIG.2illustrates an example embodiment in which principles taught herein are used to coordinate the upgrade of resources, between the infrastructure manager executing the upgrade and a manager (EM1105or EM2106) managing an application/VNF (VNF1124, VNF2125or VNF3126) hosted on a group of virtual resources (VM1-VM13) hosted on the resources (Comp Host1-Comp Host6) to be upgraded. This coordination of the upgrade resources may be executed through direct communication between the VIM and EMs (such as indicated onFIG.2) or through indirect communication of the VIM and EMs through the VNFMs (such as indicated onFIG.1). In the specific example illustrated byFIG.2, there are three VNFs (VNF1124-VNF3126) managed by two EMs (EM1105and EM2106) which are hosted on an NFV Infrastructure (NFVI)130managed by the Infrastructure Manager. It should be understood that the managers (EMs)105,106are managing the application/VNF124-126which run on and/or are using the group of hardware resources. The application/VNF uses a group of virtual resources (the VMs) which are hosted on a group of (physical) resources (the CompHosts). 
This last group is the one being upgraded by the infrastructure manager, while the managers (EMs) are managing the top part ofFIG.2(the VNFs). Referring back toFIG.1, more generally, the manager can be any one of EM2106, EM3107, a VNF manager185or even the operations support system/business support system (OSS/BSS101). The notification can be sent by the virtualized infrastructure manager(s) (VIM(s))190and the entities to be upgraded can be anything in the NFVI130box, namely the vCompute, the virtualization layer, the interface between the virtualization layer (VI) and the hardware (HA) or any of the compute, storage and network hardware. Returning toFIG.2, VNF2125runs on multiple VMs {VM2, VM4, VM9, VM11}, which are listed in Anti-Affinity Group 2. VNF3126also runs on multiple VMs {VM7, VM8, VM13}, which are listed in Anti-Affinity Group 3. According to the method illustrated inFIGS.3and4, EM2106registers with the Infrastructure Manager210for upgrade notifications for Anti-Affinity Group 2 (for VNF2125) and for Anti-Affinity Group 3 (for VNF3126), steps320,420. This registration can happen only once, although it is possible that it could happen several times. The registration can happen at any time in advance of the upgrade; possibly a long time in advance of the upgrade. The VMs (which are virtual resources) are hosted on the compute hosts (which are physical resources), as shown. It should be noted that the infrastructure manager210may correspond to a VIM190extended with a software modification manager (SMM)705, illustrated inFIG.7. In the example ofFIGS.2-4, a system administrator305wants to upgrade the hypervisor which runs on compute hosts {CompHost4, CompHost5, CompHost6}, steps325,425. These physical hosts host VMs for both Anti-Affinity Group 2 and Anti-Affinity Group 3. The Infrastructure Manager210decides in which order the hosts are going to be upgraded, steps330,430. This corresponds to the step of calculating the upgrade order for the list of resources. Since the physical hosts host anti-affinity groups, the Infrastructure manager210needs to do the upgrade host by host with respect to those anti-affinity groups. If the Infrastructure manager decides to start the upgrade with CompHost6, it sends the notification to EM2106for Anti-Affinity Group 3, steps335,435and waits for the response or the expiration of a timer, steps345,350,450. For Anti-Affinity Group 2 the infrastructure manager sends the notification to EM2106before it proceeds with the upgrade, steps355,455, of the first of either CompHost4or CompHost5. The upgrade is then repeated for each resource, steps360,460. If the infrastructure manager had decided to start the upgrade with CompHost4, instead of CompHost6, as described above, then it would have sent a notification for both anti-affinity group 2 and anti-affinity group 3, to EM2which manages VNF2125and VNF3126, before starting any upgrade. In a general case, any subset of hosts can be selected as first ones to upgrade as long as they do not host two VMs from the same anti-affinity group. I.e. in case of the example ofFIG.2, there are no two physical hosts to be upgraded that would not impact two VMs of one or the other anti-affinity group. Hence the upgrade needs to proceed one host at a time, and any one physical host can be selected as the first host to upgrade. The selection of the actual order in which the hosts will be upgraded in such a case may be done in any manner known by a person skilled in the art.
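By way of illustration only, the ordering rule described above, namely that hosts carrying VMs of the same anti-affinity group are upgraded in sequence while hosts carrying only disjoint groups can share a step, can be sketched in Python as follows. The mapping of hosts to anti-affinity groups is a hypothetical reading of the FIG.2example, and the function name plan_upgrade_steps is illustrative, not part of the described embodiments. Before each step, start notifications would be sent to the managers registered for the impacted groups, and the infrastructure manager would wait for ready responses or lead-time expiry as described above.

```python
def plan_upgrade_steps(hosts, groups_on_host):
    """Greedily pack hosts into parallel steps so that no step contains two hosts
    carrying VMs of the same anti-affinity group (hosts of one group go in sequence)."""
    steps = []
    for host in hosts:
        placed = False
        for step in steps:
            # Anti-affinity groups already impacted by this step.
            taken = set().union(*(groups_on_host[h] for h in step))
            if not (groups_on_host[host] & taken):
                step.append(host)
                placed = True
                break
        if not placed:
            steps.append([host])
    return steps

# Hypothetical mapping consistent with the FIG. 2 example: every pair of hosts
# shares at least one anti-affinity group, so the upgrade proceeds one host at a time.
groups_on_host = {
    "CompHost4": {"AAG2", "AAG3"},
    "CompHost5": {"AAG1", "AAG2"},
    "CompHost6": {"AAG1", "AAG3"},
}
print(plan_upgrade_steps(["CompHost4", "CompHost5", "CompHost6"], groups_on_host))
# -> [['CompHost4'], ['CompHost5'], ['CompHost6']]
```

If two of the hosts had carried VMs from disjoint anti-affinity groups only, the sketch would place them in the same step, corresponding to hosts that can be upgraded in parallel.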
The end notification is sent to the corresponding EM, once hosts for all VMs of a given anti-affinity group have been upgraded, i.e. for Anti-Affinity Group 2 after both CompHost4and CompHost5have been upgraded and for Anti-Affinity Group 3 after both CompHost4and CompHost6have been upgraded, step365,465. In the example ofFIG.2, since EM1105did not request notifications for Anti-Affinity Group 1, it is not aware of the upgrade of CompHost5and CompHost6and VNF1will perceive the upgrade as failure of CompHost5and CompHost6. In an embodiment, there is provided a method, executed in a Network Function Virtualization Infrastructure (NFVI) software modification manager, for upgrading infrastructure resources hosting a plurality of Virtual Resources (VRs), comprising:receiving an upgrade request for a list of infrastructure resources;determining an upgrade sequence for the list of infrastructure resources;sending a notification that a software modification procedure is about to start to a Virtual Network Function (VNF) level manager managing a Virtual Network Function hosted on a VR hosted on an infrastructure resource selected for upgrade;upgrading the infrastructure resources selected for upgrade; andnotifying the VNF level manager about the completion of the process. The infrastructure resources may be hardware resources, the upgrade request for the list of infrastructure resources may be received from a system administrator or from a network node and the list of infrastructure resources may comprise one or more resources. Determining an upgrade sequence may comprise identifying impacted VRs and VR groups and determining an order in which the software modifications of NFVI resources can be performed considering constraints imposed by the impacted VRs and VR group. In the method, a first infrastructure resource may be selected based on groupings of the VRs in anti-affinity groups related to VNFs, for an anti-affinity group, the VRs impacted simultaneously in the group may not exceed a maximum number specified for the anti-affinity group and at least a minimum number of VRs may be kept available at all times. The notification that a software modification procedure is about to start may comprise information for a single VR or information for a VR group, the notification that a software modification procedure is about to start may further specify whether a VR is going to be live-migrated or shut down, and a leadtime. The leadtime may correspond to a maximum time to wait before starting the upgrading of the infrastructure resource. The NFVI software modification manager may wait for the leadtime before starting the upgrading of the infrastructure resource and the leadtime may be determined as the maximum leadtime imposed by constraints. The notification may be based on subscription and/or sent via the MANO entity requesting the allocation of the virtualized resource or creating the anti-affinity group. After sending the start notification that a software modification procedure is about to start, the method may further comprise receiving a ready for software modification message from the VNF level manager and upgrading the infrastructure resource may comprise upgrading a software component of the infrastructure resource. 
In the method, upgrading the infrastructure resource may comprise any one of:changing a firmware of the infrastructure resources;changing a host OS and/or a hypervisor software, including virtual machines;changing software providing virtual networks; andchanging software providing virtual storage. When the upgrading of the infrastructure resource selected for upgrade is completed, a further infrastructure resource may be selected for upgrade and may be upgraded, until all the infrastructure resources in the list of infrastructure resources are upgraded. The method may further comprise, as an initial step, the step of receiving, from a VNF-level Manager, information on whether coordination of NFVI software modifications is necessary for a VR or a VR group, as well as the applicable constraints. Coordination of NFVI software modifications may entail that the VNF-level Manager is registering to receive notifications from the NFVI software modification manager. A VR group may be an anti-affinity group. The VNF-level Manager may register to receive notifications for a plurality of anti-affinity groups. The step of receiving information from a VNF-level Manager may further comprise receiving any one of:a minimum lead time of the notification, the lead time reflecting the time the VNF needs to prepare for the potential disruption(s);a minimum number of anti-affinity group members required to be available and/or a maximum number of anti-affinity group members that can be impacted simultaneously; the minimum number reflecting a minimum number of virtualized resources required for the function/service to be provided (e.g. cluster membership requires quorum); the maximum number reflecting the replication schema/redundancy; anda VM migration tolerance, wherein a VM may tolerate live, offline or no migration at all. The method may further comprise the step of sending a notification that the upgrade request for the list of infrastructure resources has been completed. VR may comprise any one of virtual machines, containers, hypervisors, virtual local area networks and virtual disks/storage. The NFVI software modification manager may be a virtual infrastructure manager (VIM), the NFVI software modification manager may be composed of a plurality of VIMs, and the VNF level manager may comprise any one of a VNF Manager, an Element Manager (EM), an operations support system/business support system (OSS/BSS) or another functional block responsible for the coordination on behalf of hosted VNF(s) and Management and Orchestration (MANO). In an embodiment, there is provided an NFVI software modification manager comprising processing circuitry and a memory, the memory containing instructions executable by the processor whereby the NFVI software modification manager is operative to execute any of the methods described herein. In an embodiment, there is provided a computer-readable storage medium, having stored thereon a computer program that when executed enables an NFVI software modification manager to execute any of the methods described herein.
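For illustration, the constraints that a VNF-level Manager may register per anti-affinity group (lead time, minimum available members, maximum simultaneously impacted members, and migration tolerance) can be represented by a small record such as the Python sketch below. The class, field and value names are hypothetical; they are not a defined data model of the described embodiments or of the ETSI specifications.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class MigrationTolerance(Enum):
    LIVE = "live"        # the VM tolerates live migration
    OFFLINE = "offline"  # the VM tolerates offline migration
    NONE = "none"        # no migration tolerated; the VM has to be shut down

@dataclass
class AntiAffinityGroupConstraints:
    """Constraints a VNF-level manager registers for one anti-affinity group."""
    group_id: str
    lead_time_seconds: float            # minimum notice the VNF needs to prepare
    min_available_members: int          # e.g. to preserve cluster quorum
    max_impacted_simultaneously: int    # bounded by the replication schema
    migration_tolerance: MigrationTolerance = MigrationTolerance.LIVE
    notify_manager: Optional[str] = None  # identity of the manager to be notified

# Example registration for the anti-affinity group of a quorum-based VNF.
constraints = AntiAffinityGroupConstraints(
    group_id="anti-affinity-group-2",
    lead_time_seconds=300.0,
    min_available_members=2,
    max_impacted_simultaneously=1,
    migration_tolerance=MigrationTolerance.LIVE,
    notify_manager="EM2",
)
print(constraints)
```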
In an embodiment, there is provided a method for coordinating the upgrade of resources, between an infrastructure manager executing the upgrade and a manager managing an application/VNF hosted on a group of virtual resources hosted on the resources to be upgraded, comprising:receiving an upgrade request for a list of resources;computing an upgrade order for the list of resources according to constraints of the hosted groups of virtual resources;sending a start notification to a manager interested in a group of resources to be upgraded;upgrading the resources of the group of resources; andsending a completed notification to the manager interested in the group of resources to be upgraded. The method may further comprise receiving a request for registering for notifications for a group of virtual resources from the manager of an application/VNF using that group of virtual resources when the group of virtual resources or their hosting group of resources is to be upgraded. The method may further comprise, before upgrading the resources of the group of resources, sending a notification to one or more managers registered for a group of virtual resources hosted on the group of resources; and waiting until an acknowledgement is received from the manager interested in the group of resources to be upgraded, the acknowledgement being in response to the start notification. The content of section 6.3.3 of REL006 of ETSI NFVI software modification scenarios and requirements, entitled NFVI Software, as well as the accompanying Annex B: NFVI software modification flows, are reproduced throughout the remainder of the description below and inFIGS.5and7. These extracts include changes that were proposed for introduction in the ETSI standard, according to an embodiment. 6.3.3 NFVI Software 6.3.3.1 Introduction The NFVI functional block is the totality of all hardware and software components which build up the environment in which VNFs are deployed, managed and executed. It can be divided into physical hardware resources for computing, storage and networking, and software resources providing the virtualisation layer and the virtualised resources (e.g. hypervisors, VMs, VLANs, virtual disks etc.). The NFVI and its constituent components have a lifecycle independent of the lifecycles of the VNFs and MANO hosted on that infrastructure. The layered nature of software, however, means that changes to the NFVI layer have the potential to have adverse effects on the VNF and MANO layers if care is not taken (as described in this clause) to avoid or minimise this impact. This potential impact affects the governance, precedence and priority over software modification management activities. A NFVI software modification procedure may be initiated after the successful completion of the initial software download procedure as described in clause 6.2 (not provided herein). 6.3.3.2 Software Modification Precedence The nature of layered architectures is that, through planning and technology, many software modifications of a given layer can proceed without impact on surrounding layers. This, however, is not universal, and there will be edge cases where a conflict arises between the need to make software modifications to the NFVI layer, and its potential to impact one or more workloads (VNF and/or MANO) hosted on that NFVI. In these cases, rules must be defined for how to handle the conflict.
Since the purpose of NFVI is to support workloads, it might seem essential to let those workloads dictate when a software modification to the NFVI can proceed. However, this approach, if unbounded, could indefinitely block NFVI software modifications from proceeding, with potentially significant impacts on stability, security, or new service deployment. Conversely, limits in technology and impacts on services either through direct service disruption during a software modification, or indirect service disruption through pre-modification workload migration, may be deemed unacceptable for certain classes of workload. The balance between these extremes is to provide a limited facility for services to respond to a pending service disruption/outage at the NFVI layer, and take appropriate actions in a more graceful fashion than the actions that would be taken under failure conditions. This would generally take the form of a “notice period” of pending software modifications, and the time bound opportunity for services to respond on their own in preparation for such a NFVI software modification. This allows for more customised, fine-grained responses to predictable NFVI service disruptions. It should be noted that no amount of testing can guarantee the success of a NFVI software modification. Thus, any notice period can serve only to reduce the potential for disruptive impact. A software modification might introduce a pathological behaviour, and so part of the software modification management process must be the gradual, risk controlled introduction of workloads onto the newly changed infrastructure, in order to further reduce the risk associated with software modifications. 6.3.3.3 Coordination of NFVI Software Modifications A software modification will often affect far more than a single NFVI component, and might require the controlled deployment of software modifications across an entire resource pool on the one hand. On the other hand, VNFs and MANO deployed on these NFVI components may need to take different actions to mitigate the potential impact. Some may need to change their configuration, e.g. scale out to increase their redundancy, others may need to evacuate the virtualised resource being shut down for the time of the software modification, or block/reduce traffic directed to such an impacted virtualised resource. Thus, the prerequisite of successful coordination is to be able to identify at the NFVI layer the constraints of the hosted VNFs and MANO with respect to the virtualised resources and their groups. The grouping of virtualised resources relevant and known to both the NFVI and the VNF/MANO layers is the anti-affinity group. It is used typically to prevent single points of failure and therefore reflects redundancy used at the upper layers. The constraints may be expressed when the virtualised resources and their groups are requested from or defined for the NFVI layer, for example, together with the manager entity who would need to be notified about the upcoming NFVI software modification. Alternatively, the manager entities could subscribe for such notifications. The constraints among others might indicate whether the notification is requested before a virtualised resource or a group of virtualised resources (e.g.
anti-affinity group) being impacted, the lead time requested for the notification as preparations may need time, and options such as an upper time limit at which virtual resource migration is feasible otherwise shutdown is preferred, or for an anti-affinity group the minimum number of virtual resources that need to be available or the maximum number of virtual resources that can be impacted simultaneously. Whenever a NFVI software modification is requested from the NFVI Software Modification Manager, it needs to coordinate the software modification activities at the NFVI layer with the managers of the hosted entities by considering these constraints of virtualised resources and groups. The NFVI Software Modification Manager is responsible for managing the NFVI software modification procedure. Note however that at this time it has not been decided which NFV functional block will implement this function. The general coordination flow for NFVI software modification could look like the example illustrated inFIG.5. The diagram shows the NFVI Software Modification Manager705function550and a VNF-level Manager entity185, which could represent, for example, the VNFM, the EM, the OSS/BSS or possibly another functional block responsible for the coordination on behalf of the hosted VNF(s) and MANO. According toFIG.5the exemplary flow500is as follows:1. As a prerequisite, the VNF-level Managers inform the NFVI Software Modification Manager705whether coordination of NFVI software modifications is necessary for a virtualised resource (VR) or a VR group (such as an anti-affinity group) as well as the applicable constraints. This may be subscription based or part of the VR/VR group creation process.2. When a NFVI software modification is requested (2.a), the NFVI Software Modification Manager705identifies the impacted VRs and VR groups and the order in which the software modifications of NFVI resources can be performed considering the constraints imposed by these impacted VRs and VR groups (2.b).3. The NFVI Software Modification Manager notifies the VNF-level Manager that an NFVI software modification procedure is about to start, which may impact the VR group (3.a) or the VR (3.c) for which it coordinates the process. For a VR such as a virtual machine, it may further specify whether the VR is going to be live-migrated or shut down. At the same time as the notification is sent, the NFVI Software Modification Manager starts a timer with the determined lead time. The lead time may be determined, for example, as the maximum lead time imposed by the constraints. The NFVI Software Modification Manager waits for the lead time before proceeding with the software modifications.4. The VNF-level Manager initiates the necessary preparations for the potential disruption(s) caused by the NFVI software modification(s). In case of a VR group (4.a) this may mean, for example, a scale out to increase the VNF's redundancy, or switching the active role from the impacted VNF to its geo-redundant pair. In case of a VR (4.c) this may also mean a switch over of the active role of the VNFC instance hosted on the impacted VR, or redirecting part or all of the traffic to another redundant VNFC instance. Once the preparations are completed, the VNF-level Manager may inform the NFVI Software Modification Manager about its readiness (4.b, 4.d); however, this is not necessary. Such a response can be used by the NFVI Software Modification Manager to cancel the lead timer.5.
Once the lead time expires or all responses have been received, the NFVI Software Modification Manager proceeds with the software modification of the NFVI resources (5.a) or resource (5.b) as applicable. In case of a VR group (5.a), this may mean multiple iterations until all NFVI resources supporting the VR groups have been modified as requested. In doing so, the NFVI Software Modification Manager needs to respect the applicable constraints. For an anti-affinity group, for example, the VRs impacted simultaneously in the group may not exceed the maximum number specified for the anti-affinity group and at least the minimum number of VRs may need to be kept available at all times.6. Once the NFVI software modification procedure is completed for all required resources, the NFVI Software Modification Manager notifies the VNF-level Manager about the completion of the process (6.a, 6.c), which in turn can initiate any wrap-up actions (6.b, 6.d), such as reversing the configuration changes made in preparation for the impact or workload rebalancing.7. Finally, the NFVI Software Modification Manager sends a notification that the requested NFVI software modification has been completed. Turning toFIG.6, there is provided a method600, executed by a Network Function Virtualization Infrastructure (NFVI) software modification manager550, for coordination of NFVI software modifications of a NFVI providing at least one Virtual Resource (VR) hosting at least one Virtual Network Function (VNF), comprising:receiving an NFVI software modifications request, step602;sending a notification that a software modification procedure of the at least one VR is about to start to a VNF level manager, the VNF level manager managing a VNF hosted on the at least one VR provided by the NFVI, step604;applying software modifications to at least one resource of the at least one VR, step609; andnotifying the VNF level manager about completion of the software modifications, step611. The at least one VR may comprise at least one VR group. The method may further comprise receiving information from the VNF level manager comprising an indication whether coordination of NFVI software modifications is necessary for a VR as well as applicable constraints, step601. The information may further comprise an anti-affinity group. The receiving information may be subscription based or part of the VR creation process. The method may further comprise identifying impacted VRs and VR groups and an order in which software modifications of NFVI resources can be performed considering constraints imposed by the impacted VRs and VR groups, step603. The notification to the VNF level manager may further comprise an indication whether the at least one VR is going to be live-migrated or shut down when the at least one VR is a virtual machine. At the same time as the notification is sent, a timer may be started with a determined lead time, the lead time being determined as the maximum lead time imposed by constraints, step605. The method may further comprise waiting the lead time before proceeding with the NFVI software modifications, step606. When readiness signaling can be used by the VNF-level manager (i.e. the API is provided), the waiting is at most the lead time. When this option is not provided, the waiting is at least the lead time. The method may further comprise initiating preparations for potential disruptions caused by the NFVI software modifications and, when the at least one VR is a VR group, scaling out to increase VNF redundancy.
Or, the method may further comprise initiating preparations for potential disruptions caused by the NFVI software modifications and, when the at least one VR is a VR group, switching an active role from an impacted VNF to a geo-redundant pair associated with the VNF. Or, the method may further comprise initiating preparations for potential disruptions caused by the NFVI software modifications and switching over an active role of a VNF component (VNFC) instance hosted on an impacted VR. Or the method may further comprise initiating preparations for potential disruptions caused by the NFVI software modifications and redirecting at least a part of traffic to at least one redundant VNF component (VNFC) instance, step607. The method may further comprise receiving readiness information from the VNF-level Manager and canceling the lead timer, step608. When the at least one VR is a VR group, applying software modifications may comprise multiple iterations until all NFVI resources supporting the VR group have been modified and, for an anti-affinity group, the VRs impacted simultaneously in the anti-affinity group do not exceed the maximum number specified for the anti-affinity group and at least a minimum number of VRs are kept available at all times, step610. Notifying the VNF level manager about the completion of the software modifications may further comprise reversing configuration changes made in preparation to an impact, or workload rebalancing, step612. The method may further comprise sending a notification to the VNF level manager that the NFVI software modifications have been completed for the VNF hosted on the at least one VR, step613. 6.3.3.4 NFVI Resource Software Modification A software modification of NFVI may include:Change of firmware of the physical equipmentChange of the host OS and/or hypervisor software, including virtual machinesChange of software providing virtual networksChange of software providing virtual storage Note that similar consideration may apply to the replacement of physical computing, networking and storage equipment. The sequence diagram ofFIG.7provides an overview of how these NFVI resource software modifications could be handled. It is the elaboration of the fifth step of the exemplary coordination flow using as example the upgrade of a compute host (CompHost), which impacts a hosted VR, namely a VM. Note that this is an illustrative example demonstrating the interaction that could take place for this process.1. NFVI Software Modification Manager requests that a compute host (e.g. CompHost) is put in maintenance mode. This means that some preparations need to be done so that the resource can be taken out of service.2. Depending on the applicable policies, an appropriate action is performed on the impacted VM: The VM may be shut down or it might be migrated. Migration requires that a compatible host is available to migrate the VM to.3. Once the action is completed and the resource does not serve any VM anymore, it is in maintenance mode and the NFVI Software Modification Manager is informed.4. The NFVI Software Modification Manager initiates the upgrade of the CompHost resource. The upgrade method is not essential as the resource is out of service. The duration of the upgrade method might be of importance as will be discussed below.5. The NFVI Software Modification Manager is informed about the completion of the resource upgrade.6. Since the upgrade has finished the NFVI Software Modification Manager requests to take the CompHost resource back into service.7. 
The actions necessary to bring the resource back into service are performed. 8. The NFVI Software Modification Manager receives confirmation that the CompHost resource is back in service again. The flow diagram700ofFIG.7focuses on the upgrade of an individual resource and the same process needs to be repeated for each NFVI resource or group of resources to be upgraded, as indicated in step 5 of the exemplary coordination flow ofFIG.5. Some of these may be performed in parallel, while others may need to be performed in sequence. In general, the software modifications of individual resources need to be ordered in such a way that they do not impact the network services. In particular, VMs that participate in a redundancy schema at the VNF level are configured as an anti-affinity group, which needs to be taken into consideration. The software modification process of individual resources should impact at most an eligible number of VMs of an anti-affinity group at a time. The number could be constrained by the minimum number of virtualised resources that need to be available at any given time in the anti-affinity group (for example to maintain quorum) and/or the maximum number of virtualised resources that can be impacted simultaneously based on the replication schema used at the VNF or MANO level. With respect to virtualised network resources, the anti-affinity grouping of virtualised links requires similar consideration, although the NFVI may provide reliable virtualised network resources, whose reliability needs to be maintained throughout the upgrade. The same approach is applicable to virtualised storage resources. For a large data center it is probably not sufficient to perform the software modification of resources one by one as this would take considerable time. Careful planning of the NFVI software modification will be needed to optimize the time consumption. The entity planning the software modification should consider, for example, which resources could be put in maintenance mode simultaneously, where to migrate resources in order to avoid repeating migration steps, and how to optimize resource relocation when the physical resources are limited. Additional considerations may be required to accommodate potential NS and VNF LCM requests during the NFVI software modification as these operations may interfere with the software modifications and may be subject to SLAs. For example, VNF scaling in may remove just-upgraded virtualised resources, while scaling out with the old version instead of the new one prolongs the NFVI software modification process. Also, the software modification process may prevent scaling out or instantiation operations if too many resources are taken out from a resource pool for software modification at peak usage time, which in turn may cause performance degradation at the VNF and network service levels or prevent VNF healing after failure.
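The batching considerations described above can be illustrated with a small planning function. This is a simplified sketch under assumed inputs (host names, a VM-to-host map, and per-group minimum-available / maximum-impacted limits); it is not a procedure mandated by the description, only one greedy way to select sets of hosts that can be put in maintenance mode simultaneously without violating the anti-affinity constraints.

```python
from typing import Dict, List, Set

def plan_upgrade_batches(hosts: List[str],
                         vms_on_host: Dict[str, Set[str]],
                         groups: Dict[str, Set[str]],
                         min_available: Dict[str, int],
                         max_impacted: Dict[str, int]) -> List[List[str]]:
    """Group compute hosts into batches that can be modified simultaneously without
    violating any anti-affinity group's min-available / max-impacted limits."""
    remaining = list(hosts)
    batches: List[List[str]] = []
    while remaining:
        batch: List[str] = []
        impacted = {g: 0 for g in groups}        # members currently impacted per group
        for host in list(remaining):
            # How many members of each anti-affinity group would this host add to the impact?
            extra = {g: len(members & vms_on_host.get(host, set())) for g, members in groups.items()}
            feasible = True
            for g, members in groups.items():
                would_impact = impacted[g] + extra[g]
                still_available = len(members) - would_impact
                if would_impact > max_impacted[g] or still_available < min_available[g]:
                    feasible = False
                    break
            if feasible:
                batch.append(host)
                remaining.remove(host)
                for g in groups:
                    impacted[g] += extra[g]
        if not batch:
            # Even a single remaining host would violate the constraints on its own.
            raise ValueError(f"constraints cannot be satisfied for hosts: {remaining}")
        batches.append(batch)
    return batches

# Example: the two members of the "db" group are never impacted in the same batch.
print(plan_upgrade_batches(
    hosts=["h1", "h2"],
    vms_on_host={"h1": {"db-0"}, "h2": {"db-1"}},
    groups={"db": {"db-0", "db-1"}},
    min_available={"db": 1},
    max_impacted={"db": 1}))        # -> [['h1'], ['h2']]
```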
Turning toFIG.8, there is provided a method800for applying software modifications to at least one resource of the VR, comprising requesting that a compute host resource be put in maintenance mode, indicating that some preparations need to be done so that the compute host resource can be taken out of service, step851. The method may further comprise shutting down an impacted VM, step852. The method may further comprise migrating an impacted VM to an available compatible host, step853. The method may further comprise receiving a notification that the compute host resource does not serve any VM anymore and is in maintenance mode, step854. The method may further comprise initiating an upgrade of the compute host resource, step855. The method may further comprise receiving information about completion of the resource upgrade, step856. The method may further comprise requesting to take the compute host resource back into service, step857. The method may further comprise performing actions to bring the compute host resource back into service, step858. The method may further comprise receiving a confirmation that the compute host resource is back in service again, step859.
6.3.3.5 NFVI Software Modification Requirements
REQ.NFVI.M.01: VNF instances or their managers (e.g. EM/OSS/BSS/VNFM) shall be able to receive an advanced notification about the software modification of an NFVI resource or group of NFVI resources, as well as a notification at the completion of the respective software modifications. For the purpose of the notification, a group of NFVI resources impacting the VNF can be identified by the anti-affinity group they are supporting and the VNF is using, or by a resource group assigned to a tenant. NOTE: The notification may be based on subscription and/or sent via the MANO entity requesting the allocation of the virtualised resource or creating the anti-affinity group.
REQ.NFVI.M.02: It shall be possible to specify parameters of impact tolerance of a VNF with respect to each of its virtualised resource(s) and anti-affinity group(s). Towards the NFVI layer:
The minimum lead time of the advanced notification. NOTE: The lead time reflects the time the VNF needs to prepare for the potential disruption(s).
The minimum number of anti-affinity group members required to be available and/or the maximum number of anti-affinity group members that can be impacted simultaneously. NOTE: The minimum number reflects the minimum number of virtualised resources required for the function/service to be provided (e.g. cluster membership requires quorum); the maximum number reflects the replication schema/redundancy.
VM migration tolerance. NOTE: A VM may tolerate live, offline or no migration at all. Considering the VNF level fault detection mechanisms, a maximum interruption time can be specified that can go unnoticed. If the NFVI cannot guarantee this maximum interruption time, live migration will be detected and handled as a failure at the VNF level, therefore offline migration should be used instead. The migration may also be constrained by affinity/anti-affinity groups and/or location constraints.
Optionally towards the MANO:
Rules/policies that prepare the VNF for an upcoming upgrade. NOTE: An example of such a rule would be a scale out rule that the VNFM or NFVO applies to the VNF when it receives the notification of an NFVI software modification for an anti-affinity group.
REQ.NFVI.M.03: During the NFVI software modification process the affinity/anti-affinity groups of virtualised resources shall be maintained according to the specified parameters. The NFVI software modification process shall not impact simultaneously more than the eligible number of virtualised resources of an anti-affinity group. NOTE: The eligible number of resources depends on the currently available virtualised resources, the maximum number that can be taken out and the minimum number of members required in an anti-affinity group.
REQ.NFVI.M.04: The NFVI software modification process shall not impact the overall reliability and performance indicators (KPIs) of the virtualised resources offered.
The NFVI software modification process shall consider potential NS and VNF LCM operations during its execution. NOTE: When virtualised resources are requested during an NFVI software modification within the limits of existing reservations the requests should always succeed. REQ.NFVI.M.05 During the NFVI software modification process the compatibility requirements between virtualised and virtualisation resources shall be satisfied. NOTE: For example, in case of the upgrade of the hypervisor or OS of virtualisation layer the virtualised resource using the current VM image may become incompatible. The NFVI software modification needs to incorporate the VM image conversion process and ensure that VMs are migrated/failed over between compatible hosts and that reservations are adjusted appropriately during the upgrade, which means that if both old and new versions are being used simultaneously both need to have access to reserved resources adequately. Turning toFIG.9, illustrating a network node in its environment900, the network node960includes processing circuitry970, device readable medium980, interface990, auxiliary equipment984, power source986, power circuitry987, and antenna962. Although network node960illustrated in the example wireless network ofFIG.9may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of network node960are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium980may comprise multiple separate hard drives as well as multiple RAM modules). Similarly, network node960may be composed of multiple physically separate components, which may each have their own respective components. In certain scenarios in which network node960comprises multiple separate components, one or more of the separate components may be shared among several network nodes. Processing circuitry970is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry970may include processing information obtained by processing circuitry970by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of the processing making a determination. Processing circuitry970may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node960components, such as device readable medium980, network node960functionality. 
For example, processing circuitry970may execute instructions stored in device readable medium980or in memory within processing circuitry970. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry970may include a system on a chip (SOC). In some embodiments, processing circuitry970may include one or more of radio frequency (RF) transceiver circuitry972and baseband processing circuitry974. In some embodiments, radio frequency (RF) transceiver circuitry972and baseband processing circuitry974may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry972and baseband processing circuitry974may be on the same chip or set of chips, boards, or units In certain embodiments, some or all of the functionality described herein as being provided by a network node, may be performed by processing circuitry970executing instructions stored on device readable medium980or memory within processing circuitry970. In alternative embodiments, some or all of the functionality may be provided by processing circuitry970without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry970can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry970alone or to other components of network node960, but are enjoyed by network node960as a whole, and/or by end users and the wireless network generally. Device readable medium980may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry970. Device readable medium980may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry970and, utilized by network node960. Device readable medium980may be used to store any calculations made by processing circuitry970and/or any data received via interface990. In some embodiments, processing circuitry970and device readable medium980may be considered to be integrated. Interface990is used in the wired or wireless communication of signaling and/or data between network node960and network906. As illustrated, interface990comprises port(s)/terminal(s)994to send and receive data, for example to and from network906over a wired connection. Interface990also includes radio front end circuitry992that may be coupled to, or in certain embodiments a part of, antenna962. Radio front end circuitry992comprises filters998and amplifiers996. Radio front end circuitry992may be connected to antenna962and processing circuitry970. 
Radio front end circuitry may be configured to condition signals communicated between antenna962and processing circuitry970. Radio front end circuitry992may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry992may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters998and/or amplifiers996. The radio signal may then be transmitted via antenna962. Similarly, when receiving data, antenna962may collect radio signals which are then converted into digital data by radio front end circuitry992. The digital data may be passed to processing circuitry970. In other embodiments, the interface may comprise different components and/or different combinations of components. In certain alternative embodiments, network node960may not include separate radio front end circuitry992, instead, processing circuitry970may comprise radio front end circuitry and may be connected to antenna962without separate radio front end circuitry992. Similarly, in some embodiments, all or some of RF transceiver circuitry972may be considered a part of interface990. In still other embodiments, interface990may include one or more ports or terminals994, radio front end circuitry992, and RF transceiver circuitry972, as part of a radio unit (not shown), and interface990may communicate with baseband processing circuitry974, which is part of a digital unit (not shown). Antenna962may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals907. Antenna962may be coupled to radio front end circuitry990and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna962may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO. In certain embodiments, antenna962may be separate from network node960and may be connectable to network node960through an interface or port. Antenna962, interface990, and/or processing circuitry970may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna962, interface990, and/or processing circuitry970may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment. Power circuitry987may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node960with power for performing the functionality described herein. Power circuitry987may receive power from power source986. 
Power source986and/or power circuitry987may be configured to provide power to the various components of network node960in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source986may either be included in, or external to, power circuitry987and/or network node960. For example, network node960may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry987. As a further example, power source986may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry987. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used. Alternative embodiments of network node960may include additional components beyond those shown inFIG.9that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node960may include user interface equipment to allow input of information into network node960and to allow output of information from network node960. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node960. In some embodiments some or all the steps of the methods described herein may be executed in network node960. For example, the network node may execute a Network Function Virtualization Infrastructure (NFVI) software modification manager or part of the functionality thereof, the network node comprising processing circuitry and a memory, the memory containing instructions executable by the processor whereby the NFVI software modification manager is operative to: receive an NFVI software modifications request;send a notification that a software modification procedure of at least one Virtual Resource (VR) is about to start to a Virtual Network Function (VNF) level manager, the VNF level manager managing a VNF hosted on the at least one VR provided by the NFVI;apply software modifications to at least one resource of the at least one VR; andnotify the VNF level manager about completion of the software modifications. The NFVI software modification manager executed in whole or in part on in network node960is further operative to execute any one of the methods disclosed herein. In some embodiments, a computer-readable storage medium, has stored thereon a computer program that when executed enables an NFVI software modification manager to execute any one of the methods disclosed herein. FIG.10is a schematic block diagram illustrating a virtualization environment1000. In the context of this disclosure, a container is a software component that can contain other components within itself. Multiple containers can share the same operating system (OS) instance, and each container provides an isolated execution environment for its contained component. As opposed to VMs, containers and their contained components share the same host OS instance and therefore create less overhead. There are two types of placement constraints in a cloud environment: affinity groups and anti-affinity groups. The anti-affinity groups express which VMs cannot be placed together on the same host. 
Thus, considering the application level redundancy, VMs of the same anti-affinity group cannot be upgraded at the same time as they may form a redundant pair, i.e. providing and protecting a given application service. The virtualization environment1000, comprises general-purpose or special-purpose network hardware devices1030comprising a set of one or more processors or processing circuitry1060, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device may comprise memory1090-1which may be non-persistent memory for temporarily storing instructions1095or software executed by processing circuitry1060. Each hardware device may comprise one or more network interface controllers (NICs)1070, also known as network interface cards, which include physical network interface1080. Each hardware device may also include non-transitory, persistent, machine-readable storage media1090-2having stored therein software1095and/or instructions executable by processing circuitry1060. Software1095may include any type of software including software for instantiating one or more virtualization layers1050(also referred to as hypervisors), software to execute virtual machines1040as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein. Virtual machines1040, comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer1050or hypervisor. Different embodiments of the instance of virtual appliance1020may be implemented on one or more of virtual machines1040, and the implementations may be made in different ways. During operation, processing circuitry1060executes software1095to instantiate the hypervisor or virtualization layer1050, which may sometimes be referred to as a virtual machine monitor (VMM). Virtualization layer1050may present a virtual operating platform that appears like networking hardware to virtual machine1040. As shown inFIG.10, hardware1030may be a standalone network node with generic or specific components. Hardware1030may comprise antenna10225and may implement some functions via virtualization. Alternatively, hardware1030may be part of a larger cluster of hardware (e.g. such as in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO)175, which, among others, oversees lifecycle management of applications1020. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment. In the context of NFV, virtual machine1040may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines1040, and that part of hardware1030that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines1040, forms a separate virtual network elements (VNE). 
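The anti-affinity placement constraint mentioned above can be checked mechanically. The short Python sketch below is illustrative only, and the data shapes are assumptions rather than an interface defined here: it flags any host that has been assigned two or more members of the same anti-affinity group, which is exactly the situation that would let a single host upgrade or failure take down a redundant pair.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def anti_affinity_violations(placement: Dict[str, str],
                             groups: Dict[str, Iterable[str]]) -> List[Tuple[str, str, List[str]]]:
    """Return (group, host, vms) triples where two or more members of the same
    anti-affinity group have been placed on the same host."""
    violations = []
    for group, members in groups.items():
        per_host = defaultdict(list)
        for vm in members:
            host = placement.get(vm)
            if host is not None:
                per_host[host].append(vm)
        for host, vms in per_host.items():
            if len(vms) > 1:
                violations.append((group, host, vms))
    return violations

# Example: 'db-0' and 'db-1' protect each other, so co-locating them on one host
# defeats the redundancy the anti-affinity group is meant to preserve.
placement = {"db-0": "host-a", "db-1": "host-a", "web-0": "host-b"}
print(anti_affinity_violations(placement, {"db-group": ["db-0", "db-1"]}))
```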
Still in the context of NFV, Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines1040on top of hardware networking infrastructure1030and corresponds to application1020inFIG.10. In some embodiments, one or more radio units10200that each include one or more transmitters10220and one or more receivers10210may be coupled to one or more antennas10225. Radio units10200may communicate directly with hardware nodes1030via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be effected with the use of control system10230which may alternatively be used for communication between the hardware nodes1030and radio units10200. In some embodiments some or all the steps of the methods described herein may be executed in the virtualization environment100ofFIG.1or in the virtualization environment1000ofFIG.10. For example, the virtualization environment100,1000may execute a Network Function Virtualization Infrastructure (NFVI) software modification manager or part of the functionality thereof, the virtualization environment comprising processing circuitry and a memory, the memory containing instructions executable by the processor whereby the NFVI software modification manager is operative to:receive an NFVI software modifications request;send a notification that a software modification procedure of at least one Virtual Resource (VR) is about to start to a Virtual Network Function (VNF) level manager, the VNF level manager managing a Virtual Network Function hosted on the at least one VR hosted on the NFVI;apply software modifications to at least one resource of the at least one VR; andnotify the VNF level manager about completion of the software modifications. The NFVI software modification manager is further operative to execute any one of the methods disclosed herein. In some embodiments, a computer-readable storage medium, has stored thereon a computer program that when executed enables an NFVI software modification manager to execute any one of the methods disclosed herein. In some embodiments some or all the steps of the methods described herein may be executed in a cloud-based system, comprising processing circuitry and a memory, the memory containing instructions executable by the processor whereby an NFVI software modification manager is enabled and is operative to execute any one of the methods disclosed herein.
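The per-host modification flow ofFIG.7described earlier can likewise be sketched as a small routine. This is a hypothetical illustration, not the specified procedure: the vim object and its method names (request_maintenance_mode, live_migrate, and so on) are invented stand-ins for whatever virtualised infrastructure manager interface is actually used, and the tiny in-memory FakeVim exists only so the sketch runs end to end.

```python
from enum import Enum

class VmAction(Enum):
    LIVE_MIGRATE = "live-migrate"
    SHUT_DOWN = "shut-down"

class FakeVim:
    """In-memory stand-in for a virtualised infrastructure manager (illustration only)."""
    def __init__(self, placement):
        self.placement = dict(placement)        # vm -> host
        self.upgraded = set()
    def vms_on(self, host):
        return [vm for vm, h in self.placement.items() if h == host]
    def find_compatible_host(self, vm, exclude):
        candidates = {h for h in self.placement.values() if h != exclude}
        return next(iter(candidates), None)
    def live_migrate(self, vm, target): self.placement[vm] = target
    def shut_down(self, vm): self.placement.pop(vm)
    def request_maintenance_mode(self, host): pass
    def upgrade(self, host): self.upgraded.add(host)
    def exit_maintenance_mode(self, host): pass

def upgrade_compute_host(host, vim, policy=VmAction.LIVE_MIGRATE):
    # 1. Ask for maintenance mode so the host can be prepared to leave service.
    vim.request_maintenance_mode(host)
    # 2. Handle every hosted VM per policy: migrate to a compatible host if one is
    #    available, otherwise shut the VM down.
    for vm in list(vim.vms_on(host)):
        target = vim.find_compatible_host(vm, exclude=host) if policy is VmAction.LIVE_MIGRATE else None
        if target is not None:
            vim.live_migrate(vm, target)
        else:
            vim.shut_down(vm)
    # 3. The host no longer serves any VM, i.e. it is in maintenance mode.
    assert not vim.vms_on(host)
    # 4-5. Upgrade the out-of-service host; the upgrade method itself is not essential here.
    vim.upgrade(host)
    # 6-8. Bring the host back into service and confirm completion.
    vim.exit_maintenance_mode(host)

vim = FakeVim({"vm-1": "host-a", "vm-2": "host-b"})
upgrade_compute_host("host-a", vim)
print(vim.placement)   # vm-1 was live-migrated to host-b before host-a was upgraded
```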
61,276
11861392
DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. A primary system is comprised of file system data. The file system data includes a plurality of files (e.g., content files, text files, etc.) and metadata associated with the plurality of files. The file system data may include data associated with one or more virtual machines. The primary system may perform a backup snapshot of the file system data and send the backup snapshot to a secondary storage system. A backup snapshot represents the state of the primary system at a particular point in time. A backup snapshot may be a full backup snapshot or an incremental backup snapshot. A full backup snapshot includes the entire state of the primary system at a particular point in time. An incremental backup snapshot includes the state of the primary system that has changed since a last backup snapshot. A secondary storage system may be comprised of a secondary storage cluster that includes a plurality of nodes. The secondary storage system may ingest and store the backup snapshot across the plurality of nodes of the secondary storage cluster. A file system manager associated with the secondary storage system may organize the file system data of the backup snapshot using a tree data structure (e.g., Cohesity Snaptree®). The tree data structure may be comprised of a file system metadata snapshot tree and one or more file metadata trees, which enables a backup snapshot to be a fully hydrated backup snapshot, i.e., a backup snapshot that provides a complete view of the primary system corresponding to a moment in time when the backup snapshot was performed. 
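The distinction between full and incremental backup snapshots, and the notion of a fully hydrated backup, can be made concrete with a toy model. The Python sketch below is an assumption-laden simplification (a flat dictionary per snapshot rather than the tree data structure described here); it only shows why an incremental snapshot that stores nothing but changed data can still serve a complete view of the primary system without a separate reconstruction step.

```python
from typing import Dict, Optional

class SnapshotStore:
    """Simplified model: each snapshot stores only changed files plus a link to the
    previous snapshot, yet any snapshot can serve a complete ("fully hydrated") view."""
    def __init__(self):
        self.snapshots = []           # list of (parent_index, {path: content})

    def full_backup(self, state: Dict[str, str]) -> int:
        # A full backup snapshot captures the entire state at this point in time.
        self.snapshots.append((None, dict(state)))
        return len(self.snapshots) - 1

    def incremental_backup(self, changed: Dict[str, str], parent: int) -> int:
        # An incremental backup snapshot copies only data changed since the parent.
        self.snapshots.append((parent, dict(changed)))
        return len(self.snapshots) - 1

    def read(self, snap: int, path: str) -> Optional[str]:
        # "Fully hydrated" lookup: walk back through parents until the file is found,
        # so no multi-backup reconstruction step is needed.
        current: Optional[int] = snap
        while current is not None:
            parent, files = self.snapshots[current]
            if path in files:
                return files[path]
            current = parent
        return None

store = SnapshotStore()
s0 = store.full_backup({"vm.conf": "v1", "disk.img": "blocks-0"})
s1 = store.incremental_backup({"disk.img": "blocks-1"}, parent=s0)
assert store.read(s1, "vm.conf") == "v1"        # unchanged data reached via the parent
assert store.read(s1, "disk.img") == "blocks-1"
```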
The file system metadata snapshot tree may be used to capture different versions of the primary system's file system data. For example, a first file system metadata snapshot tree may correspond to a first backup snapshot and a second file system metadata snapshot tree may correspond to a second backup snapshot. The tree data structure may allow a chain of file system metadata snapshot trees (i.e., different file system metadata snapshot tree versions) to be linked together by allowing a node of a later version of a file system metadata snapshot tree to reference a node of a previous version of a file system metadata snapshot tree (e.g., a “file system metadata snapshot tree forest”). For example, a node of the second file system metadata snapshot tree corresponding to the second backup snapshot may reference a node of the first file system metadata snapshot tree corresponding to the first backup snapshot. A file metadata tree may correspond to one of the files included in the backup snapshot. For example, the file metadata tree may correspond to a virtual machine container file. The file metadata tree is a snapshot structure that is configured to store the metadata associated with the file. A cloud instance of a user virtual machine hosted on the primary system may be generated for one or more reasons. For example, the cloud instance of the user virtual machine may be generated for testing/development purposes. In other embodiments, the user virtual machine hosted on the primary system is offline and the cloud instance of the user virtual machine hosted on the primary system is generated to reduce the amount of downtime associated with the virtual machine. Conventional systems typically use the primary system to generate a copy of the virtual machine and deploy the virtual machine copy to the cloud. However, such an approach reduces the amount of resources the primary system has to perform one or more other tasks, such as running the virtual machine. Such an approach may not be possible in the event the primary system is offline. In some embodiments, a cloud instance of the user virtual machine is generated according to a backup policy. The secondary storage system may be used to generate and deploy the cloud instance of the user virtual machine according to the backup policy. In other embodiments, the primary system is configured to perform one or more backup snapshots to a cloud instantiation of the secondary storage system and the cloud instantiation of the secondary storage system is configured to generate and deploy the cloud instance of the user virtual machine according to the backup policy. The cloud instantiation of the secondary storage system may be comprised of a plurality of virtual instances. The cloud instantiation of the secondary storage system may be configured to store file system data of a primary system in a similar manner as an on-premises secondary storage system, but in a cloud environment. The virtual machine running on the primary system may be associated with a first virtual machine format (e.g., VMware). The first virtual machine format may not be compatible with a virtual machine format associated with a cloud environment (e.g., Amazon Web Services, Microsoft Azure, Google Cloud, etc.). 
The secondary storage system or the cloud instantiation of the secondary storage system may be configured to convert a copy of the virtual machine hosted on the primary system from a first virtual machine format to a second virtual machine format that is compatible with the cloud environment in which the cloud instance of the user virtual machine is to be deployed. The backup policy may include a schedule that indicates a frequency at which a cloud instance of the user virtual machine is to be generated. For example, the cloud instance of the user virtual machine may be generated each time the primary system performs a backup snapshot to the secondary storage system or to the cloud instantiation of the secondary storage system, on a periodic basis (e.g., hourly, daily, weekly, etc.), or when an amount of data associated with a virtual machine has changed more than a change threshold amount. The cloud instance of the user virtual machine may be maintained in a standby mode in a cloud environment until a deploy condition has been satisfied. For example, a user virtual machine hosted on the primary system may go offline or the primary system may go offline. In the event the deploy condition has been satisfied, the cloud instance of the user virtual machine is deployed and ready to be used by a user associated with the primary system virtual machine within a short period of time (e.g., minutes). In other embodiments, a cloud instance of the user virtual machine is generated in response to a user command (e.g., on-demand). For example, the cloud instance of the user virtual machine may be generated for test/development purposes. A secondary storage system or a cloud instantiation of the secondary storage system may be used to generate and deploy the cloud instance of the user virtual machine. In other embodiments, the cloud instance of the user virtual machine is generated in response to a determination that the user virtual machine on the primary system is offline. For example, a user associated with the primary system may provide to a secondary storage system or to a cloud instantiation of the secondary storage system a command to generate the cloud instance of the user virtual machine. In response to the command, the secondary storage system or the cloud instantiation of the secondary storage system may be configured to convert a backup of the user virtual machine hosted on the primary system from a first virtual machine format to a second virtual machine format that is compatible with the cloud environment in which the cloud instance of the user virtual machine is to be deployed. The secondary storage system or the cloud instantiation of the secondary storage system may be further configured to deploy the cloud instance of the user virtual machine to the cloud environment. In other embodiments, the cloud instance of the user virtual machine is generated in response to a determination that the user virtual machine on the primary system is offline, but the secondary storage system is also offline and the cloud instantiation of the secondary storage system has yet to be generated. A cloud object storage may store a snapshot archive that includes data associated with an archived version of the user virtual machine hosted on the primary system.
A cloud instantiation of the secondary storage system may be generated, an archived version of the virtual machine may be provided to the cloud instantiation of the secondary storage system, the cloud instantiation of the secondary storage system may be configured to convert the archived version of the user virtual machine from a first format to a second format that is compatible with the cloud environment in which the cloud instance of the user virtual machine is to be deployed, and deploy the cloud instance of the user virtual machine to the cloud environment. By using a secondary storage system or a cloud instantiation of the secondary storage system to generate a cloud instance of a user virtual machine hosted on a primary system, the cloud instance of the user virtual machine may be generated without affecting a performance of the primary system. Furthermore, regardless of whether the primary system or secondary storage system is online, the cloud instantiation of the secondary storage system may generate a version of the user virtual machine, which reduces the amount of downtime for a user associated with the user virtual machine. FIG.1is a block diagram illustrating an embodiment of a system for deploying a cloud instance of a user virtual machine. In the example shown, system100includes datacenter101coupled to cloud environment121avia network connection111. Datacenter101is comprised of primary system102and secondary storage system104. Primary system102is a computing system that stores file system data. The file system data may include a plurality of files (e.g., content files, text files, etc.) and metadata associated with the plurality of files. For example, one of the files may be a virtual machine container file that corresponds to a user virtual machine. Primary system102may be comprised of one or more servers, one or more computing devices, one or more storage devices, and/or a combination thereof. Primary system102may be configured to send a backup snapshot of file system data to secondary storage system104according to one or more backup snapshot policies. In some embodiments, a backup snapshot policy indicates that file system data is to be backed up on a periodic basis (e.g., hourly, daily, weekly, monthly, etc.), when a threshold size of data has changed, or in response to a command from a user associated with primary system102. In some embodiments, primary system102includes an agent (not shown) that causes primary system102to perform a backup snapshot according to the backup snapshot policy. The agent may receive an instruction to perform a backup snapshot from secondary storage system104. Secondary storage system104is comprised of a secondary storage cluster that includes a plurality of nodes. The plurality of nodes may be comprised of one or more solid state drives, one or more hard disk drives, or a combination thereof. Each node may have its own corresponding processor. Secondary storage system104may be configured to ingest a backup snapshot received from primary system102and configured to store the data associated with the backup snapshot across the secondary storage cluster. Secondary storage system104may include a file system manager105that is configured to organize the file system data of the backup snapshot using a tree data structure. The tree data structure may provide a view of the file system data corresponding to a backup snapshot. 
The view of the file system data corresponding to the backup snapshot may be comprised of a file system metadata snapshot tree and one or more file metadata trees. The file system metadata snapshot tree is configured to store metadata associated with the file system data. A file metadata tree may correspond to one of the files included in the backup snapshot and store the metadata associated with a file. For example, a file metadata tree may correspond to a virtual machine container file (e.g., virtual machine image file, virtual machine disk file, etc.). Regardless if the view of the file system data corresponds to a full backup snapshot or an incremental backup snapshot, the view of the file system data corresponding to the backup snapshot provides a fully hydrated backup snapshot that provides a complete view of primary system102corresponding to at a moment in time when the backup snapshot was performed. A fully hydrated backup is a backup that is ready for use without having to reconstruct a plurality of backups to use it. Conventional systems may reconstruct a backup by starting with a full backup and applying one or more changes associated with one or more incremental backups to the data associated with the full backup. In contrast, any file stored in the storage volume at a particular time and the file's contents, for which there is an associated backup, may be determined from the file system metadata snapshot tree, regardless if the associated backup snapshot was a full backup snapshot or an incremental backup snapshot. Creating an incremental backup snapshot may only include copying data of the storage volume(s) that was not previously backed up. However, the file system metadata snapshot tree corresponding to the incremental backup snapshot provides a complete view of the storage volume(s) at the particular moment in time because it includes references to data of the storage volume that was previously stored. For example, a root node associated with the file system metadata snapshot tree may include one or more references to leaf nodes associated with one or more previous backup snapshots and one or more references to leaf nodes associated with the current backup snapshot. This provides significant savings in the amount of time needed to restore or recover a storage volume and/or a database. In contrast, traditional recovery/restoration methods may require significant time, storage, and computational resources to reconstruct a particular version of a volume or database from a full backup and a series of incremental backups. The view of file system data may allow any file (e.g., a virtual machine container file) that was stored on primary system102at the time the corresponding backup snapshot was performed, to be retrieved, restored, or replicated. A file system metadata snapshot tree includes a root node, one or more levels of one or more intermediate nodes associated with the root node, and one or more leaf nodes associated with an intermediate node of the lowest intermediate level. The root node of a file system metadata snapshot tree includes one or more pointers to one or more intermediate nodes. The root node corresponds to a particular backup snapshot of file system data. Each intermediate node includes one or more pointers to other nodes (e.g., a lower intermediate node or a leaf node). A leaf node of the file system metadata snapshot tree may store data associated with a file for a file that is less than or equal to a limit size (e.g., 256 kB). 
A leaf node of the file system metadata snapshot tree may be an index node (inode). A leaf node of the file system metadata snapshot tree may store a pointer to a file metadata tree for a file that is greater than the limit size. A file metadata tree includes a root node, one or more levels of one or more intermediate nodes associated with the root node, and one or more leaf nodes associated with an intermediate node of the lowest intermediate level. A leaf node of a file system metadata snapshot tree may include a pointer to the root node of the file metadata tree. A file metadata tree is similar to a file system metadata snapshot tree, but a leaf node of a file metadata tree includes an identifier of a data brick associated with one or more data chunks of the file or a pointer to the data brick associated with one or more data chunks of the file. For example, a leaf node of a file metadata tree may include a pointer to or an identifier of a data brick associated with one or more data chunks of a virtual machine container file. The location of the data chunks associated with a data brick may be identified using a table stored in a metadata store that matches brick numbers (i.e., a brick identifier) to chunk identifiers (e.g., SHA-1) or the location of the data brick may be identified based on the pointer to the data brick. The brick identifier may be used to identify a corresponding chunk identifier. A file table may associate chunk identifiers (e.g., SHA-1) with chunk files. A chunk file is configured to store a plurality of data chunks. The file table may include associate a location of a chunk identifier with an offset within a chunk file. The identified chunk identifier may be used to identify the chunk file that stores one or more data chunks associated with a file. Datacenter101is coupled to cloud environment121avia network connection111. Network connection111may be one or more of the following: a wired network connection, a wireless network connection, the Internet, an intranet, or any other appropriate communication connection. Cloud environment121amay correspond to a public cloud (e.g., Amazon Web Services, Microsoft Azure, Google Cloud, etc.). Cloud environment121amay correspond to a private cloud. Cloud environment121amay include a cloud instantiation122aof secondary storage system104, cloud portal123a, cloud object storage124a, and cloud deployment server126a. There may be a plurality of other cloud environments, e.g., cloud environments121b,121cwith their own corresponding cloud instantiations of secondary storage system104, cloud portal, cloud object storage, and cloud deployment server. To generate cloud instantiation122aof secondary storage system104, cloud portal123amay be configured to authenticate a user associated with secondary storage system104. Cloud portal123amay request the user associated with secondary storage system104to provide a credential that indicates the one or more secondary storage systems to which the user is associated. For example, the user may provide a username and password that is associated with an account. Cloud portal123amay store a data structure (e.g., list, table, etc.) that associates one or more secondary storage systems with an account. Cloud portal123amay determine the one or more secondary storage systems associated with a user based on the data structure. Cloud portal123amay provide to a user device a list of one or more secondary storage systems associated with user's account via a user interface associated with cloud portal123a. 
The user interface associated with cloud portal123amay receive a selection of one of the one or more secondary storage systems associated with the user's account. In response to the selection, cloud portal123amay cause a cloud instantiation of the selected secondary storage system to be generated. Cloud instantiation122aof secondary storage system104may act as a backup for secondary storage system104. In other embodiments, cloud instantiation122aof secondary storage system104acts as a backup system for primary system102. In other embodiments, cloud instantiation122aof secondary storage system104is used to deploy a cloud instance of a user virtual machine in the event primary system102(the system that hosts the user virtual machine) or secondary storage system104is offline. Cloud instantiation122aof secondary storage system104may use an archived version of the user virtual machine to generate the cloud instance of the user virtual machine. Secondary storage system104is comprised of a secondary storage cluster that is comprised of a plurality of nodes. Each node of the secondary storage cluster has a particular storage capacity. Cloud portal123amay be configured to cause cloud instantiation122aof secondary storage system104to have the same storage capacity as secondary storage system104. For example, secondary storage system104may be comprised of three physical storage nodes, each physical storage node having a storage capacity of 10 TB. Cloud portal123amay be configured to generate cloud instantiation122ato include three virtual cloud instances, each virtual cloud instance having a storage capacity of 10 TB. The virtual cloud instances may be stored across one or more virtual machines. In other embodiments, cloud instantiation122aof secondary storage system104has more storage capacity than secondary storage system104. In other embodiments, cloud instantiation122aof secondary storage system104has less storage capacity than secondary storage system104. Cloud instantiation122aof secondary storage system104may be configured for the public cloud (e.g., Amazon Web Services, Microsoft Azure, Google Cloud, etc.) in which cloud instantiation122awill reside. Secondary storage system104may be configured to provide to cloud instantiation122aof secondary storage system104one or more secondary storage snapshots (i.e., corresponding copies of one or more backup snapshots that are received from the primary system). In some embodiments, the one or more secondary storage snapshots are replication data associated with one or more corresponding backup snapshots. A secondary storage snapshot may be provided to cloud instantiation122aof secondary storage system104according to one or more secondary storage snapshot policies. A secondary storage snapshot policy may cause secondary storage system104to send to cloud instantiation122aof secondary storage system104a secondary storage snapshot for each backup snapshot received from primary system102, after a threshold number of backup snapshots are received from primary system102, or according to a backup schedule (e.g., once per day, once per week, etc.). Cloud instantiation122aof secondary storage system104may be hosted on a cloud server. The cloud server may receive from cloud portal123aan instruction to generate cloud instantiation122aof secondary storage system104. The cloud server may provide the instruction to an agent (not shown) running on the cloud server to generate cloud instantiation122aof secondary storage system104.
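The secondary storage snapshot policy options just described (replicate on every backup snapshot, after a threshold number of backup snapshots, or on a schedule) reduce to a simple decision function. The following sketch is illustrative only; the field and function names are assumptions rather than an interface defined in this description.

```python
from dataclasses import dataclass

@dataclass
class ReplicationPolicy:
    per_backup: bool = False        # replicate after every backup snapshot
    backup_threshold: int = 0       # or after this many backups since the last replication
    interval_s: float = 0.0         # or on a schedule (e.g. once per day)

def should_replicate(policy: ReplicationPolicy,
                     backups_since_last: int,
                     seconds_since_last: float) -> bool:
    """Decide whether the secondary storage system should send a secondary storage
    snapshot to its cloud instantiation, per the policy options above."""
    if policy.per_backup and backups_since_last >= 1:
        return True
    if policy.backup_threshold and backups_since_last >= policy.backup_threshold:
        return True
    if policy.interval_s and seconds_since_last >= policy.interval_s:
        return True
    return False

# Example: a daily schedule combined with a threshold of 5 backup snapshots.
policy = ReplicationPolicy(backup_threshold=5, interval_s=24 * 3600)
print(should_replicate(policy, backups_since_last=2, seconds_since_last=90_000))  # True: a day has elapsed
```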
In some embodiments, cloud portal123aand cloud instantiation122aof secondary storage system104are hosted on the same cloud server hosted in cloud environment121a. In other embodiments, cloud portal123aand cloud instantiation122aof secondary storage system104are hosted on different cloud servers hosted in cloud environment121a. In other embodiments, secondary storage system104is configured to archive data associated with one or more backup snapshots according to one or more archive policies. In some embodiments, an archive policy indicates that the data associated with a backup snapshot is to be archived to cloud object storage124aon a periodic basis (e.g., hourly, daily, weekly, monthly, etc.), when a threshold size of data has changed, and/or upon a command from a user associated with secondary storage system104. An archived backup snapshot may be a serialized version of the data associated with a backup snapshot. Cloud object storage124amay be configured to store a plurality of snapshot archives. A subset of the snapshot archives may be received from secondary storage system104or cloud instantiation122aof secondary storage system104. Cloud object storage124ais configured to store snapshot archives associated with a plurality of datacenters. Cloud object storage124amay receive a request for one of the stored snapshot archives. In response to the request, cloud object storage124ais configured to provide the requested snapshot archive to the cloud instantiation associated with the request, for example, cloud instantiation122a. The requested snapshot archive may be comprised of a serialized data file. Serializing is a process by which a data file is generated to store data in a manner that mimics the structure of a tree data structure. The serialized data file may be encoded in a manner that allows the serialized data file to be utilized to reconstruct a desired portion of the tree data structure to obtain a data of interest from the serialized data file without the need to reconstruct the entire tree data structure. The serialized data file is a flat set of data comprised of a plurality of data blocks. A data block of the data file may correspond to a node of a tree data structure. The order of the data blocks of the serialized data file corresponds to an order of the tree data structure. A tree data structure may have a root node, a plurality of intermediate nodes, and a plurality of leaf nodes. The serialized data file may first include a data block corresponding to the root node, then data blocks corresponding to the plurality of intermediate nodes, and then data blocks corresponding to the plurality of leaf nodes. For example, a first data block of the serialized data file may correspond to a root node of the tree data structure, a second data block of the serialized data file may correspond to a first intermediate node of the tree data structure, a third data block of the serialized data file may correspond to a second intermediate node of the tree data structure, a fourth data block of the serialized data file may correspond to a first leaf node of the tree data structure, . . . and an nth data block of the serialized data file may correspond to the nth leaf node of the tree data structure. Cloud instantiation122aof secondary storage system104may include virtual file system manager125a. 
Cloud instantiation122amay receive one or more secondary storage snapshots from secondary storage system104(e.g., replication data of a backup snapshot) and virtual file system manager125amay virtually rebuild the secondary storage clusters of secondary storage system104based on the one or more secondary storage snapshots. The secondary storage clusters of secondary storage system104may be virtually rebuilt by building a tree data structure based on the file system data included in the secondary storage snapshot. Virtual file system manager125amay build the tree data structure by deserializing a serialized data file associated with a snapshot archive. The rebuilt tree data structure is similar to the tree data structure generated by file system manager105of secondary storage system104. Cloud instantiation122aof secondary storage system104may be in a standby mode while secondary storage system104is online. While in the standby mode, cloud instantiation122aof secondary storage system104may maintain its data by receiving one or more secondary storage snapshots from secondary storage system104and, in response to receiving the one or more secondary storage snapshots, generating one or more tree data structures and/or updating one or more tree data structures based on the data included in the one or more received secondary storage snapshots. Secondary storage system104may go offline. During this period of time, secondary storage system104may be unable to perform one or more secondary storage functions for primary system102and primary system102must wait for secondary storage system104to come back online. For example, secondary storage system104may be unable to back up primary system102, restore one or more files to primary system102, and/or deploy a cloud instance of a virtual machine stored by secondary storage system104. A physical component of secondary storage system104may have failed and needs to be replaced. It may take a particular period of time before the physical component is replaced (e.g., due to shipping time and/or repair time). Cloud instantiation122aof secondary storage system104may be deployed upon determining that secondary storage system104is offline. In some embodiments, cloud instantiation122aof secondary storage system104receives an indication that secondary storage system104is offline. For example, secondary storage system104may send a heartbeat signal to cloud instantiation122aof secondary storage system104. Cloud instantiation122aof secondary storage system104may determine that secondary storage system104is offline in the event the heartbeat signal is not received within a threshold period of time. In other embodiments, a user associated with secondary storage system104provides an indication that secondary storage system104is offline. Cloud deployment server126amay be deployed to cloud environment121a, such as Amazon Web Services, Microsoft Azure, Google Cloud, etc. A user virtual machine stored by cloud instantiation122aof secondary storage system104may be associated with a first virtual machine format (e.g., VMware). A virtual machine running on cloud deployment server126amay be associated with a second virtual machine format (e.g., Amazon Web Services virtual machine, Microsoft Azure virtual machine, Google Cloud virtual machine, etc.). The user virtual machine may be converted into a virtual machine format associated with cloud environment121ato which cloud deployment server126ais deployed.
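The heartbeat-based offline determination described above lends itself to a brief Python sketch. The class name HeartbeatMonitor and the threshold value are illustrative assumptions, not details from the document; the sketch only assumes the cloud instantiation records when the most recent heartbeat arrived and compares the elapsed time against a threshold period.

import time

OFFLINE_THRESHOLD_SECONDS = 300  # illustrative threshold; not specified in the document

class HeartbeatMonitor:
    # Tracks heartbeats from the secondary storage system and flags it as offline
    # when no heartbeat arrives within the threshold period.

    def __init__(self, threshold_seconds: float = OFFLINE_THRESHOLD_SECONDS):
        self.threshold_seconds = threshold_seconds
        self.last_heartbeat = time.monotonic()

    def record_heartbeat(self) -> None:
        self.last_heartbeat = time.monotonic()

    def is_offline(self) -> bool:
        return (time.monotonic() - self.last_heartbeat) > self.threshold_seconds

monitor = HeartbeatMonitor(threshold_seconds=1.0)
monitor.record_heartbeat()
print(monitor.is_offline())   # False immediately after a heartbeat
time.sleep(1.1)
print(monitor.is_offline())   # True once the threshold period has elapsed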
In some embodiments, a version of a user virtual machine is selected to be deployed to cloud deployment server126a. Cloud instantiation122aof secondary storage system104may identify a tree data structure corresponding to the selected version of the user virtual machine, traverse the identified tree data structure to locate the data associated with the selected version of the user virtual machine, convert the selected version of the user virtual machine into a format that is compatible with a cloud environment in which the user virtual machine is to be deployed, and provide the data associated with the converted virtual machine to cloud deployment server126alocated in cloud environment121a. In some embodiments, cloud instantiation122aof secondary storage system104is configured to back up data associated with a user virtual machine running on cloud deployment server126a. For example, the user virtual machine running on cloud deployment server126amay be configured to perform one or more backup snapshots to cloud instantiation122aof secondary storage system104. In the event secondary storage system104comes back online, cloud instantiation122aof secondary storage system104may be configured to copy the backup data associated with the user virtual machine running on cloud deployment server126a. In response to receiving the copied data, secondary storage system104may be configured to update its tree data structures corresponding to the user virtual machine based on the copied data. After the secondary storage system is up-to-date, secondary storage system104may return as the primary backup storage for primary system102and cloud instantiation122aof secondary storage system104may be torn down. In some embodiments, a cloud instance of a user virtual machine stored on secondary storage system104is generated according to a backup policy. Secondary storage system104may be used to generate and deploy the cloud instance of the user virtual machine according to the backup policy. In other embodiments, primary system102is configured to perform one or more backup snapshots to cloud instantiation122aof secondary storage system104and cloud instantiation122aof secondary storage system104is configured to generate and deploy to cloud deployment server126athe cloud instance of the user virtual machine according to the backup policy. Secondary storage system104or cloud instantiation122aof secondary storage system104may be configured to convert a copy of the user virtual machine hosted on primary system102from a first virtual machine format to a second virtual machine format that is compatible with the cloud environment121ain which the cloud instance of the virtual machine is to be deployed. The backup policy may include a schedule that indicates a frequency at which a cloud instance of the user virtual machine is to be generated. For example, the cloud instance of the user virtual machine may be generated each time primary system102performs a backup snapshot that includes data associated with the user virtual machine to secondary storage system104, on a periodic basis (e.g., hourly, daily, weekly, etc.), or when an amount of data associated with the user virtual machine has changed more than a change threshold amount. The cloud instance of the user virtual machine may be maintained in a standby mode in cloud environment121auntil a deploy condition (e.g., a virtual machine running on primary system102may go offline or primary system102may go offline) has been satisfied.
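The backup-policy triggers listed above (per backup snapshot, periodic, or change-threshold based) can be expressed as a small Python decision helper. The BackupPolicy fields and the should_generate_cloud_instance function are hypothetical names chosen for illustration; the sketch assumes the policy evaluator is handed simple counters describing what has happened since the last cloud instance was generated.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BackupPolicy:
    per_backup_snapshot: bool = False           # regenerate on every backup snapshot
    period_seconds: Optional[float] = None      # e.g., hourly/daily/weekly, expressed in seconds
    change_threshold_bytes: Optional[int] = None

def should_generate_cloud_instance(policy: BackupPolicy,
                                   backup_snapshot_received: bool,
                                   seconds_since_last_generation: float,
                                   bytes_changed_since_last_generation: int) -> bool:
    # Return True when any trigger configured in the backup policy has been met.
    if policy.per_backup_snapshot and backup_snapshot_received:
        return True
    if policy.period_seconds is not None and seconds_since_last_generation >= policy.period_seconds:
        return True
    if (policy.change_threshold_bytes is not None
            and bytes_changed_since_last_generation > policy.change_threshold_bytes):
        return True
    return False

# Example: daily regeneration or whenever more than 10 GB of VM data has changed.
policy = BackupPolicy(period_seconds=24 * 3600, change_threshold_bytes=10 * 2**30)
print(should_generate_cloud_instance(policy, False, 3600, 12 * 2**30))  # True (change threshold)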
In the event the deploy condition has been satisfied, the cloud instance of the user virtual machine is deployed and ready to be used by a user associated with the primary system within a short period of time (e.g., minutes). In other embodiments, a cloud instance of the user virtual machine is generated in response to a user command (e.g., on-demand). For example, the cloud instance of the user virtual machine may be generated for test/development purposes. Secondary storage system104or cloud instantiation122aof secondary storage system104may be used to generate and deploy the cloud instance of the user virtual machine. In other embodiments, the cloud instance of the user virtual machine is generated in response to a determination that the virtual machine on primary system102is offline. For example, a user associated with primary system102may provide to secondary storage system104or to cloud instantiation122aof secondary storage system104a command to generate the cloud instance of the virtual machine. In response to the command, secondary storage system104or cloud instantiation122aof secondary storage system104may be configured to convert a copy of the virtual machine running on primary system102from a first virtual machine format to a second virtual machine format that is compatible with cloud environment121ain which the cloud instance of the virtual machine is to be deployed and deploy the cloud instance of the virtual machine to cloud environment121a. In other embodiments, a user associated with primary system102desires to deploy a cloud instance of the virtual machine to cloud environment121a, but secondary storage system104is offline and cloud instantiation122aof secondary storage system104has yet to be generated. Cloud object storage124amay store a snapshot archive that includes data associated with an archived version of the user virtual machine hosted on primary system102. Cloud instantiation122aof secondary storage system104may be generated, an archived version of the user virtual machine may be provided to cloud instantiation122aof secondary storage system104, cloud instantiation122aof secondary storage system104may be configured to convert the archived version of the user virtual machine from a first virtual machine format to a second virtual machine format that is compatible with cloud environment121ain which the cloud instance of the user virtual machine is to be deployed, and deploy the cloud instance of the user virtual machine to cloud environment121a. FIG.2Ais a block diagram illustrating an embodiment of a tree data structure. A tree data structure may be used to represent the file system data that is stored on a secondary storage system, such as secondary storage system104, or a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. The file system data may include metadata for a distributed file system and may include information, such as chunk identifier, chunk offset, file size, directory structure, file permissions, physical storage locations of the files, etc. A file system manager, such as file system manager105or virtual file system manager125a, may generate tree data structure200. Tree data structure200is comprised of a file system metadata snapshot tree that includes a root node202, intermediate nodes212,214, and leaf nodes222,224,226,228, and230.
Although tree data structure200includes one intermediate level between root node202and leaf nodes222,224,226,228,230, any number of intermediate levels may be implemented. Tree data structure200may correspond to a backup snapshot of file system data at a particular point in time t, for example at time t=1. The backup snapshot may be received at a secondary storage system from a primary system. In other embodiments, tree data structure200corresponds to a secondary storage snapshot. The secondary storage snapshot may be a copy of a backup snapshot. The secondary storage snapshot may be received at a cloud instantiation of a secondary storage system from the secondary storage system. The file system metadata snapshot tree in conjunction with a plurality of file metadata trees may provide a complete view of the primary system for a particular point in time. A root node is the starting point of a file system metadata snapshot tree and may include pointers to one or more other nodes. An intermediate node is a node to which another node points (e.g., root node, other intermediate node) and includes one or more pointers to one or more other nodes. A leaf node is a node at the bottom of a file system metadata snapshot tree. Each node of the tree structure includes a view identifier of a view with which the node is associated (e.g., TreeID). A leaf node may be configured to store key-value pairs of file system data. A data key k is a lookup value by which a particular leaf node may be accessed. For example, “1” is a data key that may be used to lookup “DATA1” of leaf node222. The data key k may correspond to a brick identifier (e.g., brick number) of a data brick. A data brick may be associated with one or more data chunks. In some embodiments, the leaf node is configured to store file system metadata (e.g., chunk identifier (e.g., hash value, SHA-1, etc.), file size, directory structure, file permissions, physical storage locations of the files, etc.). A leaf node may store a data key k and a pointer to a location that stores the value associated with the data key. In other embodiments, a leaf node is configured to store the actual data when the data associated with a file is less than or equal to a limit size (e.g., 256 kb). In some embodiments, a leaf node includes a pointer to a file metadata tree (e.g., blob structure) when the size of a file is larger than the limit size. For example, a leaf node may include a pointer to a file metadata tree corresponding to a virtual machine container file associated with a user virtual machine. A root node or an intermediate node may include one or more node keys. The node key may be an integer value or a non-integer value. Each node key indicates a division between the branches of the node and indicates how to traverse the tree structure to find a leaf node, i.e., which pointer to follow. For example, root node202may include a node key of “3.” A data key k of a key-value pair that is less than or equal to the node key is associated with a first branch of the node and a data key k of a key-value pair that is greater than the node key is associated with a second branch of the node. 
In the above example, to find a leaf node storing a value associated with a data key of "1," "2," or "3," the first branch of root node202would be traversed to intermediate node212because the data keys of "1," "2", and "3" are less than or equal to the node key "3." To find a leaf node storing a value associated with a data key of "4" or "5," the second branch of root node202would be traversed to intermediate node214because data keys "4" and "5" are greater than the node key of "3." A data key k of a key-value pair is not limited to a numerical value. In some embodiments, non-numerical data keys may be used for a data key-value pair (e.g., "name," "age", etc.) and a numerical number may be associated with the non-numerical data key. For example, a data key of "name" may correspond to a numerical key of "3." Data keys that alphabetically come before the word "name" or are the word "name" may be found following a left branch associated with a node. Data keys that alphabetically come after the word "name" may be found by following a right branch associated with the node. In some embodiments, a hash function may be associated with the non-numerical data key. The hash function may determine the branch of a node with which the non-numerical data key is associated. In the example shown, root node202includes a pointer to intermediate node212and a pointer to intermediate node214. Root node202includes a NodeID of "R1" and a TreeID of "1." The NodeID identifies the name of the node. The TreeID identifies the view with which the node is associated. When a change is made to data stored in a leaf node as described with respect toFIGS.2B,2C, and2D, the TreeID is used to determine whether a copy of a node is to be made. Root node202includes a node key that divides a set of pointers into two different subsets. Leaf nodes (e.g., "1-3") with a data key k that is less than or equal to the node key are associated with a first branch and leaf nodes (e.g., "4-5") with a data key k that is greater than the node key are associated with a second branch. Leaf nodes with a data key of "1," "2," or "3" may be found by traversing tree data structure200from root node202to intermediate node212because the data keys have a value that is less than or equal to the node key. Leaf nodes with a data key of "4" or "5" may be found by traversing tree data structure200from root node202to intermediate node214because the data keys have a value that is greater than the node key. Root node202includes a first set of pointers. The first set of pointers associated with a data key less than the node key (e.g., "1", "2," or "3") indicates that traversing tree data structure200from root node202to intermediate node212will lead to a leaf node with a data key of "1," "2," or "3." Root node202includes a second set of pointers. The second set of pointers associated with a data key greater than the node key indicates that traversing tree data structure200from root node202to intermediate node214will lead to a leaf node with a data key of "4" or "5." Intermediate node212includes a pointer to leaf node222, a pointer to leaf node224, and a pointer to leaf node226. Intermediate node212includes a NodeID of "I1" and a TreeID of "1." Intermediate node212includes a first node key of "1" and a second node key of "2." The data key k for leaf node222is a value that is less than or equal to the first node key. The data key k for leaf node224is a value that is greater than the first node key and less than or equal to the second node key.
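Under the assumption that the branching rule works as just described, the lookup can be sketched in a few lines of Python. The Node dataclass and lookup function below are illustrative names, not part of any real implementation; the example rebuilds the five-leaf tree with a root node key of "3" and intermediate node keys of "1"/"2" and "4".

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    node_id: str
    tree_id: int
    node_keys: List[int] = field(default_factory=list)     # empty for leaf nodes
    children: List["Node"] = field(default_factory=list)   # pointers to child nodes
    value: Optional[str] = None                             # set only on leaf nodes

def lookup(root: Node, data_key: int) -> Optional[str]:
    # At each internal node, follow the first branch whose node key is greater than
    # or equal to the data key; otherwise follow the last (greater-than) branch.
    node = root
    while node.children:
        for node_key, child in zip(node.node_keys, node.children):
            if data_key <= node_key:
                node = child
                break
        else:
            node = node.children[-1]
    return node.value

# The example tree: root node key "3", intermediate node keys "1"/"2" and "4".
leaves = [Node(f"L{i}", 1, value=f"DATA{i}") for i in range(1, 6)]
i1 = Node("I1", 1, node_keys=[1, 2], children=leaves[0:3])
i2 = Node("I2", 1, node_keys=[4], children=leaves[3:5])
root = Node("R1", 1, node_keys=[3], children=[i1, i2])
print(lookup(root, 4))  # -> "DATA4"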
The data key k for leaf node226is a value that is greater than the second node key. The pointer to leaf node222indicates that traversing tree data structure200from intermediate node212to leaf node222will lead to the node with a data key of "1." The pointer to leaf node224indicates that traversing tree data structure200from intermediate node212to leaf node224will lead to the node with a data key of "2." The pointer to leaf node226indicates that traversing tree data structure200from intermediate node212to leaf node226will lead to the node with a data key of "3." Intermediate node214includes a pointer to leaf node228and a pointer to leaf node230. Intermediate node214includes a NodeID of "I2" and a TreeID of "1." Intermediate node214includes a node key of "4." The data key k for leaf node228is a value that is less than or equal to the node key. The data key k for leaf node230is a value that is greater than the node key. The pointer to leaf node228indicates that traversing tree data structure200from intermediate node214to leaf node228will lead to the node with a data key of "4." The pointer to leaf node230indicates that traversing tree data structure200from intermediate node214to leaf node230will lead to the node with a data key of "5." Leaf nodes222,224,226,228,230include data key-value pairs of "1: DATA1," "2: DATA2," "3: DATA3," "4: DATA4," "5: DATA5," respectively. Leaf nodes222,224,226,228,230include a NodeID of "L1," "L2," "L3," "L4," "L5," respectively. Each of the leaf nodes222,224,226,228,230include a TreeID of "1." To view the value associated with a data key of "1," tree data structure200is traversed from root node202to intermediate node212to leaf node222. To view the value associated with a data key of "2," tree data structure200is traversed from root node202to intermediate node212to leaf node224. To view the value associated with a data key of "3," tree data structure200is traversed from root node202to intermediate node212to leaf node226. To view the value associated with a data key of "4," tree data structure200is traversed from root node202to intermediate node214to leaf node228. To view the value associated with a data key of "5," tree data structure200is traversed from root node202to intermediate node214to leaf node230. In some embodiments, leaf nodes222,224,226,228,230are configured to store metadata associated with a file. In other embodiments, leaf nodes222,224,226,228,230are configured to store a pointer to a file metadata tree (e.g., blob structure). FIG.2Bis a block diagram illustrating an embodiment of a cloned file system metadata snapshot tree. A file system metadata snapshot tree may be cloned when a file system metadata snapshot tree is added to a tree data structure. In some embodiments, tree data structure250may be created by a storage system, such as secondary storage system104or a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. The file system data of a primary system, such as primary system102, may be backed up to a secondary storage system, such as secondary storage system104. A subsequent backup snapshot may correspond to a full backup snapshot or an incremental backup snapshot. The manner in which the file system data corresponding to the subsequent backup snapshot is stored in the secondary storage system may be represented by a tree data structure.
The tree data structure corresponding to the subsequent backup snapshot is created by cloning a file system metadata snapshot tree associated with a last backup snapshot. The tree data structure associated with a plurality of secondary storage snapshots may be cloned in a similar manner. In the example shown, tree data structure250includes root nodes202,204, intermediate nodes212,214, and leaf nodes222,224,226,228, and230. Tree data structure250may be a snapshot of file system data at a particular point in time, such as t=2. The tree data structure can be used to capture different versions of file system data at different moments in time. The tree data structure may allow a chain of backup snapshot versions (i.e., file system metadata snapshot trees) and/or a chain of secondary storage snapshot versions to be linked together by allowing a node of a later version of a file system metadata snapshot tree to reference a node of a previous version of a file system metadata snapshot tree. For example, a snapshot tree with root node204is linked to a snapshot tree with root node202. Each time a backup snapshot is performed, a new root node may be created and the new root node includes the same set of pointers included in the previous root node, that is, the new root node of the snapshot may be linked to one or more intermediate nodes associated with a previous snapshot. The new root node also includes a different NodeID and a different TreeID. The TreeID is the view identifier associated with a view of the primary system corresponding to the particular moment in time. In some embodiments, a root node is associated with a current view of the file system data. A current view may still accept one or more changes to the data. The TreeID of a root node indicates a backup snapshot with which the root node is associated. For example, root node202with a TreeID of "1" is associated with a first backup snapshot and root node204with a TreeID of "2" is associated with a second backup snapshot. In the example shown, root node204is associated with a current view of the file system data. In other embodiments, a root node is associated with a snapshot view of the file system data. A snapshot view may represent a state of the file system data at a particular moment in time in the past and is not updated. In the example shown, root node202is associated with a snapshot view of the file system data. In the example shown, root node204is a clone (e.g., a copy) of root node202. Similar to root node202, root node204includes the same pointers as root node202. Root node204includes a first set of pointers to intermediate node212. Root node204includes a NodeID of "R2" and a TreeID of "2." FIG.2Cis a block diagram illustrating an embodiment of modifying a file system metadata snapshot tree. In the example shown, tree data structure255may be modified by a file system manager, such as file system manager105or virtual file system manager125a. A file system metadata snapshot tree with a root node204may be a current view of the file system data at time t=2. A current view represents a state of the file system data that is up-to-date and capable of receiving one or more modifications to the snapshot tree that correspond to modifications to the file system data. Because a snapshot represents a perspective of the file system data that is "frozen" in time, one or more copies of one or more nodes affected by a change to file system data are made.
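The cloning step just described (a new root that reuses the prior root's pointers but carries a new NodeID and TreeID) can be sketched in Python as follows; the dict-based node representation and function name are illustrative only.

# A minimal sketch of cloning a file system metadata snapshot tree: the new root
# reuses the previous root's pointers but carries a new NodeID and TreeID.
def clone_snapshot_tree(prev_root: dict, new_node_id: str, new_tree_id: int) -> dict:
    return {
        "node_id": new_node_id,
        "tree_id": new_tree_id,
        "children": list(prev_root["children"]),  # same set of pointers as the previous root
    }

root_202 = {"node_id": "R1", "tree_id": 1, "children": ["I1", "I2"]}
root_204 = clone_snapshot_tree(root_202, "R2", 2)
print(root_204)  # {'node_id': 'R2', 'tree_id': 2, 'children': ['I1', 'I2']}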
In the example shown, the value “DATA4” has been modified to be “DATA4′.” In some embodiments, the value of a key value pair has been modified. For example, the value of “DATA4” may be a pointer to a file metadata tree corresponding to a first version of a virtual machine and the value of “DATA4′” may be a pointer to a file metadata tree corresponding to the second version of the virtual machine. In other embodiments, the value of the key pair is the data associated with a content file that is smaller than or equal to a limit size. In other embodiments, the value of the key value pair points to a different file metadata tree. The different file metadata tree may be a modified version of the file metadata tree that the leaf node previously pointed (e.g., a different version of a virtual machine container file). To modify a file system metadata snapshot tree, the file system manager starts at root node204because that is the root node associated with snapshot tree at time t=2 (i.e., the root node associated with the last backup snapshot). The value “DATA4” is associated with the data key “4.” The file system manager traverses tree data structure255from root node204until it reaches a target node, in this example, leaf node228. The file system manager compares the TreeID at each intermediate node and leaf node with the TreeID of the root node. In the event the TreeID of a node matches the TreeID of the root node, the file system manager proceeds to the next node. In the event the TreeID of a node does not match the TreeID of the root node, a shadow copy of the node with the non-matching TreeID is made. For example, to reach a leaf node with a data key of “4,” the file system manager begins at root node204and proceeds to intermediate node214. The file system manager compares the TreeID of intermediate node214with the TreeID of root node204, determines that the TreeID of intermediate node214does not match the TreeID of root node204, and creates a copy of intermediate node214. The intermediate node copy216includes the same set of pointers as intermediate node214, but includes a TreeID of “2” to match the TreeID of root node204. The file system manager updates a pointer of root node204to point to intermediate node216instead of pointing to intermediate node214. The file system manager traverses tree data structure255from intermediate node216to leaf node228, determines that the TreeID of leaf node228does not match the TreeID of root node204, and creates a copy of leaf node228. Leaf node copy232stores the modified value “DATA4′” and includes the same TreeID as root node204. The file system manager updates a pointer of intermediate node216to point to leaf node232instead of pointing to leaf node228. In some embodiments, leaf node232stores the value of a key value pair that has been modified. In other embodiments, leaf node232stores the modified data associated with a file that is smaller than or equal to a limit size. In other embodiments, leaf node232stores a pointer to a file metadata tree corresponding to a file, such as a virtual machine container file. FIG.2Dis a block diagram illustrating an embodiment of a modified snapshot tree. Tree data structure255shown inFIG.2Dillustrates a result of the modifications made to a snapshot tree as described with respect toFIG.2C. FIG.2Eis a block diagram illustrating an embodiment of a tree data structure at a particular moment in time. In the example shown, tree data structure280includes a snapshot tree at time t=3. 
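Assuming the TreeID comparison works as described above, the shadow-copy update can be sketched in Python. The Node dataclass and set_value helper are hypothetical names; the sketch shows that nodes shared with an earlier snapshot are copied and re-pointed rather than mutated, so the earlier view (root R1) still resolves to the old value after the update.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    node_id: str
    tree_id: int
    children: Dict[int, "Node"] = field(default_factory=dict)  # data key -> child node
    value: Optional[str] = None

def set_value(root: Node, data_key: int, new_value: str) -> None:
    # Walk from the root toward the leaf for data_key. Any node whose TreeID does
    # not match the root's TreeID is shadow-copied and the parent is re-pointed at
    # the copy, so nodes shared with earlier snapshots are never mutated.
    node = root
    while node.children:
        child = node.children[data_key]
        if child.tree_id != root.tree_id:
            child = Node(child.node_id + "'", root.tree_id, dict(child.children), child.value)
            node.children[data_key] = child
        node = child
    node.value = new_value

# Leaf "L4" is shared between snapshot 1 (root R1) and snapshot 2 (cloned root R2).
leaf = Node("L4", 1, value="DATA4")
i2 = Node("I2", 1, children={4: leaf})
r1 = Node("R1", 1, children={4: i2})
r2 = Node("R2", 2, children={4: i2})

set_value(r2, 4, "DATA4'")
print(r1.children[4].children[4].value)  # DATA4  (snapshot 1 is unchanged)
print(r2.children[4].children[4].value)  # DATA4' (snapshot 2 sees the modified value)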
The tree data structure allows a chain of snapshot trees to be linked together. Each time a backup snapshot is performed, a root node of the snapshot tree may be linked to one or more intermediate nodes associated with a previous snapshot tree. In the example shown, tree data structure280includes a file system metadata snapshot tree comprising root node206, intermediate nodes212,218, and leaf nodes222,224,226,230,234. Root node202is associated with a first backup snapshot, root node204is associated with a second backup snapshot, and root node206is associated with a third backup snapshot. The snapshot tree having root node206is a modified version of the snapshot tree having root node204(i.e., the value of "DATA4′" has been modified to be "DATA4″"). FIG.3Ais a block diagram illustrating an embodiment of a tree data structure. In some embodiments, tree data structure300may be created by a storage system, such as secondary storage system104, or a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. In the example shown, tree data structure300corresponds to a file and stores the metadata associated with the file. For example, tree data structure300may correspond to a virtual machine container file and may be used to store virtual machine file system metadata. A leaf node of a file system metadata snapshot tree, such as a leaf node of tree data structures200,250,255, may include a pointer to a tree data structure corresponding to a file, such as tree data structure300. A tree data structure corresponding to a file (i.e., a "file metadata tree") is a snapshot tree, but is used to organize the data blocks associated with a file that are stored on the secondary storage system or a cloud instantiation of the secondary storage system. Tree data structure300may be referred to as a "metadata structure" or a "snapshot structure." A tree data structure corresponding to a content file (e.g., virtual machine container file) at a particular point in time (e.g., a particular version) may be comprised of a root node, one or more levels of one or more intermediate nodes, and one or more leaf nodes. In some embodiments, a tree data structure corresponding to a content file is comprised of a root node and one or more leaf nodes without any intermediate nodes. Tree data structure300may be a snapshot of a content file at a particular point in time t, for example at time t=1. In the example shown, tree data structure300includes a file root node302, file intermediate nodes312,314, and file leaf nodes322,324,326,328,330. Although tree data structure300includes one intermediate level between root node302and leaf nodes322,324,326,328,330, any number of intermediate levels may be implemented. Similar to the file system metadata snapshot trees described above, each node includes a "NodeID" that identifies the node and a "TreeID" that identifies a snapshot/view with which the node is associated. In the example shown, root node302includes a pointer to intermediate node312and a pointer to intermediate node314. Root node302includes a NodeID of "FR1" and a TreeID of "1." In the example shown, intermediate node312includes a pointer to leaf node322, a pointer to leaf node324, and a pointer to leaf node326. Intermediate node312includes a NodeID of "FI1" and a TreeID of "1." Intermediate node312includes a first node key and a second node key. The data key k for leaf node322is a value that is less than or equal to the first node key.
The data key for leaf node324is a value that is greater than the first node key and less than or equal to the second node key. The data key for leaf node326is a value that is greater than the second node key. The pointer to leaf node322indicates that traversing tree data structure300from intermediate node312to leaf node322will lead to the node with a data key of “1.” The pointer to leaf node324indicates that traversing tree data structure300from intermediate node312to leaf node324will lead to the node with a data key of “2.” The pointer to leaf node326indicates that traversing tree data structure300from intermediate node312to leaf node326will lead to the node with a data key of “3.” In the example shown, intermediate node314includes a pointer to leaf node328and a pointer to leaf node330. Intermediate node314includes a NodeID of “FI2” and a TreeID of “1.” Intermediate node314includes a node key. The data key k for leaf node328is a value that is less than or equal to the node key. The data key for leaf node330is a value that is greater than the node key. The pointer to leaf node328indicates that traversing tree data structure300from intermediate node314to leaf node328will lead to the node with a data key of “4.” The pointer to leaf node330indicates that traversing tree data structure300from intermediate node314to leaf node330will lead the node with a data key of “5.” Leaf node322includes a data key-value pair of “1: Brick 1.” “Brick 1” is a brick identifier that identifies the data brick associated with one or more data chunks of a content file (e.g., virtual machine container file) corresponding to tree data structure300. Leaf node322includes NodeID of “FL1” and a TreeID of “1.” To view the value associated with a data key of “1,” tree data structure300is traversed from root node302to intermediate node312to leaf node322. Leaf node324includes a data key-value pair of “2: Brick 2.” “Brick 2” may be associated with one or more data chunks associated with a content file (e.g., virtual machine container file). Leaf node324includes NodeID of “FL2” and a TreeID of “1.” To view the value associated with a data key of “2,” tree data structure300is traversed from root node302to intermediate node312to leaf node324. Leaf node326includes a data key-value pair of “3: Brick 3.” “Brick 3” may be associated with one or more data chunks associated with a content file (e.g., virtual machine container file). Leaf node326includes NodeID of “FL3” and a TreeID of “1.” To view the value associated with a data key of “3,” tree data structure300is traversed from root node302to intermediate node312to leaf node326. Leaf node328includes a data key-value pair of “4: Brick 4.” “Brick 4” may be associated with one or more data chunks associated with a content file (e.g., virtual machine container file). Leaf node328includes NodeID of “FL4” and a TreeID of “1.” To view the value associated with a data key of “4,” tree data structure300is traversed from root node302to intermediate node314to leaf node328. Leaf node330includes a data key-value pair of “5: Brick 5.” “Brick 5” may be associated with one or more data chunks associated with a content file (e.g., virtual machine container file). Leaf node330includes NodeID of “FL5” and a TreeID of “1.” To view the value associated with a data key of “5,” tree data structure300is traversed from root node302to intermediate node314to leaf node330. A content file, such as a virtual machine container file, may be comprised of a plurality of data chunks. 
A brick may be associated with one or more data chunks. In the example shown, leaf nodes322,324,326,328,330each store a corresponding brick identifier. The location of the data chunks associated with a data brick may be identified using a table stored in a metadata store that matches brick numbers to chunk identifiers, or the location of the data brick may be identified based on the pointer to the data brick. A chunk file table may associate chunk identifiers (e.g., SHA-1) with a chunk file id. A chunk file is configured to store a plurality of data chunks. The chunk file table may associate a chunk identifier with an offset within a chunk file id. The one or more data chunks associated with a brick identifier may be determined based on a corresponding chunk identifier and a corresponding chunk file id. FIG.3Bis a block diagram illustrating an embodiment of adding a file metadata tree to a tree data structure. In some embodiments, tree data structure350may be created by a storage system, such as secondary storage system104or a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. The tree data structure corresponding to a file can be used to capture different versions of the file at different moments in time. When a backup snapshot or secondary storage snapshot is received, a root node of the file metadata tree may be linked to one or more intermediate nodes associated with a previous file metadata tree. This may occur when the file is included in both backup/secondary storage snapshots. In the example shown, tree data structure350includes a first file metadata tree comprising root node302, intermediate nodes312,314, and leaf nodes322,324,326,328, and330and a second file metadata tree comprised of root node304, intermediate nodes312,314, and leaf nodes322,324,326,328, and330. The second file metadata tree may correspond to a version of a file at a particular point in time, for example at time t=2. The first file metadata tree may correspond to a first version of a virtual machine container file and the second file metadata tree may correspond to a second version of the virtual machine container file. To create a snapshot of the file data at time t=2, a new root node is created. The new root node may be a clone of the original node and include the same set of pointers as the original node, but includes a different NodeID and a different TreeID. In the example shown, root node304includes a set of pointers to intermediate nodes312,314, which are intermediate nodes associated with a previous snapshot. In the example shown, root node304is a copy of root node302. Similar to root node302, root node304includes the same pointers as root node302. Root node304includes a NodeID of "FR2" and a TreeID of "2." FIG.3Cis a block diagram illustrating an embodiment of modifying a file metadata tree. In the example shown, tree data structure380may be modified by a file system manager, such as file system manager105or virtual file system manager125a. A file metadata tree with root node304may be a current view of the file data at a particular time, for example, at time t=2. In some embodiments, the file data of a content file may be modified such that one of the data chunks is replaced by another data chunk. When a data chunk of file data associated with a previous backup snapshot is replaced with a new data chunk, the data brick associated with the new data chunk may be different.
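The brick-to-chunk resolution described above can be sketched in Python with two lookup tables. The table layouts and names below are hypothetical, not an actual metadata-store schema; the sketch resolves a brick identifier to its chunk identifiers, and each chunk identifier to a chunk file id and an offset within that chunk file.

# Hypothetical metadata tables: a brick table mapping brick identifiers to chunk
# identifiers, and a chunk file table mapping chunk identifiers to the chunk file
# (and offset within it) that stores the chunk's data.
brick_table = {
    "Brick 1": ["chunk-a1"],
    "Brick 4": ["chunk-d1", "chunk-d2"],
}
chunk_file_table = {
    "chunk-a1": {"chunk_file_id": "cf-001", "offset": 0},
    "chunk-d1": {"chunk_file_id": "cf-007", "offset": 0},
    "chunk-d2": {"chunk_file_id": "cf-007", "offset": 65536},
}

def locate_brick(brick_id: str):
    # Resolve a brick identifier to the chunk file locations of its data chunks.
    return [
        (chunk_id, chunk_file_table[chunk_id]["chunk_file_id"], chunk_file_table[chunk_id]["offset"])
        for chunk_id in brick_table[brick_id]
    ]

print(locate_brick("Brick 4"))
# [('chunk-d1', 'cf-007', 0), ('chunk-d2', 'cf-007', 65536)]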
A leaf node of a file metadata tree may be configured to store a brick identifier of a brick associated with the new data chunk. To represent this modification to the file data, a corresponding modification is made to a current view of a file metadata tree. The data chunk of the file data that was replaced has a corresponding leaf node in the previous file metadata tree. A new leaf node in the current view of the file metadata tree is created, as described herein, that corresponds to the new data chunk. The new leaf node includes an identifier associated with the current view. The new leaf node may also store the chunk identifier associated with the modified data chunk. In the example shown, a data chunk associated with “Brick 4” has been modified. The data chunk associated with “Brick 4” has been replaced with a data chunk associated with “Brick 6.” In some embodiments, the data chunk associated with “Brick 6” includes a data chunk associated with a virtual machine container file. At t=2, the file system manager starts at root node304because that is the root node associated with the file metadata tree at time t=2. The value “Brick 4” is associated with the data key “4.” The file system manager traverses tree data structure380from root node304until it reaches a target node, in this example, leaf node328. The file system manager compares the TreeID at each intermediate node and leaf node with the TreeID of the root node. In the event the TreeID of a node matches the TreeID of the root node, the file system manager proceeds to the next node. In the event the TreeID of a node does not match the TreeID of the root node, a shadow copy of the node with the non-matching TreeID is made. For example, to reach a leaf node with a data key of “4,” the file system manager begins at root node304and proceeds to intermediate node314. The file system manager compares the TreeID of intermediate node314with the TreeID of root node304, determines that the TreeID of intermediate node314does not match the TreeID of root node304, and creates a copy of intermediate node314. The intermediate node copy316includes the same set of pointers as intermediate node314, but includes a TreeID of “2” to match the TreeID of root node304. The file system manager updates a pointer of root node304to point to intermediate node316instead of pointing to intermediate node314. The file system manager traverses tree data structure380from intermediate node316to leaf node328, determines that the TreeID of leaf node328does not match the TreeID of root node304, and creates a copy of leaf node328. Leaf node332is a copy of leaf node328, but stores the brick identifier “Brick 6” and includes the same TreeID as root node304. The file system manager updates a pointer of intermediate node316to point to leaf node332instead of pointing to leaf node328. FIG.3Dis a block diagram illustrating an embodiment of a modified file metadata tree. The file metadata tree380shown inFIG.3Dillustrates a result of the modifications made to file metadata tree380as described with respect toFIG.3C. FIG.4Ais a block diagram illustrating an embodiment of archive data. A backup snapshot is the state of a system at a particular moment in time. A backup snapshot may be stored locally at a storage system, such as secondary storage system104. A backup snapshot allows the state of a system to be rolled back to a moment in time for which a backup snapshot is stored. A system may store a large number of backup snapshots (e.g., thousands, millions). 
Each backup snapshot may require a significant amount of storage (e.g., GBs, TBs, PBs, etc.). In some embodiments, it may be desirable to archive a backup snapshot to a remote storage location, such as cloud object storage124a. For example, one or more older backup snapshots may be archived to cloud object storage124afor long-term retention, for data recovery purposes (e.g., a primary system virtual machine is offline and a secondary storage system storing a backup of the primary system virtual machine is also offline), to handle spikes in storage demand, etc. One or more backup snapshots that include cold data (i.e., data that is not accessed frequently) may be archived to cloud object storage to free up local storage for one or more snapshots that include hot data (i.e., data that is accessed frequently). The file system data associated with a backup snapshot may be archived from a secondary storage system to a remote storage location. An archive policy may indicate that a full snapshot archive of a backup snapshot or an incremental snapshot archive of the backup snapshot is to be performed and stored at the remote storage location. A full snapshot archive includes a complete view of a file system metadata snapshot tree at a particular moment in time. For example, a full snapshot archive associated with a backup snapshot at t=3, as depicted inFIG.2E, includes root node206, intermediate nodes212,218, and leaf nodes222,224,226,230, and234. An incremental snapshot archive includes a partial view of a file system metadata snapshot tree at a particular moment in time. An incremental snapshot archive may include a representation of what was not previously archived. For example, an incremental snapshot archive associated with a backup snapshot at t=3, as depicted inFIG.2E, includes root node206, intermediate node218, and leaf node234. The incremental snapshot archive associated with a backup snapshot at t=3 does not include root nodes202,204, intermediate node212, or leaf nodes222,224,226,228,230because those nodes were previously archived. A snapshot archive may be performed based on one or more policies associated with a backup storage system. For example, a full snapshot archive may be performed on a periodic basis (e.g., every X day(s), every Y week(s), every Z month(s), etc.), upon a threshold size of bytes changing from the previous full snapshot, after a threshold number of incremental snapshot archives have been performed, etc. A policy may indicate that an incremental snapshot archive is to be performed on a more frequent basis than a full snapshot archive. The full snapshot archive and incremental snapshot archives may be associated with a backup snapshot corresponding to a state of file system data at a particular moment in time. For example, archive data400is associated with the snapshot tree corresponding to a backup snapshot at time t=1, archive data450is associated with the snapshot tree corresponding to a backup snapshot at time t=2, and archive data480is associated with the snapshot tree corresponding to a backup snapshot at time t=3. As seen inFIGS.4A-4C, each snapshot archive builds off of a previous snapshot archive, that is, a block of serialized data includes a file offset to a block associated with previously serialized data. In the example shown, archive data400includes file system data451and serialized tree data461. In the example shown, archive data400is a file representation of a backup snapshot of the file system metadata snapshot tree at t=1.
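The policy triggers for choosing between a full and an incremental snapshot archive can be captured in a small Python helper. The ArchivePolicy fields and threshold values below are illustrative assumptions, not values from the document; the sketch simply fires a full archive when any configured trigger (elapsed time, bytes changed, or number of incrementals since the last full archive) is met.

from dataclasses import dataclass

@dataclass
class ArchivePolicy:
    full_every_n_days: int = 30
    full_after_bytes_changed: int = 100 * 2**30
    full_after_n_incrementals: int = 10

def choose_archive_type(policy: ArchivePolicy,
                        days_since_last_full: int,
                        bytes_changed_since_last_full: int,
                        incrementals_since_last_full: int) -> str:
    # Return 'full' when any full-archive trigger fires, otherwise 'incremental'.
    if (days_since_last_full >= policy.full_every_n_days
            or bytes_changed_since_last_full >= policy.full_after_bytes_changed
            or incrementals_since_last_full >= policy.full_after_n_incrementals):
        return "full"
    return "incremental"

policy = ArchivePolicy()
print(choose_archive_type(policy, days_since_last_full=3,
                          bytes_changed_since_last_full=2 * 2**30,
                          incrementals_since_last_full=11))  # 'full' (incremental count hit)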
Archive data400is configured to store a full backup snapshot of the snapshot tree corresponding to a backup snapshot at time t=1. A full snapshot archive may include a complete view of the nodes of the file system metadata snapshot tree at a particular moment in time (i.e., all nodes associated with a root node of the snapshot tree) and the data referenced or stored in each of the leaf nodes of the file system metadata snapshot tree. For example, a leaf node may include a pointer to a storage location of a value. A full snapshot archive is independent on its own and does not refer back to one or more previous snapshot archives. In the example shown, file system data451corresponds to data stored in the leaf nodes of the snapshot tree corresponding to a backup snapshot at time t=1. Since archive data400includes a full backup snapshot of the snapshot tree corresponding to the backup snapshot at t=1, file system data451includes the data stored in or referenced by leaf nodes222,224,226,228, and230inFIG.2A, that is, file system data451includes “DATA1,” “DATA2,” “DATA3,” “DATA4,” and “DATA5.” In some embodiments, the file system data is the data (e.g., data blocks of a file, data segments of a file) for a distributed file system. File system data may be stored as a flat set of data. In some embodiments, file system data451stores all the data blocks associated with leaf nodes of a snapshot tree. In some embodiments, file system data451stores a plurality of file data blocks in a single block of file system data451. In some embodiments, the file system data includes file system metadata, such as file size, directory structure, file permissions, physical storage locations of the files, etc. In other embodiments, blocks422,424,426,428,430include file offsets to a serialized file metadata tree that corresponds to a file metadata tree. A serialized file metadata tree is similar to a serialized file system metadata tree, but serializes the nodes associated with a file metadata tree into a flat set of data. A serialized tree data is configured to store the structure of the file system metadata snapshot tree associated with the file system data as a flat set of data that is comprised of one or more blocks. Each block of the flat set of data corresponds to a node of the snapshot tree. A block may contain a file offset. A file offset represents a pointer of a file system metadata snapshot tree. Because some archive systems cannot store pointers, a file offset is used in place of pointers. The file offset may be to another block of the serialized tree data. The file offset may be to another block of a different serialized tree data. In the example shown, serialized tree data461corresponds to a snapshot tree corresponding to a backup snapshot at time t=1. Serialized tree data461is comprised of a plurality of blocks. Each block corresponds to one of the snapshot tree nodes. For example, blocks422,424,426,428,430,412,414, and402correspond to nodes222,224,226,228,230,212,214, and202, respectively, of the file system metadata snapshot tree at t=1 inFIG.2A. Block402corresponds to root node202. Because root node202includes pointers to intermediate nodes212and214, block402includes file offsets to blocks412and414. Blocks412and414correspond to intermediate nodes212and214, respectively. Because intermediate node212includes pointers to leaf nodes222,224, and226, block412includes file offsets to blocks422,424, and426. The file offsets correspond to the pointers of a file system metadata snapshot tree. 
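A full snapshot archive of this kind can be sketched in Python by walking the tree and emitting one block per node, root first, then intermediate nodes, then leaf nodes, with every pointer replaced by a file offset. The nested-tuple tree, the block dictionaries, and the archive_full function below are illustrative assumptions rather than an actual on-disk format.

from collections import deque

# Hypothetical in-memory tree: ("root"/"intermediate", [children]) or ("leaf", value).
tree = ("root", [
    ("intermediate", [("leaf", "DATA1"), ("leaf", "DATA2")]),
    ("intermediate", [("leaf", "DATA3")]),
])

def archive_full(root):
    # First pass: assign a block offset to every node in breadth-first order,
    # so the root block comes first, then intermediate blocks, then leaf blocks.
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        if node[0] != "leaf":
            queue.extend(node[1])
    offsets = {id(node): i for i, node in enumerate(order)}

    # Second pass: emit blocks, replacing child pointers with file offsets and
    # placing leaf values into a separate flat set of file system data.
    blocks, file_system_data = [], []
    for node in order:
        if node[0] == "leaf":
            file_system_data.append(node[1])
            blocks.append({"kind": "leaf", "data_offset": len(file_system_data) - 1})
        else:
            blocks.append({"kind": node[0],
                           "child_offsets": [offsets[id(child)] for child in node[1]]})
    return blocks, file_system_data

blocks, data = archive_full(tree)
print(blocks)
print(data)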
Similarly, block414includes file offsets to blocks428,430because intermediate node214includes pointers to leaf nodes228,230. Blocks422,424,426,428, and430correspond to the leaf nodes of file system metadata snapshot tree200and each include a corresponding file offset to one or more blocks of the file system data stored in file system data451. For example, block422includes an offset to one or more blocks in file system data451that store the value of L1. Similarly, blocks424,426,428,430include corresponding offsets to one or more blocks in file system data451that store the value of L2, L3, L4, and L5, respectively. FIG.4Bis a block diagram illustrating an embodiment of archive data. In the example shown, archive data450may be archived by a storage system, such as secondary storage system104. In the example shown, archive data450includes file system data453and a serialized tree data463. In the example shown, file system data453is an incremental snapshot archive of a file system metadata snapshot tree at time t=2. An incremental snapshot archive may include changes to the data of a file system metadata snapshot tree since a last snapshot archive (e.g., new data or modified data). File system data453may be stored as a flat set of data. In some embodiments, file system data453stores all data blocks associated with leaf nodes of a snapshot tree that were not previously archived. In some embodiments, file system data453stores a plurality of file data blocks in a single block of file system data453. In some embodiments, the file system data includes file system metadata, such as file size, directory structure, file permissions, physical storage locations of the files, etc. Serialized tree data463is a serialized version of one or more nodes of the file system metadata snapshot tree corresponding to a backup snapshot at time t=2 and is represented as a flat set of data that is comprised of one or more blocks. Each block of the flat set of data corresponds to a node of the snapshot tree. Serialized tree data463includes a serialized representation of one or more changes to a file system metadata snapshot tree (e.g., new node, modified node, deleted node) since a previous backup snapshot. To determine whether a node should be included in a serialized tree data, a file system manager starts at the root node associated with a file system metadata snapshot view and traverses the file system metadata snapshot tree. At each node of the file system metadata snapshot tree, the file system manager determines whether that particular node existed in one or more previous file system metadata snapshot trees. In the event the node didn't exist in the previous file system metadata snapshot tree, a block corresponding to the node is included in serialized tree data. In the event the node is determined to have existed in one or more previous file system metadata snapshot trees, a block corresponding to the node is not included in the serialized tree data because a previous serialized tree data already includes a block corresponding to the node. Instead, a file offset to the block of the previous serialized tree data may be included in one or more of the blocks in the serialized tree data. For example, to create a snapshot corresponding to a backup snapshot at t=2, root node204was added. 
The snapshot tree corresponding to the backup snapshot at t=2 indicates that the value of "DATA4" has been modified to be "DATA4′." Intermediate node216and leaf node232were added to the snapshot tree to ensure that each node along this path has a TreeID of "2." In the example shown, serialized tree data463corresponds to the new nodes of the file system metadata snapshot tree corresponding to the backup snapshot at t=2. Each block of serialized tree data463corresponds to one of the nodes associated with the file system metadata snapshot tree corresponding to the backup snapshot at t=2. For example, blocks432,416,404correspond to nodes232,216,204, respectively. In the example shown, block404corresponds to root node204. Because root node204includes a pointer to intermediate node212, block404includes a file offset to block412of serialized tree data461. Previously stored serialized tree data461already includes block412that corresponds to intermediate node212. A file offset to a previously stored serialized tree data is used to save memory and prevent storing duplicative data. Root node204also includes a pointer to intermediate node216. Similarly, block404also includes a file offset to block416, which corresponds to intermediate node216. Intermediate node216includes pointers to leaf nodes230,232. The value of leaf node230has not changed and was previously stored in file system data451. To save memory and prevent storing duplicative data, block416includes a file offset to block430of serialized tree data461. Block416also includes a file offset to block432. Block432corresponds to leaf node232. Intermediate node216is a new node because tree data structure200did not include intermediate node216. Thus, serialized tree data463includes a block that corresponds to intermediate node216. Block432corresponds to leaf node232of tree data structure250. Leaf node232is a new node because tree data structure200did not include leaf node232. Thus, serialized tree data463includes a block that corresponds to leaf node232. Block432includes a file offset to one or more blocks in file system data453that store the value of leaf node232. FIG.4Cis a block diagram illustrating an embodiment of archive data. In the example shown, archive data480can be archived by a system, such as secondary storage system104. In the example shown, archive data480includes file system data455and a serialized tree data465. File system data455is an incremental snapshot archive of the file system data stored in or referenced by the one or more leaf nodes of a snapshot tree. For example, file system data455may include one or more values of the file system metadata snapshot tree corresponding to a backup snapshot at time t=3 that were not previously archived. File system data455may be stored as a flat set of data. In some embodiments, file system data455stores all data blocks associated with leaf nodes of a file system metadata snapshot tree that were not previously archived. In some embodiments, file system data455stores a plurality of file data blocks in a single block of file system data455. In some embodiments, the file system data includes file system metadata, such as file size, directory structure, file permissions, physical storage locations of the files, etc. Serialized tree data465is a serialized version of one or more nodes of the snapshot tree corresponding to a backup snapshot at time t=3 and is represented as a flat set of data that is comprised of one or more blocks.
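Under the assumption that each node carries a TreeID and that blocks written by earlier archives can be located by node, the incremental archiving decision described above can be sketched in Python. The node dictionaries, the prev_offsets map, and the archive_incremental function are illustrative names; the sketch emits blocks only for newly added nodes and records file offsets to previously archived blocks for everything else (file offsets into the file system data for leaf values are omitted for brevity).

def archive_incremental(node, current_tree_id, prev_offsets, blocks):
    # Return ("prev", offset) for a previously archived node, or ("new", offset)
    # for a block appended to the current incremental archive.
    if node["tree_id"] != current_tree_id:
        return ("prev", prev_offsets[node["node_id"]])
    child_refs = [archive_incremental(c, current_tree_id, prev_offsets, blocks)
                  for c in node.get("children", [])]
    blocks.append({"node_id": node["node_id"], "child_refs": child_refs})
    return ("new", len(blocks) - 1)

# Snapshot at t=2: R2 and L4' are new; I1 and its subtree were archived at t=1.
i1 = {"node_id": "I1", "tree_id": 1}
leaf_new = {"node_id": "L4'", "tree_id": 2}
root_t2 = {"node_id": "R2", "tree_id": 2, "children": [i1, leaf_new]}

prev_offsets = {"I1": 1}          # offsets of blocks written by the t=1 archive
blocks = []
archive_incremental(root_t2, current_tree_id=2, prev_offsets=prev_offsets, blocks=blocks)
print(blocks)
# [{'node_id': "L4'", 'child_refs': []},
#  {'node_id': 'R2', 'child_refs': [('prev', 1), ('new', 0)]}]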
To create a file system metadata snapshot tree corresponding to the backup snapshot at t=3, root node206was added. The file system metadata snapshot tree corresponding to the backup snapshot at t=3 indicates that the value of "DATA4′" has been modified to be "DATA4″." Intermediate node218and leaf node234were added to the file system metadata snapshot tree corresponding to a backup snapshot at t=3 to ensure that each node along this path has a TreeID of "3." In the example shown, serialized tree data465corresponds to the new nodes of the file system metadata snapshot tree corresponding to a third backup snapshot at time t=3. Each block of serialized tree data465corresponds to one of the nodes of the file system metadata snapshot tree corresponding to the backup snapshot at time t=3. For example, blocks434,418,406correspond to nodes234,218,206, respectively. Block406corresponds to root node206. Because root node206includes a pointer to intermediate node212, block406includes a file offset to block412of serialized tree data461. Root node206includes a pointer to intermediate node218. Similarly, block406includes a file offset to block418, which corresponds to intermediate node218. Intermediate node218includes pointers to leaf nodes230,234. The value of leaf node230has not changed and was previously stored in file system data451. To save memory and prevent storing duplicative data, block418includes a file offset to block430of serialized tree data461. Block418also includes a file offset to block434. Block434corresponds to leaf node234. Intermediate node218is a new node because tree data structure250did not include intermediate node218. Thus, archive data480includes a block that corresponds to intermediate node218. Block434corresponds to leaf node234of tree data structure280. Leaf node234is a new node because tree data structure250did not include leaf node234. Thus, archive data480includes a block that corresponds to leaf node234. Block434includes a file offset to a block of file system data455that stores the value of leaf node234. FIG.5is a flow chart illustrating an embodiment of a process for archiving data. In the example shown, process500may be implemented by a storage system, such as secondary storage system104. In some embodiments, process500is used to perform a full snapshot archive. In other embodiments, process500is used to perform an incremental snapshot archive. At502, it is determined that file system data is to be archived. A backup snapshot is the state of a system at a particular moment in time. A backup snapshot may be stored locally at a storage system, such as secondary storage system104. A backup snapshot allows the state of a system to be rolled back to a moment in time for which a snapshot is stored. A system may store a large number of backup snapshots (e.g., thousands, millions). Each backup snapshot may require a significant amount of storage (e.g., GBs, TBs, PBs, etc.). It may be desirable to archive a backup snapshot to a remote storage location, such as cloud object storage124a. The file system data associated with a backup snapshot may be archived to the remote storage location. An archive policy may indicate that a full snapshot archive of a snapshot or an incremental snapshot archive of the backup snapshot is to be performed and stored to the remote storage location. A full snapshot archive may include a complete view of one version of a file system metadata snapshot tree and one or more associated file metadata trees for a particular moment in time.
A full snapshot archive may include a block corresponding to a root node associated with the view at the particular moment in time and blocks corresponding to any intermediate nodes and/or leaf nodes associated with the root node of the file system metadata snapshot tree as well as blocks corresponding to the nodes associated with the one or more file metadata trees. An incremental snapshot archive includes a partial view of one version of a file system metadata snapshot tree and one or more associated file metadata trees for a particular moment in time. An incremental snapshot archive may include a block corresponding to a root node associated with the file system metadata snapshot tree and one or more blocks corresponding to nodes that were added for the backup snapshot. The one or more blocks may correspond to nodes of the file system metadata snapshot tree or a file metadata tree. At504, a file system metadata snapshot tree and one or more associated file metadata trees for a view are serialized into serialized tree data and file system data associated with the view is serialized into serialized file system data. Serializing the file system metadata snapshot tree and one or more file metadata trees into serialized tree data creates a flat set of data that represents a view corresponding to a backup snapshot. Serializing the file system data into serialized file system data creates a flat set of data that represents the file system data. The file system metadata snapshot tree and the file system data are serialized into flat sets of data because a remote location may be incapable of storing a tree data structure. The serialized tree data is comprised of one or more blocks. The serialized tree data is a representation of a file system metadata snapshot tree and one or more associated file metadata trees in block form. Each block of the serialized tree data corresponds to a node of a view of a backup snapshot. Instead of a node having one or more pointers to one or more other nodes, a block of the serialized tree may include one or more file offsets to one or more other blocks. The file offsets represent the pointers of a tree data structure. A block may include a file offset to another block in the serialized tree data. A block may include a file offset to another block in a previously serialized tree data. For example, a file system metadata snapshot tree node may include a pointer to a node associated with a previous file system metadata snapshot tree. A block that corresponds to the file system metadata snapshot tree node may include a file offset to the block of a previously serialized tree data block that corresponds to the node associated with the previous file system metadata snapshot tree. The file system metadata snapshot tree node may also include a pointer to a node associated with the current file system metadata snapshot tree. A block that corresponds to the file system metadata snapshot tree node may include a file offset to the block of the current serialized tree data that corresponds to the node associated with the current file system metadata snapshot tree. The serialized file system data, i.e., a flat set of data, is comprised of one or more blocks. Each block of the serialized file system data corresponds to a data block or data segment of the file system data. 
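One way to sketch the serialization of step 504, under the assumption of an in-memory node with node_id, children, and value attributes, is shown below. Nodes that already appear in a prior archive are represented by an offset into that archive rather than being written again; the helper is a simplified illustration, not the disclosed implementation.

# Simplified sketch of step 504: flatten a snapshot tree into blocks, replacing
# in-memory pointers with file offsets and reusing offsets into previously
# stored serialized tree data for unchanged nodes. Node attributes are assumed.
def serialize_tree(root, prior_offsets, archive_id):
    """Flatten a snapshot tree into a flat list of blocks with file offsets."""
    blocks, offsets = [], {}

    def emit(node):
        if node.node_id in prior_offsets:            # unchanged node: reuse old block
            return prior_offsets[node.node_id]
        if node.node_id in offsets:                  # already written in this archive
            return offsets[node.node_id]
        child_refs = [emit(child) for child in node.children]
        block = {
            "node_id": node.node_id,
            "child_offsets": child_refs,             # replaces in-memory pointers
            "value": getattr(node, "value", None),   # leaf nodes reference file data
        }
        ref = (archive_id, len(blocks))              # (archive, position) acts as the offset
        offsets[node.node_id] = ref
        blocks.append(block)
        return ref

    emit(root)
    return blocks   # flat set of data suitable for a remote location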
In some embodiments, a full backup snapshot is performed and the serialized tree data includes a plurality of blocks that correspond to the plurality of nodes of the tree data structure corresponding to the full backup snapshot. In other embodiments, an incremental backup snapshot is performed and the serialized tree data includes a plurality of blocks that correspond to the one or more nodes that have been added to a tree data structure since a previous backup snapshot. At506, the serialized tree data and serialized file system data are archived. The serialized tree data and serialized file system data may be archived to a remote location, such as cloud object storage124a. FIG.6is a flow chart illustrating an embodiment of a process for restoring data. In the example shown, process600may be performed by a cloud portal, such as cloud portal123a. At602, an indication that a secondary storage system is offline is received. A secondary storage system may be coupled to a primary system and configured to receive a backup snapshot from the primary system. In response to receiving the backup snapshot, the secondary storage system is configured to store and organize the one or more data blocks of the backup snapshot using a tree data structure. The secondary storage system may be configured to store a plurality of backup snapshots associated with the primary system and to archive one or more of the backup snapshots to cloud storage. A user associated with the primary system may send a request to the secondary storage system. The request may be a request to perform a backup snapshot to the secondary storage system, a request to restore one or more of the stored backup snapshots, a request to generate a cloud instance of a virtual machine backup, etc. The secondary storage system may be unable to satisfy the request. In the event the secondary storage system is unable to perform the request, the primary system may provide the user an error message indicating that the secondary storage system is unable to perform the request. In response to receiving the error message, a user associated with the primary system may log into the cloud portal to start the cloud instantiation process. In other embodiments, the secondary storage system may provide a heartbeat signal to the primary system. In the event the primary system does not receive the heartbeat signal within a threshold period, the primary system is configured to provide to a cloud portal an indication that the secondary storage system is offline, which causes the cloud portal to generate a cloud instantiation of the secondary storage system. At604, a cloud instantiation of the secondary storage system is generated. A secondary storage system is comprised of a plurality of storage nodes. Each storage node has a particular storage capacity. A cloud portal may provision resources for the cloud instantiation of the secondary storage system. The cloud instantiation of the secondary storage system may correspond to a virtual secondary storage cluster. The virtual secondary storage cluster may be configured to have the same storage capacity as the secondary storage system. For example, a secondary storage system may be comprised of three physical storage nodes, each physical storage node having a storage capacity of 10 TB. The cloud instantiation of the secondary storage system may be comprised of three virtual cloud instances, each virtual cloud instance having a storage capacity of 10 TB. 
In other embodiments, the virtual secondary storage cluster is configured to have more storage capacity than the secondary storage system. In other embodiments, the virtual secondary storage cluster is configured to have less storage capacity than the secondary storage system. The cloud instantiation of the secondary storage system may be configured for the public cloud (e.g., Amazon Web Services, Microsoft Azure, Google Cloud, etc.) in which the cloud instantiation will reside. A user may specify the public cloud in which the cloud instantiation will reside. In other embodiments, the virtual secondary storage cluster may be configured to have a user-specified storage capacity. For example, the user may request to have 50 TBs of storage. Each virtual cloud instance may be configured to have a default storage capacity (e.g., 10 TB). In other embodiments, the cloud instantiation of the secondary storage system is configured to have a default storage capacity (e.g., a virtual secondary storage cluster comprised of three virtual cloud instances, each having a storage capacity of 10 TB). At606, a user is authenticated. A user associated with the cloud instantiation of the secondary storage system may log into a user interface of the cloud instantiation. A cloud object storage is configured to store a plurality of snapshot archives associated with a plurality of enterprises. An enterprise may be associated with one or more data centers. Each data center may have a corresponding secondary storage system. The corresponding secondary storage systems may be configured to archive corresponding backup snapshots to cloud object storage. A user associated with the enterprise may be permitted to access a snapshot archive and request a snapshot archive to be restored to one of the one or more data centers associated with the enterprise. In other embodiments, the user is associated with only one of the enterprise's data centers. The user may be permitted to access snapshot archives specific to that particular data center and restore to a primary system of the particular data center or the secondary storage system of the particular data center, the snapshot archives specific to that particular data center. The cloud portal may be configured to request the user to provide a credential that indicates the user is permitted to access the one or more snapshot archives associated with an enterprise. The user's credential may be linked to a subset of the plurality of snapshot archives. For example, the credential of the user associated with a first enterprise is linked to the snapshot archives associated with the first enterprise and the credential of the user associated with a second enterprise is linked to the snapshot archives associated with the second enterprise. Upon authenticating the user, the user may have access to any of the snapshot archives included in the subset of snapshot archives. At608, an indication of an external target is received. The external target may correspond to a user destination system that will receive the data associated with a snapshot archive. The user destination system may correspond to a primary system of a data center, a secondary storage system of the data center, or a cloud deployment server. The archive data associated with a secondary storage system may be encrypted. The indication may include a key to decrypt the archive data. At610, the cloud retrieve process is started. 
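The provisioning described in block604might be sketched as follows; the default per-instance capacity and the helper name are illustrative assumptions rather than the disclosed implementation.

# Hedged sketch of block 604: size the virtual secondary storage cluster to the
# requested (or matching) capacity using a default per-instance capacity.
import math

DEFAULT_INSTANCE_CAPACITY_TB = 10   # assumed default storage capacity per virtual cloud instance

def provision_virtual_cluster(requested_capacity_tb):
    """Return virtual cloud instance sizes whose total meets the requested capacity."""
    count = math.ceil(requested_capacity_tb / DEFAULT_INSTANCE_CAPACITY_TB)
    return [DEFAULT_INSTANCE_CAPACITY_TB] * count

# A 3 x 10 TB physical cluster maps to three 10 TB virtual cloud instances;
# a 50 TB user-specified request maps to five.
assert provision_virtual_cluster(30) == [10, 10, 10]
assert provision_virtual_cluster(50) == [10, 10, 10, 10, 10]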
A list of one or more snapshot archives available to be restored may be presented to a user via a cloud user interface. A request for one or more snapshot archives is received from the user. The request may include an associated destination for the file system data associated with the selected snapshot archive. The request may specify which secondary storage system the user desires to restore (e.g., an enterprise may be associated with a plurality of secondary storage systems), which snapshot archives to restore, a date range associated with a snapshot archive to restore, and a format for the file system data associated with the snapshot archive. The request may specify one or more snapshot archives needed to restore a particular version of a virtual machine. One or more secondary storage clusters may be virtually rebuilt in the cloud instantiation of the secondary storage system using the one or more snapshot archives requested by the user. Virtually rebuilding a secondary storage cluster includes reconstituting a tree data structure based on the one or more requested snapshot archives. A snapshot archive may correspond to a backup snapshot that was stored on the secondary storage system and archived to cloud object storage. In other embodiments, the snapshot archive corresponds to a backup snapshot that is not stored on the secondary storage system (e.g., the backup snapshot was stored on the secondary storage system past a retention period, archived to cloud storage, and removed from the secondary storage system). In other embodiments, the snapshot archive includes data associated with a particular version of a virtual machine. The request for one or more snapshot archives may be for the entire snapshot archive or a portion of the snapshot archive. For example, a user may request to restore an entire snapshot archive to restore the primary system to a particular moment in time. The user may request to restore a portion of the snapshot archive to restore one or more files that are included in the snapshot archive. For example, the user may request to restore a virtual machine container file that is included in one or more snapshot archives. A snapshot archive is comprised of serialized file system data and serialized tree data. The cloud instantiation of the secondary storage system is configured to reconstitute a snapshot tree associated with the snapshot archive by deserializing the serialized file system data and the serialized tree data. The cloud instantiation of the secondary storage system is configured to store the deserialized file system data and the deserialized tree data across the virtual cloud instances (e.g., the file system data is stored in the cloud instantiation of the secondary storage system). At612, the requested data is provided to the external target. In some embodiments, the cloud instantiation of the secondary storage system is configured to provide all of the file system data associated with the snapshot archive. In other embodiments, the cloud instantiation of the secondary storage system is configured to provide a portion of the file system data associated with the snapshot archive. For example, a subset of the files (e.g., a particular virtual machine container file) included in the snapshot archive may be requested. The cloud instantiation of the secondary storage system is configured to traverse the reconstituted snapshot tree and to locate the file system data associated with the requested subset of files. 
Upon location, the cloud instantiation of the secondary storage system may provide the requested data to the primary system associated with the user or to another location, such as a cloud deployment server. The cloud instantiation of the secondary storage system may be configured to convert the virtual machine included in the snapshot archive from a first virtual machine format to a second virtual machine format that is compatible with the cloud environment in which a cloud deployment server is to be deployed, and deploy the cloud instance of the virtual machine to the cloud deployment server. FIG.7is a flow chart illustrating an embodiment of a process for restoring archived data. In the example shown, process700may be performed by a storage system, such as a cloud instantiation122aof secondary storage system104. Process700may be implemented to perform some or all of steps610,612of process600. At702, a request for one or more snapshot archives may be received. A primary system may be configured to send a backup snapshot comprising primary system file system data to a secondary storage system. The backup snapshot is comprised of a plurality of data blocks. In response to receiving the backup snapshot, the secondary storage system may be configured to store the data blocks associated with the backup snapshot and to organize the file system data using a tree data structure, e.g., a snapshot tree. The secondary storage system (e.g., a secondary storage cluster) may be configured to archive a snapshot tree to a remote storage location, such as cloud object storage. A snapshot archive may include serialized file system data and serialized tree data. In some embodiments, the request for one or more snapshot archives is for a snapshot archive that corresponds to an incremental snapshot archive. For example, a user may desire to restore one or more files associated with a backup snapshot without having to restore all of the file system data associated with a backup snapshot. In other embodiments, the request for one or more snapshot archives is for a snapshot archive that corresponds to a full snapshot archive. For example, a user may desire to restore the file system of a primary system or other system to a state associated with a full backup snapshot. In other embodiments, the request for one or more snapshot archives is a snapshot archive that corresponds to an incremental snapshot archive and one or more other snapshot archives. For example, a user may desire to restore a version of a virtual machine container file. The data associated with the version of the virtual machine container file may be stored in a plurality of snapshot archives. In some embodiments, a request for one snapshot archive causes one or more other snapshot archives associated with the requested snapshot archive to be requested. For example, a snapshot archive that includes a portion of a virtual machine container file is requested, but the data associated with other portions of the virtual machine container file are stored across a plurality of snapshot archives. The one or more other snapshot archives are requested to generate a complete version of the virtual machine container file. At704, the one or more requested snapshot archives are retrieved from cloud object storage. A snapshot archive is comprised of serialized file system data and serialized tree data. In some embodiments, an incremental snapshot archive is retrieved. In some embodiments, a full snapshot archive is retrieved. 
In some embodiments, a full snapshot archive and one or more incremental snapshot archives are retrieved. At706, a tree data structure associated with the one or more retrieved snapshot archives is reconstituted. A virtual file manager of the cloud instantiation may virtually rebuild one or more secondary storage systems by reconstituting a tree data structure by deserializing serialized tree data associated with a retrieved snapshot archive. In other embodiments, the tree data structure is reconstituted by deserializing serialized tree data associated with a plurality of snapshot archives. Reconstituting the structure of a tree data structure includes reading the flat set of data associated with the serialized tree data. The flat set of data may include blocks of data that correspond to nodes of a tree data structure and associated file offsets that correspond to pointers of the tree data structure. For example, for a request associated with snapshot archive480, the complete tree structure at t=3 may be reproduced based on serialized tree data465,463,461. The virtual file system manager of a cloud instantiation may deserialize the serialized tree data. Root node206may be reproduced because serialized tree data465includes a block406that corresponds to root node206of the tree data structure, which includes offsets to blocks associated with intermediate nodes212,218. Intermediate node212may be reproduced because block406includes an offset to block412, which corresponds to intermediate node212. The data associated with intermediate node212may be determined from serialized tree data461. Intermediate node218may be reproduced because block406includes an offset to block418, which corresponds to intermediate node218. The data associated with intermediate node218may be determined from serialized tree data465. Leaf node234may be reproduced because block418includes an offset to block434, which corresponds to leaf node234. The value associated with leaf node234may be accessed and reproduced because block434includes an offset to one or more blocks of data stored in file system data455. Leaf nodes222,224,226may be reproduced because block406, which corresponds to root node206, includes an offset to block412of serialized tree data461. Block412of serialized tree data461corresponds to intermediate node212. Block412includes an offset to blocks422,424,426, which correspond to leaf nodes222,224,226, respectively. The corresponding values associated with leaf nodes222,224,226may be accessed and reproduced because blocks422,424,426include file offsets to one or more blocks of data stored in file system data451. Leaf node230may be reproduced because block418of serialized tree data465includes an offset to block430of serialized tree data461. Block430of serialized tree data461corresponds to leaf node230. The value associated with leaf node230may be accessed and reproduced because block430includes an offset to one or more blocks of data stored in file system data451. In some embodiments, a partial tree data structure is reproduced by deserializing one or more serialized tree data. For example, for the request of a value associated with a data key of “4” at time t=3, a portion of tree data structure280may be reproduced based on serialized tree data465. 
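The reconstitution walkthrough above, which follows file offsets across serialized tree data465,463,461, might be sketched as follows. The load_block helper, which reads the block at a given offset of a given archive, is an assumption for illustration; a partial reconstitution would follow only the offsets on the path to the requested data key.

# Hedged sketch of reconstituting a tree from serialized tree data by resolving
# file offsets, including offsets that point into previously stored archives.
# load_block(archive_id, offset) is an assumed helper, not part of the disclosure.
def reconstitute(archive_id, root_offset, load_block):
    """Rebuild a tree from serialized tree data, resolving cross-archive offsets."""
    def build(ref):
        a_id, offset = ref
        block = load_block(a_id, offset)
        node = {"node_id": block["node_id"], "value": block.get("value"), "children": []}
        for child_ref in block.get("child_offsets", []):
            node["children"].append(build(child_ref))   # may reach into archive 461
        return node
    return build((archive_id, root_offset))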
As seen inFIG.2E, leaf node234has a data key-value pair of “4: DATA4″” and a TreeID of “3.” Because a TreeID of “3” is associated with a file system metadata snapshot tree view at t=3, the value stored in leaf node234, as opposed to the values stored in leaf nodes228,232, is the value of a data key “4” at t=3. Although serialized tree data465includes file offsets to serialized tree data463,461, serialized tree data461,463do not need to be deserialized because the requested value may be determined without deserializing those files. In some embodiments, a subset of the serialized tree data needed to produce the entire snapshot is deserialized to determine the value for a data key at the particular time. At708, the reproduced tree data structure is traversed to locate the data associated with a user request. A user may request to restore an entire snapshot archive to restore the primary system to a particular moment in time or the user may request to restore a portion of the snapshot archive to restore one or more files that are included in the snapshot archive. For example, the user may request to restore a version of a virtual machine container file that is included in one or more snapshot archives. The reproduced tree is traversed based on the one or more data keys associated with the request. For example, for a request for a value associated with a data key of “4” at time t=3, reproduced tree data structure380may be traversed from reproduced root node306to reproduced intermediate node318to reproduced leaf node334. At710, the requested data is retrieved from the cloud instantiation of the secondary storage system and provided. For example, for a request for a value associated with a data key of “4” at time t=3, a value of “DATA4″” may be retrieved from the file system data stored in the virtual cloud instances of the cloud instantiation and provided. In some embodiments, all of the file system data associated with the reproduced file system metadata snapshot tree is provided. In other embodiments, a portion of the file system data associated with the reproduced file system metadata snapshot tree is provided. The cloud instantiation of the secondary storage system may be configured to convert a virtual machine that is included in one or more snapshot archives to a format that is compatible with the cloud environment in which the cloud deployment server is to be deployed, and deploy the cloud instance of the virtual machine to the cloud deployment server. FIG.8is a flow chart illustrating an embodiment of a process for deploying a cloud instance of a virtual machine. In the example shown, process800may be performed in part by a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. At802, an instruction to generate a cloud instantiation of the secondary storage system is provided. The cloud instantiation of the secondary storage system may be hosted on a cloud server. The cloud server may receive from a cloud portal an instruction to generate a cloud instantiation of a secondary storage system. The cloud server may provide the instruction to an agent running on the cloud server to generate the cloud instantiation of the secondary storage system. A secondary storage system is comprised of one or more secondary storage clusters. Each node of the secondary storage cluster has a particular storage capacity. A cloud portal may provision resources for the cloud instantiation of the secondary storage system. 
The cloud instantiation of the secondary storage system may correspond to a virtual secondary storage cluster. The virtual secondary storage cluster may be configured to have the same storage capacity as the secondary storage system. The virtual secondary storage cluster may be comprised of a plurality of virtual cloud instances, each virtual cloud instance having a particular storage capacity. In other embodiments, the virtual secondary storage cluster has a storage capacity less than the storage capacity of the secondary storage system. In other embodiments, the virtual secondary storage cluster has a storage capacity greater than the storage capacity of the secondary storage system. The cloud instantiation of the secondary storage system may be configured for the cloud environment (e.g., Amazon Web Services, Microsoft Azure, Google Cloud, etc.) in which the cloud instantiation will reside. A user may specify the cloud environment in which the cloud instantiation will reside. In some embodiments, the cloud instantiation of the secondary storage system is automatically generated when the secondary storage system initially comes online. In other embodiments, the cloud instantiation of the secondary storage system is generated in response to a user request. The request to generate a cloud instantiation of a secondary storage system may be received from a user while the secondary storage system is online. The cloud instantiation of the secondary storage system may be generated as a preventive measure in the event the secondary storage system goes offline. In other embodiments, the cloud instantiation of the secondary storage system generated after the secondary storage system is offline. In some embodiments, the cloud instantiation of the secondary storage system acts as a backup for the secondary storage system. The cloud instantiation of the secondary storage system may enable a copy of the data stored by the secondary storage system to be accessed while the secondary storage system is offline. In other embodiments, a primary system may be configured to directly send one or more backup snapshots to a cloud instantiation of a secondary storage system without an on-prem secondary storage system. At804, one or more secondary storage clusters of the secondary storage system are rebuilt in the cloud instantiation of the secondary storage system. In some embodiments, the one or more secondary storage clusters of secondary storage system may be rebuilt by building a tree data structure based on one or more snapshot archives received from a cloud object storage. A snapshot archive is comprised of serialized file system data and serialized tree data. The cloud instantiation of the secondary storage system is configured to reconstitute a tree data structure by deserializing the serialized tree data. In other embodiments, the one or more secondary storage clusters of a secondary storage system may be rebuilt by building a tree data structure based on the file system data included in a secondary storage snapshot. The secondary storage system may provide to the cloud instantiation of the secondary storage system one or more secondary backup snapshots. A secondary backup snapshot may be a replica of a backup snapshot received from a primary system. An initial secondary storage snapshot may include data that provides a complete view of the file system data associated with a primary system corresponding to a particular moment in time. 
The initial secondary storage snapshot may be a clone of a tree data structure generated by the secondary storage system. At806, a new cloud instance of a user virtual machine is deployed based on at least a portion of data stored in the rebuilt secondary storage clusters of the secondary storage system. The rebuilt tree data structure may include a file metadata tree corresponding to a virtual machine container file. The data associated with the user virtual machine may be located by traversing the rebuilt tree data structure to the leaf nodes associated with the file metadata tree corresponding to the virtual machine container file corresponding to the user virtual machine. The data associated with the user virtual machine file may be associated with a virtual machine format (e.g., VMware) that is not compatible with a virtual machine format associated with a cloud environment in which the cloud instance of the user virtual machine is to be deployed. The user virtual machine file may be converted to the virtual machine format associated with the cloud environment in which the cloud instance of the user virtual machine is to be deployed. The new cloud instance of the user virtual machine may then be deployed to a cloud deployment server hosted in the cloud environment. FIG.9is a flow chart illustrating an embodiment of a process for deploying a user virtual machine. In the example shown, process900may be implemented by a cloud deployment server, such as cloud deployment server126a. Process900may be implemented to perform some or all of806of process800. At902, a cloud instantiation of a user virtual machine is maintained in a standby mode. A cloud deployment server may be used to maintain the cloud instantiation of the user virtual machine in the standby mode. The cloud instantiation of the user virtual machine is maintained in the standby mode as a backup in case the user virtual machine hosted on a primary system goes offline. In some embodiments, a cloud instantiation of the user virtual machine is generated according to a backup policy. The backup policy may include a schedule that indicates a frequency at which a cloud instantiation of the virtual machine is to be generated. For example, the cloud instantiation of the user virtual machine may be generated each time the primary system performs a backup snapshot that includes data associated with a version of a user virtual machine to a secondary storage system, on a periodic basis (e.g., hourly, daily, weekly, etc.) or when an amount of data associated with a user virtual machine has changed more than a change threshold amount. At904, a version of the user virtual machine in a production system is determined to be unavailable. For example, a user associated with the user virtual machine hosted on a primary system may provide to a cloud deployment server an indication that the production system is offline. In other embodiments, the production system (i.e., the primary system hosting the user virtual machine) is configured to provide a heartbeat signal to the cloud deployment server hosting the cloud instantiation of the user virtual machine. In the event the cloud instantiation of the user virtual machine does not receive the heartbeat signal within a threshold period of time, the user virtual machine in the production system is determined to be offline. 
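The heartbeat check of block904might be sketched as follows; the threshold value and class name are illustrative assumptions rather than the disclosed implementation.

# Minimal sketch of block 904: the production system is treated as offline when
# no heartbeat has arrived within the threshold period, and the standby cloud
# instance is then switched to active mode (block 906). Names and the threshold
# are assumptions.
import time

HEARTBEAT_THRESHOLD_SECONDS = 60   # assumed threshold period

class StandbyVirtualMachine:
    def __init__(self):
        self.mode = "standby"
        self.last_heartbeat = time.monotonic()

    def record_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check_deploy_condition(self):
        offline = time.monotonic() - self.last_heartbeat > HEARTBEAT_THRESHOLD_SECONDS
        if offline and self.mode == "standby":
            self.mode = "active"          # deploy: standby -> active mode
        return self.mode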
In other embodiments, a user associated with the cloud instantiation of the user virtual machine may provide an indication that a secondary storage system configured to back up the user virtual machine is offline. In other embodiments, the secondary storage system is configured to provide a heartbeat signal to the cloud instantiation of the user virtual machine. In the event the cloud instantiation of the user virtual machine does not receive the heartbeat signal within a threshold period of time, the secondary storage system is determined to be offline. At906, the cloud instantiation of the user virtual machine is deployed. The cloud instance of the virtual machine may be maintained in a standby mode in a cloud environment until a deploy condition has been satisfied. Deploying the cloud instantiation of the user virtual machine includes changing a mode of the cloud instance of the user virtual machine from a standby mode to an active mode. For example, a user virtual machine hosted on the primary system (production system) may go offline or the primary system may go offline. In the event the deploy condition has been satisfied, the cloud instance of the virtual machine is deployed (i.e., turned on) and ready to be used by a user associated with the user virtual machine within a short period of time (e.g., minutes). In other embodiments, the secondary storage system is determined to be offline and the cloud instantiation of the user virtual machine is deployed (e.g., turned on) in response to determining the secondary storage system to be offline. This may ensure that a copy of a production system virtual machine is ready to be deployed in the event the user virtual machine in the production system goes offline while the secondary storage system is also offline. FIG.10Ais a flow chart illustrating an embodiment of a process for rebuilding and maintaining a cloud instantiation of a secondary storage system. In the example shown, process1000may be performed by a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. In some embodiments, process1000is implemented to perform some or all of step804of process800. At1002, archived data is received. The archived data may be a snapshot archive retrieved from cloud object storage. A snapshot archive is a serialized data file comprised of serialized file system data and serialized tree data. At1004, the archived data is deserialized. The cloud instantiation of the secondary storage system may be configured to reconstitute a tree data structure associated with the archived data by deserializing the serialized data file. Deserializing is a process by which a flat set of data is read to reconstitute a tree data structure. The cloud instantiation of the secondary storage system is configured to store the file system data and the deserialized tree data across the virtual cloud instances (e.g., the file system data is stored in the cloud instantiation of the secondary storage system). At1006, a tree data structure is generated or updated based on the deserialized archived data. The tree data structure may provide a partial or complete view of the file system data corresponding to a snapshot archive. FIG.10Bis a flow chart illustrating an embodiment of a process for rebuilding and maintaining a cloud instantiation of a secondary storage system. 
In the example shown, process1050may be performed by a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. In some embodiments, process1050is implemented to perform some or all of step804of process800. At1052, replication data is received. The replication data, i.e., a secondary storage snapshot, may be a replica of a backup snapshot that is received at a secondary storage system from a primary system. At1054, a tree data structure is generated or updated based on the replication data. The tree data structure may provide a partial or complete view of the file system data corresponding to the replication data. The view of the file system data corresponding to the replication data may be comprised of a file system metadata snapshot tree and one or more file metadata trees. FIG.11is a flow chart illustrating an embodiment of a process for deploying a user virtual machine. In the example shown, process1100may be performed by a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. In some embodiments, process1100is implemented to perform some or all of step612of process600. In some embodiments, process1100is implemented to perform some or all of step806of process800. At1102, user virtual machine data is obtained. In some embodiments, user virtual machine data is obtained at a secondary storage system from a primary system hosting a virtual machine. In other embodiments, user virtual machine data is obtained at a cloud instantiation of a secondary storage system from a primary system hosting a virtual machine. In other embodiments, user virtual machine data is obtained at a cloud instantiation of a secondary storage system from a cloud object storage storing an archived version of the user virtual machine. At1104, user virtual machine data is converted to a virtual environment of a cloud deployment, if applicable. The user virtual machine data may be associated with a first virtual machine format (e.g., VMware). The first virtual machine format may not be compatible with a virtual machine format associated with the virtual environment of a cloud deployment. The user virtual machine data may be converted from the first virtual machine format into a virtual machine format that is compatible with the virtual environment of the cloud deployment (e.g., Amazon Web Services, Microsoft Azure, Google Cloud, etc.). At1106, the converted user virtual machine data is provided to the cloud deployment system for deployment. The converted user virtual machine data may be provided to a cloud deployment server hosted in a cloud environment. FIG.12is a flow chart illustrating an embodiment of a process for tearing down a cloud instance of a user virtual machine. In the example shown, process1200may be performed by a cloud instantiation of a secondary storage system, such as cloud instantiation122aof secondary storage system104. At1202, a cloud instance of a user virtual machine is backed up to a cloud instantiation of a secondary storage system. A datacenter comprising a primary system that hosts the user virtual machine and a secondary storage system may be offline. The cloud instance of the user virtual machine may be deployed while the primary system and/or the secondary storage system is offline. In some embodiments, the cloud instance of the user virtual machine is deployed and configured to back up its data to the cloud instantiation of the secondary storage system. 
For example, the cloud instance of the user virtual machine may be configured to perform a backup snapshot of its file system data and to send the backup snapshot to the cloud instantiation of the secondary storage system. At1204, an indication is received that the primary system hosting the user virtual machine or the secondary storage system is online. For example, a user associated with the primary system or a user associated with a secondary storage system may provide the indication. In other embodiments, the cloud instantiation of the secondary storage system may receive a heartbeat signal from the primary system or from the secondary storage system. At1206, one or more snapshot trees are cloned. The one or more snapshot trees may correspond to one or more backup snapshots received from the cloud instance of the user virtual machine while the secondary storage system is offline. The one or more snapshot trees may be cloned by copying a corresponding root node associated with the one or more snapshot trees. The corresponding root node copy includes the same set of pointers as a copied root node, but may include a different nodeID and view identifier. At1208, data associated with the one or more cloned snapshot trees is converted. The data associated with the one or more cloned snapshot trees may include data of a cloud virtual machine. A format of the cloud virtual machine may be different than a format of a virtual machine in a datacenter. The data of the cloud virtual format may be converted into a format of the primary system virtual machine. For example, the cloud virtual machine may have an associated disk with one or more associated volumes. The data included in the volumes may be converted into one or more virtual machine files in a format associated with the primary machine virtual machine. In some embodiments, information associated with the virtual machine is unknown. The cloud virtual machine may be converted into a virtual machine format associated with the primary system, but include the same or a different number of disks, and include the same number of volumes as the cloud virtual machine. For example, the cloud virtual machine may include two disks and four volumes and the primary system virtual machine may include two disks and four volumes. In another example, the cloud virtual machine may include four disks and four volumes. The primary system virtual machine may include two disks and four volumes. Other configurations that may be different between the cloud virtual machine and the primary system virtual machine may include the number of cores, memory size, network interface card speed, and/or IP address. At1210, the converted data is provided. In some embodiments, the converted data is provided to the primary system hosting the user virtual machine. In response to receiving the converted data, the primary system may be configured to restore the user virtual machine. In other embodiments, the converted data is provided to the secondary storage system. In response to receiving the converted data, the secondary storage system may update its own tree data structures based on the converted data. The secondary storage system may then be used to restore the user virtual machine running on the primary system. At1212, an indication is received that the system receiving the data is up-to-date. 
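Blocks1206and1208might be sketched as follows: a snapshot tree is cloned by copying its root node (same pointers, new node ID and view identifier), and cloud virtual machine volume data is repackaged into files in the format of the primary system virtual machine. The helper names and the vmdk target format are illustrative assumptions, not the disclosed implementation.

# Illustrative sketch of blocks 1206-1208 under the assumptions stated above.
import copy
import itertools

_id_counter = itertools.count(1000)

def clone_snapshot_tree(root):
    """Clone by copying the root node; child pointers are shared, not copied."""
    new_root = copy.copy(root)                 # shallow copy keeps the same set of pointers
    new_root["node_id"] = f"clone-{next(_id_counter)}"
    new_root["view_id"] = f"view-{next(_id_counter)}"
    return new_root

def convert_cloud_vm(volumes, target_format="vmdk"):
    """Repackage cloud VM volume data into files in the primary system's VM format."""
    return [{"format": target_format, "name": f"disk{i}.{target_format}", "data": v}
            for i, v in enumerate(volumes)]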
The cloud instantiation of the secondary storage system may receive from the secondary storage system a notification that the secondary storage system is finished updating its tree data structure based on the converted data. In other embodiments, the cloud instantiation of the secondary storage system receives from the primary system hosting the user virtual machine a notification that the user virtual machine hosted on the primary system is up-to-date. At1214, the cloud instance of the user virtual machine is torn down. FIG.13is a flow chart illustrating an embodiment of a process for updating a secondary storage system. In the example shown, process1300may be performed by a secondary storage system, such as secondary storage system104. At1302, the data associated with one or more cloned snapshot trees is received. The data associated with one or more cloned snapshot trees may include the file system data included in one or more backup snapshots received by a cloud instantiation of a secondary storage system while the secondary storage system was offline. At1304, one or more tree data structures are updated based on the received data associated with one or more cloned snapshot trees. The data associated with one or more cloned snapshot trees may include file system data. The secondary storage system may organize the file system data of a backup snapshot using a tree data structure. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
129,524
11861393
DETAILED DESCRIPTION FIG.1illustrates a system diagram100that includes an automated assistant104capable of invoking multiple different agent modules in response to a command from a user. The automated assistant104can be accessed by a user through a client device102that is connected to a remote device116, such as a server device, which can host the automated assistant104. The automated assistant104can receive textual or audio inputs from the client device102and interpret the inputs for performing actions to assist the user. The automated assistant104can use a voice to text engine106for converting audio inputs into text or other medium that can be further processed by the automated assistant104. The automated assistant104can further include a text parser engine108that can process textual input, or text converted from an audio input, and convert the input into instructions for execution by the automated assistant104and/or one or more agent modules. In some implementations, the text parser engine108can determine whether an input corresponds to a multitask command. When the text parser engine108determines that an input corresponds to a multitask command, an agent selection engine110can be employed to identify the agent modules that should be invoked for completing the multiple tasks involved in executing the command. An agent module can be an application that is accessible to the client device102over a network and associated with a native application on the client device102, or a website accessible to the client device102. In some implementations, the agent module can be a third party application that is provided by an entity that is different than an entity that provides an operating system the client device102or other software on the client device102. Alternatively, the agent module can be a first party application that is provided by the entity that also provides the operating system or other software for the client device102. The automated assistant104can access an index that correlates agent modules to various functions, and use the index to determine the agent modules that are suitable for completing the multitask command provided by the user. The agent modules can be managed by separate remote servers and the automated assistant104can access the remote servers over a network130. When the automated assistant104identifies the agent modules suitable for completing the multitask command, an agent interaction engine112can delegate tasks to each identified agent module. The automated assistant104can invoke each agent module to perform one or more tasks of the delegated tasks by transmitting a signal over the network130to each server device that hosts an agent module. For example, the automated assistant104can access a first server118, a second server120, and an Nth server122that each host a first agent module124, a second agent module126, and an Nth agent module128, respectively. Depending on the multitask command provided to the automated assistant104from the user, the agent interaction engine112can delegate tasks to the agent modules in a series or in parallel. For example, the agent interaction engine112can provide tasks in series by first delegating a first task to the first agent module124. In response to the first agent module124providing an output to the automated assistant104, the automated assistant104can provide a second task to the second agent module126. 
Thereafter, in response to the second agent module providing an output to the automated assistant104, the automated assistant104can provide an Nth task to the Nth agent module128. This process can continue until each task of the multiple tasks corresponding to the input command from the user is complete. The agent interaction engine112can delegate tasks in parallel by simultaneously assigning multiple tasks to multiple different agent modules (e.g., the first agent module124and the second agent module126). For example, a multitask command provided by a user can be parsed to determine the specific tasks for completing the multitask command, and at least two of the tasks can be simultaneously delegated to separate agent modules. In some implementations, outputs can be received from the separate agent modules and used to delegate another task to another agent module. In some implementations, output can be provided from one or more of the agent modules and processed by the agent interaction engine112. The output can correspond to a task completion indicator that includes information related to the task, or a request for more information for completing the task. For example, an agent module that has been delegated a task can query the automated assistant104to obtain additional information, and the automated assistant104can determine whether to obtain the additional information from the user or a separate agent module. If the automated assistant104determines that the additional information should be obtained from the user, the automated assistant104can cause a request to be provided at an automated assistant interface114of the client device102. The request can be an audible output or a textual output that queries the user for the additional information. When the user provides the additional information to the automated assistant104, the automated assistant104can treat the additional information as an input that is processed and thereafter provided to the agent module that requested the information, and/or any other agent module that might need the information. In some implementations, the agent module can receive an input from the automated assistant or another agent module, and invoke a separate agent module with parameters for completing a particular subtask. In this way, the agent module can at least temporarily “steer” an interaction, with the automated assistant acting as an intermediary. In some implementations, the automated assistant104can determine that a separate agent module is more suitable for providing the additional information to the agent module requesting the additional information. In such instances, the agent selection engine110can identify the agent module that is most suitable for obtaining the additional information from. The identified agent module can then be queried by the automated assistant104for the additional information, and cause the identified agent module to either transmit the additional information to the automated assistant104and/or the requesting agent module. FIG.2illustrates a method200for providing a command to an automated assistant that causes the automated assistant to invoke multiple different agent modules to perform different tasks for fulfilling the command. The method200can be performed by a client device, server device, and/or any other apparatus suitable for controlling an automated assistant. 
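The series and parallel delegation performed by the agent interaction engine112might be sketched as follows; modeling agent modules as asynchronous callables that accept a task and a context is an assumption for illustration only.

# Hedged sketch of series versus parallel task delegation to agent modules.
import asyncio

async def delegate_in_series(tasks, agents):
    outputs = []
    for task, agent in zip(tasks, agents):
        # later tasks can depend on outputs of earlier agent modules
        outputs.append(await agent(task, context=outputs))
    return outputs

async def delegate_in_parallel(tasks, agents):
    # all agent modules are invoked simultaneously
    return await asyncio.gather(*(agent(task, context=None) for task, agent in zip(tasks, agents)))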
The method200can include a block202of determining that content of a natural language input provided to an automated assistant interface includes a multitask command. For example, the command can be a request for ordering ingredients, such as, “Assistant, please order the ingredients for my pad thai recipe.” The command can be parsed by the automated assistant and the phrase “order the ingredients” can be identified as being a multitask command. The multitask command can be a command that is configured by the user at an automated assistant interface, or configured by the automated assistant based on a previous interaction between the user and the automated assistant. In some implementations, that multitask command can be preconfigured based on interactions between another user and an automated assistant, or multiple users and multiple automated assistants. For example, the automated assistant can access historical interactions between another user and another automated assistant to identify a multitask command that may be of interest to the user. The automated assistant can then associate the multitask command with the user and allow the user to invoke the automated assistant to perform various tasks in receiving the multitask command. In response to receiving the multitask command, the automated assistant can, at block204, identify one or more agent modules suitable for performing the multiple tasks associated with the multitask command. For example, the multitask command of “order the ingredients” can correlate to a spice ordering agent module, a produce ordering agent module, and a restaurant agent module. Each of the agent modules can be identified in an index that is accessible to the automated assistant and includes correlations between multitask commands and agent modules. The automated assistant can manage the index and add multitask commands whenever the user elects to have a group of commands to be stored as a single multitask command understood by the automated assistant. At block206, at least one agent module can be invoked for performing at least one task of the multiple tasks. For example, the spice ordering agent, which can be associated with a website for ordering spices, can be invoked by the automated assistant and queried to identify food ingredients that are available through the spice ordering agent module. The agent module can respond to the automated assistant with an indication that certain ingredients (e.g., vinegar and soy sauce) are available, and the automated assistant can respond with a request to order the ingredients. The ordering of these ingredients can mark the completion of at least some of the multiple tasks, and the automated assistant can then, at block208, identify any remaining tasks for completion. If there are no remaining tasks, then, at block210, the automated assistant can provide an output indicating that the multitask command has been completed by the automated assistant (e.g., “Your pad thai ingredients have been ordered”). If there are remaining tasks of the multitask command to be completed, then block206can be repeated and another agent module can be invoked for performing one or more of the remaining tasks of the multitask command. For example, the automated assistant can determine that there are remaining ingredients to be ordered (e.g., basil, rice noodles, peanuts, etc.), and invoke the produce ordering agent module for ordering the remaining ingredients. 
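Blocks204through210might be sketched as the following loop, in which an index maps the multitask command to candidate agent modules and each invocation removes the tasks that the agent fulfilled. The index contents and the invoke_agent interface are illustrative assumptions rather than the disclosed implementation.

# Simplified sketch of blocks 204-210: look up agents in an index, invoke them,
# and repeat until no tasks remain.
AGENT_INDEX = {
    "order the ingredients": ["spice_agent", "produce_agent", "restaurant_agent"],
}

def handle_multitask_command(command, tasks, invoke_agent):
    remaining = set(tasks)
    for agent_name in AGENT_INDEX.get(command, []):
        if not remaining:
            break
        completed = invoke_agent(agent_name, remaining)   # returns the tasks the agent fulfilled
        remaining -= set(completed)
    if remaining:
        return f"Could not complete: {sorted(remaining)}"
    return "Your order has been placed."   # block 210 confirmation output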
This can be a series invocation process where agent modules are invoked one after the other. In some implementations, each agent module can be invoked simultaneously, and the responses from the agent modules can be used to determine how to continue interacting with the user and/or agent modules. For example, each of the spice ordering agent module, the produce ordering agent module, and the restaurant agent module can be simultaneously invoked and tasked with reporting whether they can provide all the ingredients. If the agent modules can collaboratively provide all the pad thai ingredients, then each agent module can be tasked with providing certain ingredients. However, if one or more tasks (e.g., ingredient orders) cannot be fulfilled by the agent module, the automated assistant can either query the user about how to proceed, or dynamically alter the tasks. Tasks can be dynamically altered by the automated assistant in response to feedback from an agent module and/or the user. For example, if at least one of the ingredients is not available to the agent modules, the automated assistant can alter the tasks from ordering the individual ingredients to ordering a carryout order from the restaurant agent module. This decision by the automated assistant can be preconfigured by the user, or be based on past activities of the user (e.g., the user previously attempted to order pad thai ingredients, via the automated assistant, but then defaulted to ordering pad thai carry out). It should be noted that in some implementations and/or situations, task altering and delegation is performed in the background by the automated assistant and/or any agent modules invoked by the automated assistant. In this way, the user may only provide the command, “Assistant, order pad thai ingredients,” and receive an output from the automated assistant such as “Ok, the ingredients are ordered,” or “Ok, I ordered you pad thai carry out because ingredients were not available.” This saves the user from having to recite each ingredient and prevents the automated assistant from having to process multiple different commands for each ingredient, thereby conserving computational resources. FIG.3illustrates a method300for dynamically modifying tasks to be delegated to one or more agent modules based on feedback received from a user or an agent module. The method300can be performed by a client device, server device, and/or any other apparatus suitable for controlling an automated assistant. The method300can include a block302of determining that content of a natural language input provided to an automated assistant includes a multitask command. For example, the natural language input can be a spoken phrase from the user such as “Assistant, please plan a night out with my friends.” The automated assistant can convert the natural language input into text and identify a multitask command within the text. The multitask command (e.g., “plan a night out with my friends”) can be a command that was preconfigured collaboratively by the user and the automated assistant. At block304, the automated assistant can identify agent modules suitable for performing the multiple tasks associated with the multitask command. The agent modules can be applications loaded onto a client device associated with the user, or otherwise accessible to the automated assistant over a network (e.g., the internet). Each of the agent modules can be associated with a task to be performed for completing the multitask command. 
For example, the multitask command, “plan a night out with my friends,” can be associated with a social network agent module, a calendar agent module, and/or a restaurant agent module. The social network agent module can be associated with at least a task of identifying friends of the user; the calendar agent module can be associated with at least a task of identifying when the friends are free; and the restaurant agent module can be associated with at least a task of identifying restaurants to go out to. At block306, at least one agent module of the agent modules can be invoked for performing at least one task of the multiple tasks. For example, the social network agent module can be invoked and the automated assistant can use the agent module to identify friends to invite to the night out being planned by the automated assistant. The automated assistant can query the social network agent module regarding, for example, how many friends of the user live within the same city as the user. The agent module can provide, in response, a list of friends of the user that live in the same city as the user. At block308, a determination is made whether the output received from the agent module is feedback. If the output (e.g., the list of friends) is not feedback, then the method300can proceed to block318. At block318, a determination is made whether there are other tasks to be performed to complete the multitask command. If there are no other tasks to be completed, then the method300can proceed to block320where an output is provided, by the automated assistant, to the user confirming that the task was completed. However, if other tasks remain to be completed, then block306can be repeated. Block306can be repeated for performing another task (e.g., identifying when the friends are free) in the multiple tasks associated with the multitask (e.g., plan a night out with friends) command provided by the user. For example, when the calendar agent module performs the task of identifying when friends are free, the calendar agent module can provide feedback indicating that all friends but one friend are available during an upcoming weekend. At block308, a determination is made that feedback was provided from the agent module (e.g., the calendar agent module). The feedback can be provided to the automated assistant and the automated assistant can determine whether a response should be provided to the agent module from the user or another agent. For example, when the calendar agent module communicates to the automated assistant that one friend from the group of friends identified by the social network agent module is not free, the automated assistant can, at block312, query the user regarding the feedback. Specifically, the automated assistant can query the user regarding whether it is ok to proceed with planning the night out without including the friend that is not available. Thereafter, at block314, a response can be received from the user. The user can indicate in the response that it is not okay to proceed without inviting the friend and, at block316, an agent module (e.g., the calendar agent module) can be identified for performing a task associated with the user response. For example, the automated assistant can receive the response from the user and provide a supplemental task to the calendar agent module for identifying a time when at least the unavailable friend would be free. Should the calendar agent module provide an output that corresponds to feedback, then block310can be repeated. 
Otherwise, the method300can proceed to block318to determine whether other tasks are to be performed. If no other tasks are to be performed, then, at block320, the output can be provided by the automated assistant to confirm the completion of the command. If there are other tasks (e.g., using the restaurant agent module to identify restaurants to go to), the method300can proceed to block306. At block306, a restaurant reservation can be made for the friends on the date provided by the calendar agent module. Thereafter, the method300can proceed to block308. If no other feedback is provided and no other tasks are to be performed, the method300can terminate at block320, where output is provided to the user confirming the completion of the command. In some implementations, method300enables the user and/or an agent module to provide feedback to the automated assistant during the execution of the multitask command. Feedback can be provided from an agent module to the automated assistant, and the automated assistant can provide a response back to the same or a separate agent module. Alternatively, the feedback can be provided from an agent module to the automated assistant, and the automated assistant can query the user for a response, which can be provided back to the same agent module or a separate agent module. In this way, the user does not have to personally identify each suitable agent module to the automated assistant and/or individually control each agent module. Rather, these steps can be performed by the automated assistant, which can preserve computational resources given that less voice to text processing is necessary when the user is providing fewer commands. FIG.4illustrates a method400for configuring a multitask command for invoking multiple different agent modules via an automated assistant. The method400can be performed by a client device, server device, and/or any other apparatus suitable for controlling an automated assistant. The method400can include a block402of identifying multiple different natural language commands received by at least one automated assistant interface. For example, the natural language commands can be spoken or textual commands such as "reserve a table at a nearby restaurant," "find a place to get drinks after dinner," and "send an invitation to my girlfriend." Each of these natural language inputs can be associated with a specific task that is undertaken by the automated assistant, which can delegate each task to a suitable agent module. At block404, a query is provided to a user regarding whether to associate the multiple different natural language commands with a multitask command. The multitask command can be a natural language input, such as an audible or textual word or phrase, that can be provided to the automated assistant interface for performing multiple different tasks. The multitask command can be provided by the user or generated by the automated assistant. For example, the user can be operating a graphical user interface (GUI) corresponding to the automated assistant interface, and type in each of the multiple different natural language commands. The automated assistant can provide a query to the user regarding whether the user would like to associate the multiple different natural language commands with a multitask command, which can also be provided by the user at the GUI. The multitask command can also be configured through a verbal interaction between the automated assistant and the user.
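A minimal, hypothetical sketch of blocks402and404might look like the following, using one of the commonality signals discussed in the example that follows (commands provided within a threshold time of one another). The class names, helper names, and threshold value are illustrative assumptions, not details from the specification.

# Hypothetical sketch: group recently received commands by time proximity and
# propose associating each group with a single multitask command (blocks 402/404).
from dataclasses import dataclass

@dataclass
class LoggedCommand:
    text: str
    timestamp: float  # seconds since epoch

THRESHOLD_SECONDS = 15 * 60  # illustrative commonality signal: commands given close in time

def group_related_commands(commands):
    """Block 402: cluster commands whose timestamps fall within a threshold of the previous one."""
    groups, current = [], []
    for cmd in sorted(commands, key=lambda c: c.timestamp):
        if current and cmd.timestamp - current[-1].timestamp > THRESHOLD_SECONDS:
            groups.append(current)
            current = []
        current.append(cmd)
    if current:
        groups.append(current)
    return groups

def propose_multitask_command(group):
    """Block 404: ask the user whether to associate the grouped commands with one multitask command."""
    listing = "; ".join(c.text for c in group)
    return f"Would you like to associate these commands with a single multitask command? ({listing})"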
For example, over the course of a week, the user can provide a variety of different natural language commands associated with a date night that the user is planning. The automated assistant can identify a commonality between the different natural language commands and, in response, provide the query to the user regarding associating the different natural language commands with a multitask command. The commonality can be content of the natural language commands (e.g., mentioning a date night in each command), a time or location associated with the natural language commands (e.g., mentioning the event time or location each command), a time or location associated with the user when providing the commands (e.g., each Monday after work the user plans the date night), and/or any other commonality that can be associated with natural language commands. For example, the commonality can be that each natural language command was provided within a threshold time of each other. Alternatively, the commonality can be that all the natural language commands were provided and resolved within a total threshold time period. At block406, a response can be received from the user confirming that the multiple different natural language commands should be associated with a multitask command. The user can provide such confirmation through the GUI (e.g., by typing in the multitask command “plan a date night”), or through a spoken command to the automated assistant. For example, the user can communicate the multitask command to the automated assistant by saying, “Assistant, please associate the date night tasks with the command: ‘plan a date night.’” In response, at block408, the automated assistant can determine agent modules, associated with the multiple different natural language commands, to be invoked to perform tasks in response to receiving the multitask command. For example, previously the user may have provided the command “reserve a table at a nearby restaurant.” The command can be processed by the automated assistant and converted into a task that is delegated to a restaurant agent module. In a similar manner, the automated assistant can compile a list of tasks from the multiple different natural language commands. Thereafter, at block410, the automated assistant can store identifiers for the agent modules and/or tasks in association with the multitask command (e.g., “plan a date night”). In this way, the user is able to invoke, via the automated assistant, multiple agent modules to perform different tasks. This can streamline various interactions between the user and the automated assistant, thereby saving the user time as well as conserving computational resources available to the automated assistant. FIG.5provides a diagram500that illustrates an example of a user502invoking an automated assistant with a multitask command associated with multiple different agent modules. Specifically, diagram500illustrates an example of a user502requesting that the automated assistant plan a business trip using the multitask command “Assistant, plan my business trip.” The multitask command can be provided as a spoken user input508to a client device, such as a mobile device504or an assistant device506, and the client device can transmit, over a network512, the spoken user input508to a remote server that hosts an automated assistant application. 
The automated assistant application can determine that the phrase "plan my business trip" corresponds to a multitask command and identify the agent modules associated with completing the multitask command. The automated assistant can access a storage that includes an index providing a correlation between multitask commands and agent modules available for completing the multitask command. For example, the index can include an entry that identifies the multitask command "plan my business trip" and corresponding entries that identify the agent modules that can be employed to complete subtasks of the multitask command. The agent modules can include a calendar agent module516, a rental car agent module520, and a hotel agent module524. The automated assistant can further identify, from the index, the tasks involved with completing the multitask command. Such tasks can include: providing details of the business trip in a calendar managed by the user, reserving a rental car, and booking a hotel. In some implementations, each task can be delegated in parallel, series, or a combination thereof to each of the agent modules. For example, the automated assistant can communicate with a first remote server514for delegating the task of finding the details of the business trip using the calendar agent module. In response to receiving the details of the business trip from the calendar agent module516, the automated assistant can delegate the tasks of reserving the rental car and booking the hotel. Specifically, the automated assistant can communicate with a second remote server518for delegating the task of reserving the rental car to the rental car agent module520, and communicate with a third remote server522for delegating the task of booking the hotel. Each of the tasks performed by the rental car agent module520and the hotel agent module524can be performed concurrently in order to conserve time. In some implementations, the automated assistant can collect information from one agent module and provide the information to another agent module. For example, the calendar agent module can complete the task of providing details of the business trip, and provide the details to the automated assistant. The automated assistant can parse the details and identify the details that would be relevant to the remaining tasks. The details can include a destination for the business trip and dates for the business trip. When the automated assistant delegates the tasks to the rental car agent module520and the hotel agent module524, the automated assistant can include the location and the dates. In this way, the user502does not have to be queried to provide such details, and the automated assistant can preserve computational resources by not having to process unnecessary natural language inputs from the user502. In some implementations, the automated assistant can use environmental data, such as a current location of the user502, to modify tasks to be delegated to the agent modules. For example, the automated assistant can determine a distance between a current location of the user502and the destination for the business trip. The rental car agent module520can receive the distance information from the automated assistant and query the automated assistant regarding whether the user would like to reserve an electric car because the distance is below a particular threshold. The automated assistant can, in response, generate a query as an output510for the user502(e.g., Would you like to rent an electric car?).
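The delegation pattern just described, in which the calendar details are obtained first and then passed to the remaining agent modules that run concurrently, might be sketched roughly as follows. The agent functions here are stand-ins for the agent modules516,520, and524, and every name in the sketch is hypothetical.

# Hypothetical sketch of the FIG. 5 delegation pattern: one serial step, then two concurrent steps.
from concurrent.futures import ThreadPoolExecutor

def calendar_agent(task):
    # Stand-in: returns the trip details the downstream agents need.
    return {"destination": "Springfield", "dates": ("2017-06-01", "2017-06-03")}

def rental_car_agent(task, details):
    return f"car reserved in {details['destination']} for {details['dates']}"

def hotel_agent(task, details):
    return f"hotel booked in {details['destination']} for {details['dates']}"

def plan_business_trip():
    # Serial step: obtain trip details first, since the later tasks depend on them.
    details = calendar_agent("provide details of the business trip")
    # Parallel step: reserve the car and book the hotel concurrently to conserve time.
    with ThreadPoolExecutor(max_workers=2) as pool:
        car = pool.submit(rental_car_agent, "reserve a rental car", details)
        hotel = pool.submit(hotel_agent, "book a hotel", details)
        return car.result(), hotel.result()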
Alternatively, the automated assistant can pass the query from the rental car agent module520to the user, thereby allowing the automated assistant to act as an intermediary between the user and the rental car agent module520. If the user502provides a response confirming the electric car reservation (e.g., “Yes, please.”), the automated assistant can communicate to the rental car agent module520that the user502would like an electric car. The rental car agent module520can then reserve a first type of electric car for the user502to drive to the destination for the business trip. In some implementations, feedback from an agent module can be provided and used by the automated assistant to determine whether a previously performed task should be repeated. For example, the automated assistant can communicate to the hotel agent module524that the user502has booked the first type of electric car. The first type of electric car can include a charging receptacle that is not supported by a charging station at a hotel being booked by the hotel agent module524. In response to the hotel agent module524determining this incompatibility, the hotel agent module524can provide an indication to the automated assistant104identifying the first type of electric car as one that is not supported by the chargers at the hotel. In response, the automated assistant104can delegate a supplemental task to the rental car agent module520for modifying the reservation to reserve a second type of electric car that is supported by the charging stations at the hotel. In response to the rental car agent module520reserving the second type of electric car, the automated assistant can direct the hotel agent module524to book the hotel and provide an output510to the user502indicating that the trip has been booked. This process allows for resolutions of conflicts between agents to be performed by the agent module with little or no interaction with the user502. In this way, the user502is able to perform other actions while the automated assistant coordinates completion of tasks in the background. FIG.6is a block diagram600of an example computer system610. Computer system610typically includes at least one processor614which communicates with a number of peripheral devices via bus subsystem612. These peripheral devices may include a storage subsystem624, including, for example, a memory subsystem625and a file storage subsystem626, user interface output devices620, user interface input devices622, and a network interface subsystem616. The input and output devices allow user interaction with computer system610. Network interface subsystem616provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems. User interface input devices622may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system610or onto a communication network. User interface output devices620may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. 
The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system610to the user or to another machine or computer system. Storage subsystem624stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem624may include the logic to perform selected aspects of methods200,300, and/or400, and/or to implement one or more of the automated assistant104, voice to text engine106, text parser engine108, agent selection engine110, agent interaction engine112, client device, server device, remote device, and/or any other apparatus or process discussed herein. These software modules are generally executed by processor614alone or in combination with other processors. Memory625used in the storage subsystem624can include a number of memories including a main random access memory (RAM)630for storage of instructions and data during program execution and a read only memory (ROM)632in which fixed instructions are stored. A file storage subsystem626can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem626in the storage subsystem624, or in other machines accessible by the processor(s)614. Bus subsystem612provides a mechanism for letting the various components and subsystems of computer system610communicate with each other as intended. Although bus subsystem612is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses. Computer system610can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system610depicted inFIG.6is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system610are possible having more or fewer components than the computer system depicted inFIG.6. In situations in which the systems described herein collect personal information about users (or as often referred to herein, “participants”), or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. Also, certain data may be treated in one or more ways before it is stored or used, so that personal identifiable information is removed. 
For example, a user's identity may be treated so that no personal identifiable information can be determined for the user, or a user's geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and/or used. While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
36,431
11861394
In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. DETAILED DESCRIPTION The ensuing description provides preferred exemplary embodiment(s) only, and is not intended to limit the scope, applicability or configuration of the disclosure. Rather, the ensuing description of the preferred exemplary embodiment(s) will provide those skilled in the art with an enabling description for implementing a preferred exemplary embodiment. It is understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope as set forth in the appended claims. The automated semantic tagging system monitors execution of threads within a processing environment and tags additional data to the record of the execution to generate process specifications that can recreate the state of a process at a point in which the thread of the process executed. A configuration layer enables configuration of objects, threads, and processes that can be monitored during execution of the application. An interface between an application class and the configuration layer may enable detection of the objects, threads, or processes during execution. Detecting execution of a monitored thread, for instance, triggers generation of a process specification that encapsulates the relationships between the thread, the object that called the thread, the process within which the thread is executing, and a thread definition of the thread that indicates the design-time properties of the thread. The process specification may be stored locally or remotely and used to refine the application during or after runtime. Each process may be represented by multiple process specifications with one process specification corresponding to each thread of the process. This can allow for tracing the state of the process through the entire execution of the process. In some instances, the process may be replayed in a simulation that reproduces the exact functionality of the particular process when it executed, including any particularities of that particular execution such as errors, faults, resource leaks, cycles, or the like. The computing device may step through each thread of the process to identify the root cause of the error, fault, resource leak, cycle, or the like (e.g., the particular thread, the particular execution conditions, particular instructions, or the like). In some instances, the process specification may be used to modify the processes of the application either during runtime as the processes are executing or before a subsequent execution. For instance, the root cause of particular functionality (e.g., errors, execution time, resource use, or the like) may be used to modify threads of a process prior to the threads' subsequent execution in order to reduce or eliminate the functionality. The thread definitions may be modified by adding resource constraints to the allocation of resources, modifying loops, modifying conditional branches, adding exception handling, modifying network targets to redirect requests of the threads to different computing devices, or the like.
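One minimal way to picture the relationships a process specification encapsulates (the thread, the object that called it, the root process, and the design-time thread definition) is as a small data model. The field names below are illustrative assumptions, not terms defined by the specification.

# Hypothetical data model for a process specification and its related thread definition.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ThreadDefinition:             # design-time properties of a thread
    definition_id: str
    instructions: List[str]
    resource_types: Dict[str, int]  # resource type -> expected quantity

@dataclass
class ProcessSpecification:         # state of the process at the point a thread executed
    process_id: str                 # root process within which the thread ran
    object_id: str                  # object that called the thread
    thread_id: str                  # the executed thread instance
    definition: ThreadDefinition    # the definition the thread was instantiated from
    resource_values: Dict[str, int] = field(default_factory=dict)  # quantities actually consumed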
These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative aspects but, like the illustrative aspects, should not be used to limit the present disclosure. FIG.1illustrates a block diagram of a semantic tagging system100according to at least one aspect of the disclosure. Semantic tagging system100may execute one or more processes of one or more software applications. In some instances, at least one software application of the one or more software applications may be a distributed software application that executes on one or more computing devices or thread execution cores. A software application may include one or more processes (e.g., components) that provide the functionality of the application. The one or more processes may execute in series (e.g., one process at a time), concurrently (e.g., such as one process executing in parallel with another process), or a combination thereof. When executing in parallel, the processes may execute synchronously or asynchronously. A process may include one or more threads (e.g., a set of instructions that provide a function such as a transaction) that execute to provide an intermediate function of the process. Each thread can be executed independently from other threads. Like processes, threads can be scheduled to execute in series, concurrently, or a combination thereof. Some threads may be called by an object of the application layer. For instance, objects can include one or more thread definitions (e.g., activities), and each thread definition can be processed to generate an instance of a thread. The thread definition may include, for example, one or more instructions executed by the thread, identification of one or more other threads, resource types necessary to execute the thread, a root process within which the thread can execute, combinations thereof, and the like. In some instances, processing the thread definition to generate an instance of the thread can include compiling or interpreting the set of instructions of the thread. A process can be executed by one or more processors on one computing device or across multiple computing devices (e.g., in a distributed processing system). For instance, computing device104may execute one or more applications. Executing each application may include generating one or more processes, which may include generating one or more threads. Applications124may be stored in persistent memory in a compiled or uncompiled state. When executed, the one or more sets of instructions may be executed by processor108to spawn a process. As the process executes, instructions may execute to generate one or more threads that can be executed by processor108to provide the function of the process. Memory116may also include instructions120that can include instructions that are independent from applications124, such as an operating system, firmware, or the like, or instructions that facilitate execution of at least one application, such as hardware drivers, interfaces, previously executed processes, threads, or objects, or the like. In some instances, an application may use a process specification of process specifications128to modify a process of an application prior to spawning the process.
For instance, computing device104may identify an error such as a missed branch or an infinite loop that may waste the resources of computing device104and semantic tagging system100. A process specification may indicate the state of a thread of a process. Since threads are typically stateless, a particular thread may not indicate the cause of the error. A thread specification may provide the state of the process at the point in which the thread executed to provide an indication as to the cause of the error or wasted resource. Instructions120may include one or more routines for analyzing process specifications such as by tracing the threads that come before or after the particular thread of the thread specification. The one or more routines may additionally, and automatically, execute to modify the process to eliminate redundant threads; eliminate cycles such as infinite loops; or reduce resource consumption such as processor cycles, memory use, network bandwidth, or the like. Process specifications128may include process specifications that were generated from threads of processes previously executed by processor108. In some instances, process specifications128may include additional process specifications received over network148from other computing devices152. Computing devices152may have a same, similar, or different structure from computing device104. In some instances, process specifications may be analyzed using one or more machine-learning models132. For instance, one or more thread specifications may be packaged into a feature set that can be input into a machine-learning model to derive characteristics of processes that may be less prone to errors or characteristics of processes that may execute with fewer resources of computing device104or semantic tagging system100. A feature set may be defined using sets of process specifications over a time interval. In some instances, a feature set may include each process specification of a particular process. In other instances, a feature set may include process specifications generated over a time interval such as process specifications generated from multiple threads across one or more processes. This may be advantageous to capture errors in a process that may execute correctly once despite previous executions ending in error. In some instances, feature sets may be defined over variable time intervals such as a first feature set that may include one or more process specifications over a first time interval and a second feature set that may include process specifications over a second time interval. Feature sets may be defined from processes previously executed by computing device104, from processes executed by other computing devices152, or from a combination thereof. In some instances, feature sets may be generated from manufactured process specifications. Manufactured process specifications may be procedurally generated to include particular data, random data, or a combination thereof. Manufactured process specifications may be generated automatically or by an operator rather than from a process that was executed by processor108. The machine-learning models may be trained using feature sets from process specifications128, manufactured thread specifications, process specifications received from computing devices152, or the like. Machine-learning models132may be trained using supervised or unsupervised learning.
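As a rough sketch of how feature sets might be assembled from process specifications, the two groupings mentioned above (per process, or per time interval) could be expressed as below. The sketch assumes each record is a (timestamp, specification) pair and that specifications carry a process identifier; both the helper names and that shape are illustrative assumptions.

# Hypothetical sketch: build feature sets from process specifications.
from collections import defaultdict

def feature_sets_by_interval(specs, interval_seconds):
    """Bucket (timestamp, spec) pairs into consecutive time windows; each bucket is one feature set."""
    buckets = defaultdict(list)
    for timestamp, spec in specs:
        buckets[int(timestamp // interval_seconds)].append(spec)
    return [buckets[key] for key in sorted(buckets)]

def feature_sets_by_process(specs):
    """Alternative grouping: one feature set per process, containing each of its specifications."""
    by_process = defaultdict(list)
    for _, spec in specs:
        by_process[spec.process_id].append(spec)
    return list(by_process.values())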
In supervised learning, the feature sets can include labeled data that indicates an expected output such as an ideal process, an ideal thread of a process, a state of a thread, properties of the process or thread, an error or fault, resources consumed by the process or thread, or the like. For example, the feature set may be labeled with a particular error. The machine-learning model may use the feature set, as input, and the labels, as expected output, to define one or more functions that will identify a process or thread that may cause a similar error. The accuracy of the one or more functions, and the machine-learning model, may depend on the number of feature sets used to train the machine-learning model. Examples of algorithms that can be used for supervised learning include, but are not limited to, regression such as random forest, linear and non-linear; Bayesian statistics; neural networks; decision trees; Gaussian process regression; nearest neighbor; long short-term memory; deep learning algorithms; combinations thereof; and the like. In unsupervised learning, the feature sets may not be labeled such that the machine-learning model may not have access to the expected values of the one or more additional properties associated with a given input feature set. Since the expected values are unknown, the machine-learning model may use different algorithms from those used during supervised learning. Unsupervised learning may focus on identifying correlations between (1) two or more thread specifications of a feature set, (2) two or more processes of a feature set, (3) two or more threads of a feature set, or (4) two or more feature sets. The machine-learning model may indicate that certain properties of a process specification are a better indicator of predicting an error or identifying a root cause of an error than other properties. For instance, the machine-learning model may identify a correlation between particular threads of a process and an error detected upon executing the process, which may indicate that the order of the particular threads is the cause of the error. In some instances, correlated properties may be weighted higher than other properties to further improve the identification of particular characteristics of thread specifications. Examples of unsupervised learning algorithms for machine-learning models include, but are not limited to, clustering, neural networks, outlier detection, combinations thereof, or the like. The machine-learning models may be trained over a predetermined interval of time that can be based on the size of the feature sets (e.g., the quantity of process specifications in each feature set) and the number of feature sets used for training. In some instances, training may continue until a predetermined threshold is met. For instance, training may continue until a predetermined number of feature sets are processed by the machine-learning model132. In another example, training may continue until the machine-learning model132reaches a predetermined accuracy threshold. Accuracy may be determined by passing labeled feature sets into the machine-learning model and matching the output to the label. In other instances, accuracy may be determined based on user analysis of the training process, the output of the machine-learning models on contemporaneously collected process specifications, or the rate at which the machine-learning model generates an output from a given input.
In some instances, the machine-learning models may be continuously trained, first using the training feature sets and then using contemporaneously obtained process specifications from process specifications128to further improve the accuracy of machine-learning models132. An accuracy value associated with machine-learning models132may be used to trigger re-training or provisioning new machine-learning models. If the accuracy value falls below a first threshold value, then the re-training or provisioning may be triggered. In the instance of re-training, machine-learning models132may continue to analyze process specifications, but the output may include an indication that re-training has occurred to warn an operator that the output may not be up to the threshold level of accuracy. In the instance of provisioning, the machine-learning model may be replaced with a new machine-learning model. The new machine-learning model may be trained in the same manner as described above. In some instances, the output of machine-learning models132may be compared to a second and lower accuracy threshold, such that if accuracy falls below the first threshold but is above the second threshold, retraining may occur. If the accuracy falls below both the first threshold and the second threshold, then a new machine-learning model may be provisioned. The new machine-learning model may be trained in the same manner as described above. Computing device104may include one or more input/output devices140, such as a keyboard, mouse, other human-machine interface devices, or the like, to accept input from one or more users of computing device104. Computing device104may include one or more display devices136that can provide a graphical user interface for the one or more users to interact with applications124, to provide or review analysis of process specifications, modification of processes and threads, or the like. Computing device104may include network interface144that provides wired and/or wireless communications with other devices of network148. The network interface may enable computing device104and computing devices152to operate a distributed environment for one or more applications of applications124. For instance, an application of applications124may be a distributed application that executes on computing device104and on computing devices152. Client devices156may transmit commands to the application via computing device104or computing devices152through network148to coordinate the operation of the application. In this instance, computing device104and computing devices152may provide all of the resources needed to execute the application, and client device156may enable a remote user to access the application as if the application was executing locally. This may enable faster execution of the application as the application can pool the resources of each device of the distributed environment. In addition, the user of client device156does not need to install the application locally to access the full functionality of the application. Servers160may store one or more applications that can be provisioned onto the computing device104and computing devices152. For instance, client devices156may request access to a distributed application that is not currently running.
Servers160may provision one or more computing devices, or as many computing devices as needed to provide the functionality or efficiency requested by the client devices156, by remotely installing the application onto the one or more computing devices and establishing the distributed environment. Servers160may store historical process specifications, error logs, event logs, and the like. Servers160may store the historical process specifications, error logs, event logs, and the like remotely in one or more databases such as threads database168and process specifications database164. Threads database168may store threads that have previously executed as well as thread definitions. Process specification database164may store the process specifications from previous executions of applications. In some instances, computing device104may store thread definitions and process specifications within threads database168and process specifications database164for later access. Central storage may enable process specifications generated by computing device104to be accessed by other computing devices152executing the same distributed application. This can further improve analysis of thread specifications across the entire distributed environment rather than at a single computing device. FIG.2is a block diagram of a semantic tagging system framework according to at least one aspect of the disclosure. In some instances, the semantic tagging system framework includes a processing layer that is between the application processes and system processes to enable capturing state information of the application processes during and after execution by the processor. The added processing layer enables capturing process specifications for any application executed by a computing device, and the granularity of detail captured to generate the process specification may be configurable by a user operating user interface204. In other instances, the application may be modified to include the added processing layer. During configuration time, user interface204may enable user input to modify the processing layer or a build of the application. One or more hooks may be added to the processing layer or build to trigger data acquisition upon detecting particular events such as thread execution. Prebuild208may include a portion of the processing layer or application that is preconfigured (e.g., without the hooks). User interface204may enable a user to add the configuration to the build and execute a SavePostChange212command to post the change to the build and save it. In some instances, SavePostChange212may require the software build to be recompiled prior to execution. In other instances, the added content may be stored separately from the build and compiled or interpreted at runtime (e.g., using a just-in-time compiler or the like). The modifications added by user input may be used to configure a semantic mapping application programming interface (API)216that provides an interface between the component processing of the application and the processing layer that is specific to the application class. Semantic mapping API216includes one or more functions that enable access to the application class of the application. The application class of the application includes the objects, thread definitions, and attributes of the application. The semantic mapping API enables access to the data of the application class during runtime such that threads of a process of the application that are executed by the processor can be monitored.
For instance, the semantic mapping API216may monitor an application and trigger a flag, a registry entry, or the like. Once semantic mapping API216is configured to monitor particular aspects of the applications, the application may be executed. During execution, the semantic tagging220may use the semantic mapping API216to access the details of the application class. For instance, the semantic mapping API216may trigger a flag when a particular thread executes. Semantic tagging220may access semantic mapping API216in response to detecting the flag to gather details about the state of the thread. Semantic tagging220may generate a thread log224indicating that the thread executed, including the thread definition for the thread. In addition, semantic tagging may generate a process specification log228. For example, in response to detecting the flag, the semantic tagging220may use the semantic mapping API216to identify a process (e.g., component) of the application within which the particular thread (e.g., transaction) executed and map the process to an object that called the particular thread. The semantic tagging may further identify one or more thread definitions of the object, identify the particular thread definition that corresponds to the particular thread, identify the resource types (e.g., attributes) that correspond to the particular thread definition, and identify the values that correspond to each resource type. The data may be packaged into the process specification log228. FIG.3is a block diagram of the semantic tagging system300according to at least one aspect of the disclosure. Automated semantic tagging system300provides for logging process specifications that can be used to modify future processes at runtime. Configuration layer304may include an interface to receive input that configures the processes, threads, and resources that can be monitored during execution of one or more applications. For instance, mapping block308may provide a mapping between two properties of the automated semantic tagging system300, such as object identifiers and process identifiers. Configuration layer304may include mapping block312, which provides a mapping between object identifiers and thread definition identifiers, and mapping block316, which may provide a mapping between thread definition identifiers and attributes of a thread. In some instances, the mapping of each property of mapping blocks308,312, and316may include wildcard operators, timestamps, types, or the like to indirectly map the object identifiers and process identifiers. For instance, mapping block308may indicate that objects of a particular type may be associated with processes of a particular type. Automated semantic tagging system300may not store static associations that may be known prior to runtime. Instead, configuration layer304may provide abstract mappings in which the mappings may be detected at runtime. Configuration layer may include more or fewer mapping blocks than mapping blocks308,312, and316. In some instances, a mapping block may map more than two properties such as, for example, object, process, and thread definition. Mapping blocks may map one-to-one, one-to-many, or many-to-many. Configuration layer304may include a semantic mapping API that may include logic to inspect application class processing320. Application class processing320may expose an interface that provides access to particular types of processes, threads, variables, resources, and the like of an application to configuration layer304.
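The abstract, runtime-resolved mappings of mapping blocks308,312, and316could be pictured as rules over patterns rather than static associations, along the lines of the sketch below. The class name, rule format, and example patterns are assumptions made for illustration only.

# Hypothetical sketch of configuration-layer mapping blocks resolved at runtime via wildcards.
import fnmatch

class MappingBlock:
    """Maps one property to another, e.g., object identifiers to process identifiers."""
    def __init__(self, rules):
        # rules: list of (source_pattern, target_pattern); '*' acts as a wildcard.
        self.rules = rules

    def matches(self, source_value, target_value):
        return any(
            fnmatch.fnmatch(source_value, src) and fnmatch.fnmatch(target_value, dst)
            for src, dst in self.rules
        )

# Block 308: objects of a particular type may be associated with processes of a particular type.
object_to_process = MappingBlock([("resource_object.*", "resource_request.*")])
object_to_definition = MappingBlock([("resource_object.*", "*")])            # block 312
definition_to_attributes = MappingBlock([("allocate_resources", "cpu_*")])   # block 316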
In some instances, each application may include its own instance of application class processing320. In other instances, one instance of application class processing320may operate for one or more applications. Application class processing320may expose all processes, threads, variables, resources, and the like of an application or only a portion thereof. For instance, for an application that provides varied functionality such as signal processing and data modeling, application class processing320may expose the processes, threads, variables, resources, and the like that correspond to the signal processing but not the data modeling. Application class processing320may mark some processes, threads, variables, resources, and the like as exposed or protected through metadata or through code of the underlying application. Application layer inputs324may be an interface between input/output devices and the application executing on a computing device. For instance, the application layer inputs324may direct application layer328to generate a process or execute one or more threads or modify one or more processes or threads. Application layer inputs324may modify the application layer data by tagging semantic processing data via semantic mapping layer332to generate process specifications upon execution of a particular thread. Application layer328may represent the high-level operations of an application executing on the computing devices. Application layer328may include compiled instructions that generate processes that provide the functionality of the application, objects, object definitions, data, and metadata. Object definitions provide a flexible architecture for establishing and maintaining relationships between various kinds of thread definitions for each of the processes and form the foundation for the data layer model. Application layer328may instantiate object definitions, thereby generating an object of the application. Objects can be logical entities around which specific threads may execute. Objects may include multiple thread definitions from which the object can call a thread to execute to provide intermediate functionality of a process. Objects may establish links to processes along with process specific keys. Application layer328generates processes and threads for execution by processors376. During or after execution of each thread, the state of the thread may be captured as a process specification indicating the state of the process at the point in which the thread executed. Execution of thread356by processor376may trigger application class processing320to execute process336to identify the process344that corresponds to thread356and map process344to the object identifier of object340that called the thread. Once mapped, application class processing320may execute process348to identify each thread definition352of object340to identify the particular thread definition that corresponds to thread356. Thread definitions (e.g., activities of an object) may include individual actions specific to the application functionality, which can be executed in series, concurrently, or a combination thereof, without dependency on other threads. Thread definitions may be associated with objects and may be interweaved with respect to time and sequence to provide a process. Threads from thread definitions may execute more than once in a particular process and are not bound to execute in particular sequences.
The quantity of distinct thread definitions of an object may determine a level of granularity of data that may be captured within the process specification. Once the particular thread definition within the list of thread definitions is identified, application class processing320may then execute process360to obtain resource requirements368that indicate a list of resource types (e.g., record fields, attributes, processing cycles, processing time, volatile memory, non-volatile memory, registers, network bandwidth, or the like). For instance, some threads may execute on a local processor using random access memory or cache memory. Other threads may require more substantial resources such as multi-core processors, network bandwidth, volatile and non-volatile memory, etc. Resource requirements368may indicate the resource types needed to execute the application. Application class processing320may then execute process372to obtain values for each resource type and generate a process specification for thread356. The values of each resource type indicate the quantity of resources consumed when thread356executed. In addition to the state of the process at the point in which thread356executed, process specifications may include a trace of the process including the order in which each thread of the process executed before the process terminated. The trace may represent the threads as nodes within a tree structure with process344being the root node of the tree and each node thereafter being a thread that executed. In some instances, process keys may be assigned to each thread and to process344. Process keys may indicate the position within the tree at which the corresponding thread is located. In other instances, the trace may represent the threads as a directed graph where each node of the directed graph represents a thread. Arrows from a node to another may represent a thread executing after another thread or a thread calling another thread. In still yet other instances, the trace may represent the threads as tables that include the resource types, resource values, process keys, and the like for each thread of process344. Once generated, process specifications may be output380to local or remote persistent storage. In some instances, process specifications may be used to modify future processes or currently executing processes. For instance, a particular process for executing a resource request may include threads that generate a resource request, transmit the resource request to a first computing device within the distributed environment, and transmit a new resource request to another computing device. As a result of the first computing device lacking the appropriate resources, additional threads had to be executed. The process may be modified to request a manifest of available resources from available computing devices to reduce the threads of the process. The process may also be modified by modifying the initial resource request to obtain available resources from the first computing device and generate a second resource request for the difference from another computing device. Application layer328may execute the modified process in place of the process the next time the process is initiated. In some instances, the process may be modified at runtime. Application layer328may detect the point at which the current thread is executing and modify a subsequent pointer to point to an address of the modified thread. Processors376may reach the pointer and continue execution using the modified process.
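A minimal sketch of the tree-structured trace described above, with process keys encoding each thread's position under the root process, might look like the following. The class and key format are illustrative assumptions rather than structures defined by the specification.

# Hypothetical trace structure: each executed thread becomes a node in a tree rooted
# at the process, and its process key encodes its position in that tree.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceNode:
    name: str
    process_key: str
    children: List["TraceNode"] = field(default_factory=list)

    def add_child(self, name: str) -> "TraceNode":
        key = f"{self.process_key}.{len(self.children) + 1:02d}"
        child = TraceNode(name, key)
        self.children.append(child)
        return child

root = TraceNode("process 344", "P001")
identify = root.add_child("identify resources")   # P001.01
request = identify.add_child("resource request")  # P001.01.01
request.add_child("resource allocation")          # P001.01.01.01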
FIG.4illustrates a block diagram of various processing views of a variant case during operation of semantic tagging according to at least one aspect of the disclosure.FIG.4andFIG.5depict an example processing operation of a distributed application in which threads execute to request and obtain resources (e.g., articles, processor cycles or bandwidth, network bandwidth, memory, or the like). For instance, a three-dimensional modeling application may require resources from multiple computing devices in order to execute in real-time. Throughout execution, the application may request resources such that small portions of the application may be executed by different computing devices. The output may be transmitted to a primary device, which may then render the three-dimensional model. The application may identify, for each discrete process or thread, the resources necessary to execute the process or thread. In some instances, a portion of the resources may not be accessible to the process (e.g., the requested computing device lacks sufficient memory, processor bandwidth, etc.). If a particular computing device has the resources, the process or thread may be transferred for execution by the computing device and a result of the execution (e.g., the articles, data, graphical user interface, calculation, or the like) may be returned to the requester. If no particular computing device has the available resources, the process may be sub-divided (e.g., based on threads, independent sets of instructions, or the like) into smaller processing units with a lower resource requirement. Sub-dividing may generate entirely new processes/threads or modify the existing process/thread to require fewer resources (e.g., reducing instruction count, reducing memory usage such as variables or registers, reducing loops such that a loop may execute with fewer iterations with the removed iterations executing within another process/thread, or the like) and generate one or more new processes/threads. The application may again initiate a request for resources to the multiple computing devices. In some instances, a process may be sub-divided down to individual instructions such that each instruction of the process may be executed by a different computing system. In some instances, sub-dividing processes or threads may cause an error when the new process or thread cannot be linked to the original process or thread. This may occur when the process or thread was initiated prior to the sub-dividing process. Process specifications, as represented in the semantic tagging view, may provide the association between the root process/thread and the new processes/threads. Maintaining the link to the root process/thread may enable improving the root process as redundancy and cycles may be identified and eliminated. Application process view may provide a representation of the process from the perspective of the application layer. Application layer may initiate a process404that executes a resource identification thread408that identifies the resources required by the application. Resource identification thread408may determine that the requested resources cannot be acquired from any particular computing device. Resource identification thread408may call resource request thread412, which transmits a resource request to a first computing device. The first computing device may transmit an acknowledgement communication back to process404.
Resource allocation thread416may then transmit an allocation command to the first computing device to lock the resources to prevent another device or process from interfering. When the application no longer needs the resources, a new thread may execute a communication to release the locked resources. Since the first computing device cannot provide all of the requested resources, a thread definition may be used to generate resource request thread420. Since resource request thread420was generated after process404initialized, resource request thread420may not be associated with process404. Resource request thread420transmits a resource request to a second computing device. The second computing device may transmit an acknowledgement communication back to process404. Resource allocation thread424may then transmit an allocation command to the second computing device to lock the resources. The timeline view represents the order in which each thread of process404executes. Despite the resource request thread420being generated in parallel to the execution of resource request thread412, the resource request thread412and resource allocation thread416executed and terminated before resource request thread420and resource allocation thread424initiated. The semantic tagging view may provide a representation of the complete process reconstructed from one or more process specifications. The semantic tagging view includes additional data that links the added resource request thread420and resource allocation thread424to the process404. In some instances, linking threads to a process may be based on detecting a command from a thread calling another thread or requesting that a new thread be generated. In other instances, linking threads to a process may use indirect data such as a first thread being associated with a process and a second thread being associated with the first thread. Examples of criteria that may be used to link processes to threads include, but are not limited to, proximity of execution order such as when a first thread initiates execution within a threshold time interval of a second thread initiating execution, thread type such as a resource request, resource types, resource values, types of variables, previously executed thread, subsequently executed thread, combinations thereof, and the like. FIG.5is a block diagram of various processing views of a divergent case during operation of semantic tagging according to at least one aspect of the disclosure. The divergent case may occur when threads diverge, but each thread of the process can be traced back to the process initialization. For instance, process504initializes and calls identify resources thread508. In this case, the identify resources thread508generates two requests for resources, one to a first computing device and a second to a second computing device. Identify resources thread508calls resource request512, which then calls resource allocation thread516to lock the resources of the first computing device. Identify resources thread508also calls resource request520, which then calls resource allocation thread524to lock the resources of the second computing device. The timeline view can represent the order in which the threads of the branching process execute. Since the second resource request/allocation was called along with the first resource request/allocation, the resource request520executed immediately after resource request512. In some instances, the execution order may be reversed with resource request512executing after resource request520.
Resource allocation thread 524 may execute after resource allocation thread 516. In some instances, resource request 512 and resource allocation thread 516 may execute in parallel with resource request 520 and resource allocation thread 524. The semantic tagging may provide a representation of the complete process reconstructed from one or more process specifications. The semantic tagging view includes additional data that indicates how the process 504 diverged into two parallel thread paths. For instance, the process may execute a fork system call to generate the divergent thread path. Rather than executing a single identify resources thread 508, a second identify resources thread 528 may execute, with each identify resources thread executing to identify a smaller set of resources. Under the semantic tagging view, each thread can be traced back to the initiation of process 504, thereby providing a complete recreation of the state of process 504 through the execution of each thread. FIG. 6 is a flowchart of a process for generating process specifications according to at least one aspect of the disclosure. At block 604, a semantic mapping API detects an execution of a particular thread by one or more processors of a computing device. The semantic mapping API may be an interface between an application class that instantiates processes and threads for execution and a configuration layer that indicates what types of processes/threads and data may be captured by the semantic mapping API. In some instances, each thread may include a thread key that acts as a signature of the thread. Thread keys may be unique to the particular thread and may be generated by hashing (e.g., using a cryptographic or checksum-based hashing function) all or a portion of the instructions of the thread. The semantic mapping API may include functions that monitor a scheduler of the one or more processors for the thread key. Threads may also include a process key that indicates the process within which the thread is executing. In some instances, the thread keys may be generated based on the position of the thread within the process, in a manner similar to an address. The thread key can be traced to identify other threads and processes associated with the process. For instance, processes may be represented as a tree with the root process at the root node of the tree. The second layer of the tree may include threads (represented by nodes) called by the root process initialization instructions. The next layer may include threads (represented by nodes) called from the threads of the previous layer, and so on. Thread keys may use a mime-type syntax that uses the calling thread's address to generate the address for the called thread. For instance, given a root process with the address P001, the second layer of threads may be addressed as P001.01, P001.02 . . . P001.n. If the P001.01 thread called two more threads, those threads may be represented as P001.01.01 and P001.01.02, and so on. The address of a particular thread of a process may be used to trace the threads that executed prior to the particular thread. At block 608, a root process of the particular thread may be identified. In some instances, the root process may be identified by tracing the thread key of the particular thread. In other instances, the root process may be identified by tracing memory addresses of the instructions executed by the processor. Tracing may span the entire cache memory, random-access memory, and persistent memory.
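To make the addressing scheme concrete, the following is a minimal Python sketch of hierarchical, mime-type-like thread keys of the kind described above. The class and function names (ThreadKey, thread_signature) and the use of SHA-256 are illustrative assumptions rather than the actual implementation of the semantic mapping API:

```python
import hashlib

class ThreadKey:
    """Hierarchical, mime-type-like thread address such as 'P001.01.02'."""

    def __init__(self, address: str):
        self.address = address

    def child(self, index: int) -> "ThreadKey":
        # A called thread derives its address from the calling thread's address.
        return ThreadKey(f"{self.address}.{index:02d}")

    def ancestors(self) -> list:
        # Threads that executed prior to this thread, ending at the root process.
        parts = self.address.split(".")
        return [".".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

    def root_process(self) -> str:
        return self.address.split(".")[0]


def thread_signature(instructions: bytes) -> str:
    # A thread key may alternatively be a hash of all or part of the thread's instructions.
    return hashlib.sha256(instructions).hexdigest()


root = ThreadKey("P001")              # root process
t1 = root.child(1)                    # 'P001.01'
t12 = t1.child(2)                     # 'P001.01.02'
print(t12.ancestors())                # ['P001.01', 'P001']
print(t12.root_process())             # 'P001'
print(thread_signature(b"mov r0, r1")[:16])
```

Tracing a key back through its ancestors is what allows a thread such as P001.01.02 to be associated with root process P001 even when the thread was spawned by another thread rather than by the process itself.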
At block 612, a process-object link may be generated by linking an object that called the particular thread to the process within which the thread executed. The object that called the particular thread may be identified using the thread key, by a value of a field of the initiating process, or by a value of a field of the particular thread. Objects may include data, metadata, and instructions that execute to provide functions of the application. For instance, for a resource request process, an object may execute one or more threads for detecting resources of computing devices, requesting resources, allocating resources, and the like. Objects may include one or more thread definitions that can be instantiated to execute various functions associated with the object. The object's type may dictate the one or more thread definitions included within the object, such that different object types have different thread definitions. Each thread definition may be independent and may be instantiated by the object into a thread, the thread being an instance of the thread definition. Each thread definition may include instructions to provide the functionality of the thread, resource types necessary to execute the thread, a value for each resource type indicating a quantity of the resource type, an expected input, an output generated as a result of receiving the expected input, metadata, and the like. At block 616, the process-object link may be used to identify a thread list. The thread list includes the one or more thread definitions of the object. One of the one or more thread definitions is the thread definition that was instantiated to generate the instance of the particular thread. The thread definition that corresponds to the particular thread may provide additional data that may indicate how the thread was expected to execute, the resources that were expected to be consumed, and the like. The thread definition may indicate why a particular branch was taken, why the thread induced a cycle or infinite loop, and the like. At block 620, a process specification may be generated for the process based on the particular thread. The process specification may be generated by matching, at block 624, the particular thread to the particular thread definition that corresponds to it. At block 628, the particular thread definition may be used to identify one or more resource types that may be necessary for the thread to execute. The one or more resource types may include resources of the computing device, resources of other computing devices, resources of the network, resources of other networks, combinations thereof, and the like. Examples of resource types can include, but are not limited to, expected input types, attributes, processor cycles, processor bandwidth, cache memory, random-access memory, persistent memory, network bandwidth, combinations thereof, and the like. At block 632, a value corresponding to each resource type may be defined. The value may correspond to a quantity of the resource that may be necessary to execute the particular thread, a quantity of the resource type that was actually consumed by the thread, or a combination thereof.
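The relationship between objects, thread definitions, and instantiated threads can be sketched as simple data structures. The class and field names below are hypothetical and are chosen only to mirror the elements listed above (instructions, resource types and quantities, expected input and output, metadata); they are not the disclosed format:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class ThreadDefinition:
    name: str
    resource_requirements: Dict[str, float]   # resource type -> quantity needed to execute
    expected_input: type
    expected_output: type
    body: Callable[..., Any]                  # instructions providing the thread's functionality
    metadata: Dict[str, Any] = field(default_factory=dict)

@dataclass
class ApplicationObject:
    object_type: str
    thread_definitions: Dict[str, ThreadDefinition]

    def instantiate(self, definition_name: str, **kwargs) -> Dict[str, Any]:
        # A thread is an instance of one of the object's thread definitions.
        definition = self.thread_definitions[definition_name]
        return {"definition": definition, "args": kwargs, "state": "ready"}

resource_request = ThreadDefinition(
    name="resource_request",
    resource_requirements={"processor_cycles": 100, "ram_bytes": 4096},
    expected_input=dict,
    expected_output=dict,
    body=lambda request: {"acknowledged": True, **request},
)
resource_object = ApplicationObject(
    object_type="resource_request",
    thread_definitions={"resource_request": resource_request},
)
thread = resource_object.instantiate("resource_request", request={"device": "first"})
print(thread["definition"].resource_requirements)
```

Keeping the expected resource quantities on the definition is what allows a later comparison against the quantities a thread instance actually consumed, as discussed next.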
For instance, a thread may be expected to consume 100 processing cycles. During a previous execution of the thread, 112 processing cycles were consumed as a result of an unanticipated instruction path executing. The difference between the expected value of the resource and the actual consumed value of the resource may be used to identify the cause of the extra resource consumption (e.g., the unanticipated instruction path) and to modify the thread to reduce the resource consumption. Resource consumption may be used to identify other inefficiencies including, but not limited to, improper memory allocations (e.g., allocating too much or too little memory), inefficient looping (e.g., loops that execute more than necessary to produce an expected output), recursion (e.g., when a thread calls another instance of itself), invalid memory typing, unused variables, and the like. At block 636, the process specification may be generated. The process specification may include some or all of: the thread definition, an identification of the root process, the thread key, the process-object link, a memory dump of the entire process, a list of threads, an identification of the one or more resource types, an identification of the one or more values that correspond to the one or more resource types, metadata, and the like. The process specification may be used to reproduce the state of the process at the point at which the particular thread executed. Multiple process specifications may correspond to the same process, with each process specification representing the state of the process at the point of execution of a different thread. In some instances, a process may execute more than once (e.g., multiple executions of the same function within the application). Each time the process executes, it may be associated with a different set of process specifications. This may enable tracing each individual execution of the process. For instance, some processes may include errors, such as memory leaks, that may only be apparent during some executions of the process. Process specifications corresponding to each execution of the process may be used to isolate the root cause of the error by maintaining the state of both the processes in which the error occurred and the processes in which the error did not occur. Process specifications may be used for error detection, root cause analysis, error correction, and increasing efficiency (e.g., reducing processing time or resources). For instance, process specifications may be used to trace the root process to identify the threads that executed before the particular thread (e.g., using the thread key) and the threads that executed after (using memory tracing or the like). The process specification may be used to generate a directed graph in which each node of the graph represents a thread that executed in the process based on the trace. Each node may include a pointer to the node of the thread that executed after it. Other data structures may be used in addition to or in place of a directed graph, such as a tree, a table, a linked list, or the like. The directed graph may not be acyclic, meaning one or more cycles may exist in the graph. A cycle may represent a redundant thread path in which a thread, instead of pointing to a subsequent thread, pointed to a previously executed thread, causing the previously executed thread to execute again. In some instances, a cycle may represent wasted resources in which some thread may execute more than necessary. In other instances, the cycle may represent an infinite loop in which the processor may stall, executing the same set of threads over and over and preventing other functions from executing on the processor.
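One way to realize the directed-graph representation described above is an adjacency map keyed by thread identifier, with a standard depth-first search used to find a cycle. The sketch below is illustrative only; the thread addresses in the example and the cycle-detection routine are assumptions, not material taken from a process specification:

```python
# Each node is a thread; each edge points to the thread that executed next.
def find_cycle(edges):
    """Return a list of thread ids forming a cycle, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in edges}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in edges.get(node, []):
            if color.get(nxt, WHITE) == GRAY:          # back edge -> cycle found
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = visit(nxt)
                if found:
                    return found
        color[node] = BLACK
        stack.pop()
        return None

    for node in list(edges):
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

edges = {
    "P001": ["P001.01"],
    "P001.01": ["P001.02"],
    "P001.02": ["P001.01"],   # redundant path: points back to a previously executed thread
}
print(find_cycle(edges))       # ['P001.01', 'P001.02', 'P001.01']
```

A reported cycle such as the one above is the kind of redundant or looping thread path that the following discussion removes in order to obtain a directed acyclic graph.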
The directed graph may be converted into a directed acyclic graph by removing the cycle. Cycles may be removed by modifying the instructions of the threads that are part of the cycle. For instance, the threads may be modified to point to new threads rather than previously executed threads, conditional branching may be removed, or threads may be tested to identify the cause of the redundancy (e.g., what input or processing is causing the redundancy). This may lead to a modification of the instructions or instruction order to eliminate the redundancy. In some instances, threads that request or otherwise access resources of other computing devices may be modified to request or otherwise access the resources of different computing devices. For instance, some computing devices may, at runtime, lack requested resources. A first thread that requests those resources may stall or terminate as a result. The first thread may then call a previously executed thread to force the process to call the first thread again in an attempt to re-try the resource request. Since the computing device lacks the requested resources, this loop may continue until the computing device has the available resources (if ever). Process specifications may be used to modify the thread to request resources from a different computing device or from multiple different computing devices, which may thereby eliminate the cycle of the directed graph, converting the graph to a directed acyclic graph. A process may include a set number of threads that may increase or decrease at runtime. For instance, given the resource request example above, the first thread may request resources that cannot be satisfied by the requested computing device. The first thread may be modified to reduce the amount of resources requested and spawn a second thread to request the difference. Since the first thread spawned the second thread rather than the process, the second thread may not be associated with the same process. The process specification may be used to modify the second thread to link the second thread with the root process. The resource request of the second thread may be associated with the process requesting the resources, which may ensure the process is able to allocate the appropriate resources once the resource request threads (e.g., the first and the second) terminate. In some instances, associating the second thread may necessitate splitting the process specification into two process specifications: the first process specification corresponding to the root thread, the first thread, and each thread that was called from the first thread (e.g., excluding the second thread, which was spawned as a parallel execution flow), and the second process specification corresponding to the root thread, the first thread, and each thread that was called from the second thread. The first thread and the second thread may be identified using the thread key associated with each thread and a timestamp of the initiation of execution of each thread. Since the second thread spawned from the first thread, the timestamps can indicate that the threads initiated execution at approximately the same time. The thread keys may then indicate that the second thread spawned from the first thread. Process specifications may be defined from other process specifications as well as from executed threads.
For instance, a root process or a parent thread may generate multiple threads that execute concurrently (e.g., synchronously or asynchronously) on one or more processors of one or more computing devices (e.g., within a distributed environment). A process specification of the parent process or thread may be used to generate a process specification of each concurrent execution path. A first process specification may include (1) the root process or parent thread that initiated the concurrent execution flow by calling a first thread, second thread . . . and nth thread, (2) the first thread, and (3) each thread that was called by the first thread and the threads that were called by those threads and so on until that concurrent execution path terminates (e.g., there are no more threads). A second process specification may include the root process or thread, the second thread, and each thread that was called by the second thread and the threads that were called by those threads and so on. A process specification may be generated for each concurrent thread path to individually trace the parallel execution flow of the process. Specific details are given in the above description to provide a thorough understanding of the embodiments. However, it is understood that the embodiments may be practiced without these specific details. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Implementation of the techniques, blocks, steps and means described above may be done in various ways. For example, these techniques, blocks, steps and means may be implemented in hardware, software, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described above, and/or a combination thereof. Also, it is noted that the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a swim diagram, a data flow diagram, a structure diagram, or a block diagram. Although a depiction may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function. Furthermore, embodiments may be implemented by hardware, software, scripting languages, firmware, middleware, microcode, hardware description languages, and/or any combination thereof. When implemented in software, firmware, middleware, scripting language, and/or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium such as a storage medium.
A code segment or machine-executable instruction may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a script, a class, or any combination of instructions, data structures, and/or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, and/or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory. Memory may be implemented within the processor or external to the processor. As used herein the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other storage medium and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored. Moreover, as disclosed herein, the term "storage medium" may represent one or more memories for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information. The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, and/or various other storage mediums capable of storing, containing, or carrying instruction(s) and/or data. While the principles of the disclosure have been described above in connection with specific apparatuses and methods, it is to be clearly understood that this description is made only by way of example and not as limitation on the scope of the disclosure.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.

DETAILED DESCRIPTION

FIGS. 1 through 11, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. For the purpose of explaining the present disclosure, reference will now be made to the examples illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended; such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the disclosure relates. It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the present disclosure and are not intended to be restrictive thereof. Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises . . . a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
FIG. 1 illustrates an example of a method for managing memory for applications in a computing system according to various embodiments of this disclosure. The method comprises receiving (step 102) a selection of a preferred application with respect to a user. In certain embodiments, receiving the selection of the preferred application comprises receiving a selection made by the user in a user interface. In some embodiments, the selection may be a machine learning (ML) model driven selection based on the model's learning of the user's application usage behaviour. Referring to the non-limiting example of FIG. 1, a method according to some embodiments further comprises monitoring (step 104) at least a transition of the selected application between foreground and background during a user operation. The monitoring of the application, upon its having been selected as the preferred application, comprises monitoring one or more of memory consumption or the types of content allocated across memory regions of the computing system. The method further comprises triggering (step 106) retention of the application in memory, wherein the triggering is based on, at a minimum, a transition of the preferred application to the background in response to the user operation. According to some embodiments, the retention comprises compressing memory portions of said application and thereby freeing memory occupied by said application in the foreground during the user operation. The compression may be defined as compressing memory portions of the application into a dedicated data store using a compression method based on characteristics of the data in memory, and maintaining one or more computing processes supporting the selected application as active to retain the application in the compressed state. Thereafter, the application is retained based on said compressed memory portions. The storage of the compressed portion of the application comprises storing at least portions of the application in either a primary memory or a secondary memory based on one or more of primary memory and secondary storage utilization, performance, load, and I/O operations associated with the device. In certain embodiments, at step 106, the triggering of retention of the application comprises initiating compression based on a timeout upon the application transitioning into the background, said timeout being a static timeout or a decaying timeout dynamically decided by a machine learning model based on the application usage pattern. Referring to the illustrative example of FIG. 1, at step 108, a requirement of restoring the retained application is sensed based on either a user selection or an automatically generated prediction. The sensing comprises one or more of automatically predicting a likelihood of launch of the retained application, and initiating decompression of the retained application at a time prior to the predicted launch time, thereby minimizing the likelihood of a computing overload during launch of the retained application. According to certain embodiments, automatically generating the prediction as to when restoring a retained application is to occur comprises predicting the likelihood of a launch of the retained application by the machine learning model based on the user behaviour. According to various embodiments, at step 110, the application is restored from the retained state back to the foreground based on sensing that restoration of the application is required. In some embodiments, the restoration comprises decompressing the compressed portions of the application in the memory to enable occupation of additional space in the memory by the application. Further, the method comprises allowing the user operation again over the application.
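A highly simplified sketch of this select/monitor/compress/restore lifecycle is shown below. The class name, the use of zlib, and the decaying-timeout arithmetic are assumptions made purely for illustration; they are not the disclosed implementation:

```python
import time
import zlib

class RetentionManager:
    """Toy model of steps 102-110: select, compress on background, restore on demand."""

    def __init__(self, timeout_seconds: float = 30.0, decay: float = 0.5):
        self.preferred = {}           # app id -> uncompressed memory snapshot (bytes)
        self.retained = {}            # app id -> compressed snapshot
        self.timeout = timeout_seconds
        self.decay = decay            # each background transition shortens the wait

    def select(self, app_id: str, memory_snapshot: bytes) -> None:
        self.preferred[app_id] = memory_snapshot            # step 102

    def on_background(self, app_id: str) -> None:
        # steps 104/106: wait a (decaying) timeout, then compress and free memory.
        time.sleep(self.timeout)
        self.retained[app_id] = zlib.compress(self.preferred.pop(app_id))
        self.timeout *= self.decay

    def on_predicted_launch(self, app_id: str) -> bytes:
        # steps 108/110: decompress ahead of the predicted launch and restore.
        return zlib.decompress(self.retained.pop(app_id))

manager = RetentionManager(timeout_seconds=0.01)
manager.select("game-1", b"\x00" * 4096)
manager.on_background("game-1")
restored = manager.on_predicted_launch("game-1")
assert restored == b"\x00" * 4096
```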
While not shown in the example of FIG. 1, a system according to some embodiments may receive a command or other input for deselection of a preferred application as a preferred application. In certain embodiments, a deselected application is not retained in a compressed state in response to certain user inputs, such as minimizing the application or moving it to the background. In this way, deselection can free up memory and related resources otherwise employed for retaining a preferred application. While the example of FIG. 1 has been described with reference to gaming applications, embodiments according to the present disclosure are not limited thereto. FIG. 2a illustrates an example of a method for retaining a user-preferred, resource-intensive application (also known as a "heavy" application) in a smartphone according to certain embodiments of this disclosure. Referring to the illustrative example of FIG. 2a, at step 202, a first agent executing on a smartphone allows selection of an application preferred by the user. This selected app is registered to a proposed module, which is a Heavy-app Memory Management unit (HAMM). The first agent may be a game, or a state-of-the-art gaming aggregation system such as a game booster or game launcher. In addition, the first agent may be an AI agent that automatically generates a recommendation for selection of an application as the preferred application. According to various embodiments, the actions of the first agent at step 202 correspond to step 102. The first agent which performs selection of the application may either rely on a selection made by the user in a UI, or be a learning agent which may select an application automatically based on its learning of the user's application usage behaviour. At step 204, after an application is registered with HAMM, the HAMM starts monitoring transitions of the selected application, including, without limitation, switching between running the application in the foreground and running the application in the background. Additionally, characteristics of the selected application, such as memory consumption and the type of content across its allocated memory regions (allocation, file, I/O, etc.), are monitored. At step 206, when the registered app is sent to the background by user interaction, a second agent is activated. This agent is responsible for implementing decision logic governing retention of the app. According to various embodiments, the operations undertaken at steps 204 and 206 correspond to step 104 of FIG. 1. In certain embodiments, the second agent may initiate compression using a timeout after initiation of the background switch event of the app. This timeout may be static or based on a decaying value depending on the selected application's usage pattern. At step 208, on switching to the background, the second agent informs the proposed module to initiate compression of memory regions of the selected application. As shown in the illustrative example of FIG. 2a, at step 210, the proposed module compresses the memory of the application, which is later stored in a dedicated data store, and the original memory earlier occupied by the application is freed, thereby making more memory available for the entire system. Meanwhile, the process backing the selected application is kept alive. Furthermore, the choice of compression method to be used is identified based on the characteristics of the data in memory for the app.
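As an illustration of choosing a compression method from the characteristics of the data in memory, a hypothetical selector might look at how sparse or how large a memory region is before picking an algorithm. The heuristics and codec choices below are invented for the sketch; the disclosure does not prescribe specific algorithms:

```python
import bz2
import lzma
import zlib

def choose_compressor(page: bytes):
    """Pick a (name, compress function) pair from simple, assumed data characteristics."""
    sample = page[:4096]
    zero_ratio = sample.count(0) / max(len(sample), 1)
    if zero_ratio > 0.9:
        return "zlib", zlib.compress          # mostly empty pages: favor speed
    if len(page) > 1 << 20:
        return "lzma", lzma.compress          # large regions: favor compression ratio
    return "bz2", bz2.compress                # middle ground (illustrative choice)

name, compress = choose_compressor(b"\x00" * 8192)
print(name, len(compress(b"\x00" * 8192)))
```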
In certain embodiments, steps 208 and 210 correspond to step 106 of FIG. 1. The data store used by the proposed module for storing a compressed application may be provided through at least one of primary memory or secondary memory. The decision to select the primary or secondary memory as the data store for storing the compressed application may be based on one or more of primary memory and secondary storage utilization, performance, and the load of I/O operations on the memory or storage available in the device. Referring to the illustrative example of FIG. 2a, at step 212, a third agent informs the HAMM to perform decompression of the already compressed data to its original form. This third agent identifies when the selected app is likely to be launched, hence achieving a proactive decompression ahead of launch and thereby reducing the performance overhead associated with decompressing the application that would otherwise postpone launch of the application. The third agent which initiates the trigger of this so-called proactive decompression may be a UI agent that, when opened, sends the event trigger, or it may be an AI agent which may predict the likelihood of immediate launch of the retained, compressed application. At step 214, the cycle of compression and decompression is managed together by the second and third agents, until the application is selected to be deregistered from the proposed module by a fourth agent. In the example of FIG. 2a, steps 212 and 214 may correspond to steps 108 and 110 of FIG. 1. According to various embodiments, at step 216, once the fourth agent selects and informs the proposed module that a managed application is to be deregistered, the proposed module will free up the compressed memory and related resources used to retain the app. Afterwards, the app may be removed depending on the system state and the other applications being used by the user in due time. The fourth agent may make use of a static or decaying timeout to decide on deregistration of the application from the proposed module. In some embodiments, the fourth agent may make use of an AI agent which predicts the relevance of an application to the user based on usage behaviour. As one non-limiting example, a determination that an application is to be deregistered is performed when the application's relevance drops below a certain threshold. In certain embodiments, the first and fourth agents may consider low memory indicators in a device as factors in the control logic for compression and deregistration. FIG. 2b illustrates an example of a computing architecture for implementing methods according to this disclosure (for example, for implementing steps 202-216 of FIG. 2a). According to various embodiments, HAMM 300 may be configured to execute steps 204, 208 and 212. Similarly, in some embodiments, Agent 1, Agent 2, Agent 3 and Agent 4 are respectively configured to execute the steps 202, 206, 210 and 212 described with reference to FIG. 2a. According to various embodiments, the HAMM 300 for executing steps 204, 208 and 212 comprises a controller 302 and a memory manager 304. In addition, the memory manager 304 comprises a compression-decompression block 306 and a data store 308 (memory based and storage based).
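The component breakdown of FIG. 2b can be sketched structurally as follows. Only the component names (HAMM 300, controller 302, memory manager 304, compression-decompression block 306, data store 308) come from the figure; the methods, their signatures, and the use of zlib are assumptions for illustration:

```python
import zlib

class CompressionDecompressionBlock:          # block 306
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

    def decompress(self, data: bytes) -> bytes:
        return zlib.decompress(data)

class DataStore:                              # block 308 (memory- or storage-backed)
    def __init__(self):
        self._pages = {}

    def put(self, app_id: str, blob: bytes) -> None:
        self._pages[app_id] = blob

    def pop(self, app_id: str) -> bytes:
        return self._pages.pop(app_id)

class MemoryManager:                          # block 304
    def __init__(self):
        self.codec = CompressionDecompressionBlock()
        self.store = DataStore()

class Controller:                             # block 302
    def __init__(self, manager: MemoryManager):
        self.manager = manager
        self.registered = set()

    def register(self, app_id: str) -> None:            # steps 202/204
        self.registered.add(app_id)

    def retain(self, app_id: str, memory: bytes) -> None:   # steps 208/210
        self.manager.store.put(app_id, self.manager.codec.compress(memory))

    def restore(self, app_id: str) -> bytes:             # step 212
        return self.manager.codec.decompress(self.manager.store.pop(app_id))

hamm = Controller(MemoryManager())            # HAMM 300
hamm.register("game-1")
hamm.retain("game-1", b"game state" * 100)
print(len(hamm.restore("game-1")))
```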
FIG. 3 illustrates an example of an on-device Machine Learning (ML) system 400 hosting a plurality of system-learning models for rendering data management for ML models according to certain embodiments of this disclosure. According to some embodiments, ML system 400 utilizes an on-device iterative machine learning technique such as Multi-Stochastic Gradient Descent Regression. In some embodiments, ML system 400 may be used to predict and identify games or other applications that are relevant, and thus likely to be launched by a given user at a given time. According to some embodiments, the predictions as to which applications will be of immediate relevance to the user are pushed to the user as notifications, so that she may provide user inputs to select the identified applications for retention in memory. Furthermore, knowledge of the relevance of a game (or other application) enables a game controller to identify whether to retain the game or not via compression, or to recalculate the relevance of the game (or other application) and release memory resources which would otherwise be committed to the game's retention. FIG. 4 illustrates an example of learning a user's behaviour through the ML system 400 to predict which applications will be utilized according to various embodiments. More specifically, the user and system context data 404 is derived for training the ML system 400. Referring to the non-limiting example of FIG. 4, user and system context data 404 comprises battery level, Wi-Fi state, airplane mode status, data pack, activity status, charging status, recently used applications, and screen orientation. According to various embodiments, user and system context data 404 further comprises time features, which can include day, month, week, and time of day. Yet another input may be the identifiers allocated to the application. In addition, the user and system context data 404 may include Game Launch 404-1, Contextual & System State 404-2, Pre-Process 404-3, Time Features 404-4, and Model Input 404-5. The ML system 400 may be based on an iterative machine-learning principle which may be, for example, a combination of a multi-stochastic gradient descent regression and a gradient descent optimizer. In addition, a game relevance model 402 may provide output to HAMM 300 based on input features (relevance model 406), the user and system context data 404, a learned model 407, and the multi-stochastic gradient descent regression and gradient descent optimizer 408. In certain embodiments, the game (or other type of application) relevance model 402 forming a part of the ML system 400 comprises one or more of Agent 1, Agent 2 and Agent 3 and may categorize the application under the following headings:
a) Relevant game for the user for selection (entry), which may be sub-classified as High Chance of Entry or Low Chance of Entry.
b) Eviction (High, Medium, Low), which may be sub-classified as:
EntryHigh: High chance of entry
ExitHigh: Low usage, high probability to remove
ExitMedium: Medium usage, medium probability to remove
ExitLow: Low usage, the user may use the game once or twice in a week
FIG. 5 illustrates an example of retraining of the ML system 400 with respect to user behaviour and modelling user behaviour, according to various embodiments of this disclosure. As mentioned before, the relevance model 402 applies iterative machine learning criteria such as Multi-Stochastic Gradient Descent Regression criteria. As a part of retraining, the ML system 400 personalizes the game relevance model 402 based on a user profile. According to certain embodiments, training data is fed as a feature map with respect to the application launches performed by the user under various contexts.
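A toy stand-in for such an iterative relevance model is sketched below: a linear model updated by plain stochastic gradient descent over context features, with the resulting score mapped onto relevance groups like those listed above. The feature encoding, learning rate, and thresholds are assumptions for illustration and do not reproduce the Multi-Stochastic Gradient Descent Regression model itself:

```python
import numpy as np

class RelevanceModel:
    """Toy iterative regression over context features (not the disclosed model)."""

    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, x: np.ndarray, launched: float) -> None:
        # One stochastic gradient-descent step on squared error, applied
        # iteratively as new launch observations arrive on-device.
        err = (self.w @ x + self.b) - launched
        self.w -= self.lr * err * x
        self.b -= self.lr * err

    def relevance_group(self, x: np.ndarray) -> str:
        score = self.w @ x + self.b
        if score > 0.7:
            return "EntryHigh"      # high chance of launch: retain / decompress proactively
        if score > 0.3:
            return "ExitMedium"     # medium usage
        return "ExitHigh"           # low usage: candidate for eviction

# Features might encode battery level, Wi-Fi state, hour of day, recent usage, etc.
model = RelevanceModel(n_features=4)
evening_gaming_context = np.array([0.9, 1.0, 0.5, 1.0])
morning_commute_context = np.array([0.1, 0.0, 0.9, 0.0])
for _ in range(500):
    model.partial_fit(evening_gaming_context, launched=1.0)
    model.partial_fit(morning_commute_context, launched=0.0)
print(model.relevance_group(evening_gaming_context))
```

After repeated updates on a context in which the game is consistently launched, the model tends to place that context in a high-relevance group, which is the signal the retention agents act on.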
Such training data, comprising the user-based data and the contextual data, has been referred to in FIG. 4 as a part of the input data. In addition, as a part of retraining, this training data also includes historical data, such as past predicted outputs of the relevance model 402 and historical launches of the application as performed by the user. In addition, the relevance model 402 may produce the predicted output based on a Game Launch feature DB 501 and a Saved Model 502. FIG. 6 illustrates aspects of the control logic implemented by a game relevance model (for example, the trained and retrained game relevance model 402 in FIG. 5) according to various embodiments of this disclosure. Referring to the illustrative example of FIG. 6, in this example Agent 2 is responsible for selecting a game for retention at step 206; the relevance model 402 predicts relevance and automatically compresses the game for retention. In the non-limiting example of FIG. 6, Agent 3 triggers proactive decompression with respect to step 210 when the relevance model 402 predicts a high chance of a specific game (or other type of application) being launched within a predetermined time frame (for example, in the next 15 minutes). With respect to Agent 4 and step 212, the relevance model 402 predicts a low chance of the game being launched for the next "x" hours and accordingly evicts the game from being retained. FIG. 7 illustrates an example of a decision flow from a game relevance model (for example, relevance model 402 in FIG. 5) to an agent (for example, Agent 1 described with reference to step 202 in FIG. 2a) at an initial stage of operation of a method according to certain embodiments (for example, step 202 in FIG. 2a). More specifically, in the illustrative example of FIG. 7, user-behaviour learning is used to recommend a game for retention by a user during the initial stage of operation. As depicted in the illustrative examples of FIG. 4 and FIG. 5, in certain embodiments the relevance model 402 learns user behaviour, predicts the user's probable application usage behaviour, and groups games (and potentially other types of applications) into relevance groups. Games in high relevance groups are determined to have a high probability of being launched by a user, and games in low relevance groups are determined to have a lower probability of being launched. Accordingly, at step 701 and step 702, a recommendation from HIGH relevance games is received. At step 704, subsequent to a game from a HIGH relevance list being opened, and a user being prompted for consent on retaining the game, an input corresponding to whether the user consents to enable game retention is received. In certain embodiments, the user's input on whether to enable game retention is also sent as feedback to the relevance model for further personalization. As may be understood, the user may also override the recommendation and manually choose a game of his choice for retention. At step 706, if the user chooses Y in step 704, a request is sent to add the game to the list of games for retention and the selection Y of step 704 is communicated to HAMM. Based on the same analogy, a decompression operation of a game in the Compressed Data Store (CDS) is triggered either when the user launches a game reactively or when an agent decides to do it proactively.
FIG. 8 illustrates an example of a compression operation associated with retaining an application in memory according to various embodiments of this disclosure. Referring to the non-limiting example of step 801, a compression operation may be triggered by Agent 2 immediately upon the application switching to the background, or after a decaying timeout. In some embodiments, a decaying timeout may be dynamically calculated by the ML system of the device. The selection of the game (or other type of application) for such compression may be previously performed by Agent 1, and may be based on a manual selection by a user or an AI-enabled operation through the relevance model 402. In one example, the compression operation may be triggered by a gaming aggregation system such as a game booster or game launcher. In certain embodiments, upon triggering the compression operation at step 801, available memory resources are checked through steps 804, 806 and 810 to determine a need to utilize external storage for the compression operation. According to various embodiments, at step 804, a determination is performed as to whether a secondary storage backed CDS is available. If a secondary storage backed CDS is available, operation proceeds to step 806, where a check of secondary storage performance and memory-backed CDS usage is performed. According to various embodiments, at step 810, a determination is made as to whether the cost of compression to a storage-backed CDS is acceptable. If the cost of the storage-backed CDS is unacceptable, operation proceeds to step 808, where the memory-backed CDS is selected as the compression target. As shown in the illustrative example of FIG. 8, at step 814 the storage-backed CDS is selected as the compression target, and both branches from step 804 reunite at step 812, where a memory region for the compressed application is selected. In certain embodiments, at step 816, the application is compressed and sent to the selected data region. At step 818, memory regions previously holding uncompressed application data are reclaimed, thereby freeing up memory, and the method terminates at step 820. FIG. 9 illustrates an example of a decompression operation according to various embodiments of the present disclosure. Referring to the non-limiting example of FIG. 9, at operation 904 a decompression operation of a game in the Compressed Data Store (CDS) is triggered when a user launches a game in a reactive manner (e.g., a user input for starting the game or a user input in response to a displayed screen). In certain embodiments, the relevance module 402 acting as Agent 2 issues a trigger for HAMM 300 proactively (even in the absence of the user launching the game) and issues an instruction to decompress the game in advance of an expected launch time for the game (or other application), such as 15 minutes before the expected launch of the game. Step 902 refers to triggering the decompression process and to analysis of compressed portions or compressed pages of the application. Step 904 refers to issuance of the decompression command. Step 906 refers to walking through pages in the CDS corresponding to the process ID of the game. Step 908 refers to issuing a decompression instruction to memory. Step 910 refers to removal of the processed pages created due to compression. Step 912 represents completion of the decompression operation.
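The decompression walk of FIG. 9 (steps 902-912) can be sketched as a simple routine over a page store keyed by process ID. The page-store layout, the function name, and the use of zlib are assumptions for illustration only:

```python
import zlib

def decompress_app(cds: dict, process_id: int) -> bytes:
    """Walk the pages in the CDS belonging to a process, decompress them,
    and remove the processed pages that were created during compression."""
    restored = bytearray()
    # step 906: walk through pages in the CDS corresponding to the process ID
    page_keys = sorted(key for key in cds if key[0] == process_id)
    for key in page_keys:
        # step 908: issue a decompression instruction for each page
        restored += zlib.decompress(cds[key])
        # step 910: remove the processed page created due to compression
        del cds[key]
    return bytes(restored)              # step 912: decompression complete

# Hypothetical CDS keyed by (process id, page number).
cds = {(42, i): zlib.compress(bytes([i]) * 4096) for i in range(4)}
memory_image = decompress_app(cds, process_id=42)
print(len(memory_image), len(cds))      # 16384 0
```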
The following table (Table 1) illustrates performance benefits created by proactive decompression of retained applications by relevance models according to various embodiments of this disclosure.

TABLE 1: Warm Launch Timing (ms)

App      Base    Warm launch (no proactive decompression)    Warm launch with proactive decompression    Benefit %
Game 1   207.4   242.2                                        190                                          21.55
Game 2   425     533.64                                       418.8                                        21.51
Game 3   411     575                                          464.8                                        19.17
Game 4   513.2   616                                          514.4                                        16.49

FIGS. 10A and 10B illustrate an example of the application of a game relevance model according to various embodiments of this disclosure to low-RAM slot management. Referring to the illustrative example of FIG. 10A, an Out of Memory (OOM) score for an application is set to foreground visible. The OOM score adjustment is based on, without limitation, game retention timeouts (or retain timeouts, 1001) and relevance (game relevance model 1002). In this example, a trend of memory pressure increase and threshold levels due to compression/retention/eviction of applications may be used for updating the OOM score. In another example, the likelihood of memory increase may be learnt from the relevance model and the OOM score adjusted accordingly. In addition, the Out of Memory Manager 1004 may obtain the OOM score based on the retain timeouts 1001, the game relevance model 1002, and the memory pressure monitor 1003. Referring to the illustrative example of FIG. 10B, an example of multi-endpoint slot management is described. The example shown in FIG. 10B may be specific to devices with multiple endpoints such as Game Launcher, Game Booster and Never Kill App. Accordingly, the Out of Memory (OOM) score management may be based on the relevance model and memory pressure build-up. The OOM score may be updated for processes in packages which are not likely to be used as memory pressure is building. In an example, based on FIG. 10A and FIG. 10B, updating of an OOM score may be contemplated instead of evicting the application from being retained.
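A hypothetical illustration of such OOM-score adjustment, combining relevance, retain timeouts, and memory pressure, is sketched below; the weights and score ranges are assumptions and are not taken from the disclosure (higher scores mark a process as a more likely reclamation candidate):

```python
def adjust_oom_score(base_score: int,
                     relevance: str,
                     memory_pressure: float,       # 0.0 (idle) .. 1.0 (critical)
                     retain_timeout_s: float) -> int:
    """Toy OOM-score update driven by relevance group, pressure, and retention state."""
    score = base_score
    if relevance == "EntryHigh":
        score -= 200                               # protect likely-to-launch apps
    elif relevance == "ExitHigh":
        score += 300                               # deprioritize unlikely apps
    score += int(400 * memory_pressure)            # building pressure raises all scores
    if retain_timeout_s > 0:
        score -= 100                               # still inside its retention window
    return max(score, 0)

print(adjust_oom_score(500, "EntryHigh", memory_pressure=0.2, retain_timeout_s=30))
print(adjust_oom_score(500, "ExitHigh", memory_pressure=0.9, retain_timeout_s=0))
```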
FIG. 11 illustrates an example of a hardware configuration suitable for implementing methods according to various embodiments of this disclosure. The computer system 800 may include a set of instructions that may be executed to cause the computer system 800 to perform any one or more of the methods disclosed. The computer system 800 may operate as a standalone device or may be connected, e.g., using a network, to other computer systems or peripheral devices. In a networked deployment, the computer system 800 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 800 may also be implemented as or incorporated across various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single computer system 800 is illustrated, the term "system" shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions. The computer system 800 may include a processor 802, e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor 802 may be a component in a variety of systems. For example, the processor 802 may be part of a standard personal computer or a workstation. The processor 802 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analysing and processing data. The processor 802 may implement a software program, such as code generated manually (i.e., programmed). The computer system 800 may include a memory 804, such as a memory 804 that may communicate via a bus 808. The memory 804 may include, but is not limited to, computer-readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one example, the memory 804 includes a cache or random access memory for the processor 802. In alternative examples, the memory 804 is separate from the processor 802, such as a cache memory of a processor, the system memory, or other memory. The memory 804 may be an external storage device or database for storing data. The memory 804 is operable to store instructions executable by the processor 802. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor 802 executing the instructions stored in the memory 804. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like. As shown, the computer system 800 may or may not further include a display unit 810, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 810 may act as an interface for the user to see the functioning of the processor 802, or specifically as an interface with the software stored in the memory 804 or in the drive unit 816. Additionally, the computer system 800 may include an input device 812 configured to allow a user to interact with any of the components of system 800. The computer system 800 may also include a disk or optical drive unit 816. The disk drive unit 816 may include a computer-readable medium 822 in which one or more sets of instructions 824, e.g. software, may be embedded. Further, the instructions 824 may embody one or more of the methods or logic as described. In a particular example, the instructions 824 may reside completely, or at least partially, within the memory 804 or within the processor 802 during execution by the computer system 800. Embodiments according to the present disclosure include a computer-readable medium that includes instructions 824, or that receives and executes instructions 824 responsive to a propagated signal, so that a device connected to a network 826 may communicate voice, video, audio, images or any other data over the network 826. Further, the instructions 824 may be transmitted or received over the network 826 via a communication port or interface 820 or using a bus 808.
The communication port or interface 820 may be a part of the processor 802 or may be a separate component. The communication port 820 may be created in software or may be a physical connection in hardware. The communication port 820 may be configured to connect with a network 826, external media, the display 810, or any other components in system 800, or combinations thereof. The connection with the network 826 may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed later. Likewise, the additional connections with other components of the system 800 may be physical connections or may be established wirelessly. The network 826 may alternatively be directly connected to the bus 808. The network 826 may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, 802.1Q or WiMax network. Further, the network 826 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols. The system is not limited to operation with any particular standards and protocols. For example, standards for Internet and other packet-switched network transmission (e.g., TCP/IP, UDP/IP, HTML, and HTTP) may be used. While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown, nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims. Although the present disclosure has been described with various embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
DETAILED DESCRIPTION OF EMBODIMENTS

Understanding Limitations of Wearables

Wearable devices are generally electronic devices which are ubiquitous, pervasive, interactive, and interwoven into the everyday lives of their users. As mentioned above, examples of wearables (short for "wearable devices") include glasses, watches, clothing, accessories, and any electronic device wearable on a living being or carried by a living being. In some cases, the electronic device is at least in part implantable in a living being, e.g., healthcare-related electronic devices. In many cases, these wearable devices can be paired with a larger form-factor "host-companion" device such as a smartphone, laptop, tablet, personal computer, etc. The wearable device has one or more of the following characteristics when compared with the host-companion device: fewer computing resources, fewer communication or network resources, fewer power resources (e.g., not connected to a power socket at all times), and fewer user input methods (e.g., no input methods or limited input methods). In some embodiments, examples of wearable devices can include portable electronic devices having limited functionality when compared with a companion device. Generally speaking, the wearable device does not have to be worn or carried by a living being at all times. Within the context of the disclosure, the wearable device is termed a "tethered-companion" device, while the smartphone is called a "host-companion" or a "companion device". In general, wearable device designs are constrained by the compute and battery power they can have. Their capabilities are designed to match the resources they have within their form-factor limitations, and thus their resources (CPU horsepower, battery, and sensors) are usually constrained. For example, smart watches are designed to act only as tethered-companion devices that pair with a smartphone using a low energy link like Bluetooth Low Energy (LE). These devices generally do not have location and other sensors, or cellular and cellular data connectivity, thus saving battery requirements, etc. Those devices that have full-fledged functionality suffer from poor performance on compute and battery life. However, there are many potential applications that do need a more feature-rich wearable device, such as (examples not intended to be limiting, but merely illustrative):
A safety wearable that can alert in case of trouble and provide context information such as location and ambient data like noise, images, etc., to a companion device.
A wearable device that occasionally (such as when a more capable companion device is away) needs to use an onboard data connection to receive incoming mail notifications, weather, any useful data, etc.
To provide rich features for the user without suffering from poor performance on compute or battery life, the wearable device can cooperate with the companion device while taking measures to conserve the resources of the wearable device intelligently based on context and/or priority. One challenge to cooperation between the wearable device and the companion device includes providing management mechanisms that allow a wearable device to be securely registered and attached to a service to be managed by a cloud service. The secure registration allows the cooperation to operate in a secured environment. Exemplary management of a wearable includes, among other operations, registration, attaching the device to an account/subscription of a service, and configuration.
Another challenge to cooperation is how resources should be conserved based on context and/or priority. Overview: Configuration of Wearable Devices The present disclosure describes systems and methods for securely registering and configuring the wearable device that has no or constrained input methods. For example, a smartwatch has no keyboard (at best a limited keypad) or any other input method to enter registration information like e-mail, etc. One scenario is the wearable is configured (only) via a management console (e.g., a Web application or native application) due to security constraints. The security constraints prevent unauthorized users having possession of the wearable device from changing the configuration of the wearable. Another security aspect is any other person who has knowledge of the protocol and the communication address of the wearable (e.g., phone number) should not be allowed to hijack and configure the wearable device without authorization. FIG.1Ais a block diagram illustrating a high-level architecture of a wearable scenario, according to some embodiments of the disclosure. The wearable device104is provided with a service application106, wherein the service application106is implemented to provide features for the user. For instance, if the wearable is a smart watch, the service application106can provide notifications to the user regarding time, weather, emails, text messages, etc. In some cases, the service application106can implement procedures that enables secure pairing, registration, and/or communication between the wearable device and a companion device (not shown). The service application106can be communicably connected to a cloud service102, wherein the cloud service102can be implemented to offer a service and/or content to the wearable device. For instance, the cloud service102can provide e-mail services, telecommunication services, priority calling services, weather service, notification services, emergency services, etc. The cloud service102can be implemented on a server computer, or a computing cluster (having one or more processors140and storage elements142). The cloud service102is generally located remotely from the wearable device104. The cloud service102can manage configuration of wearable devices, e.g., pairing/registration wearable device104with a companion device. For instance, the cloud service102can maintain user profiles which allows user-based services to be provided to the wearable device and the companion devices. A management console108can be provided (e.g., as a web application displayed to a user via a web browser provided by a computing device, or a native application that is executed by processor150using instructions stored in memory152of the computing device) to allow a user to provide input for enabling (secure) configuration of the wearable device. For instance, the management console can include a user interface for allowing users to select and/or create configurations, and provide any suitable credentials to enable a user to be authenticated and/or authorized. Usually, providing input to such user interface is easier for the user to do than to provide the same input to the wearable device. The management console108can be communicably connected to the cloud service102over a communication network (e.g., the device on which the management console is provided can be remote from the cloud service). 
This disclosure further describes multiple ways to configure the wearable securely depending on the device capabilities and application requirements, e.g., as illustrated byFIGS.2-4. FIG.1Bis a block diagram illustrating a high-level architecture of a wearable device, according to some embodiments of the disclosure. The wearable device104includes one or more processors120(e.g., digital signals processor, etc.), one or more memories132, service application106, communication interface(s)136, (optionally) input part138, (optionally) output part140, and (optionally) one or more sensors142. Various parts of the wearable device104can be communicably connected with each other over a communication bus or a connectivity fabric/network of wires. Broadly speaking, the one or more memories132is operable to store electronic code, and the one or more processors120is operable to execute instructions associated with the electronic code, such that the wearable device is configured to carry out any one or more functions described herein. The communication interface(s)136can include a communication stack that allows the wearable device104to communicate with one or more companion devices (e.g., using a low energy communication channel, such as Near Field Communication (NFC) channels or Bluetooth Low Energy). In some embodiments, the communication stack can allow the wearable device104to communicate with the cloud service102(e.g., via the Internet and/or a cellular network). Depending on the wearable, the input part138may include one or more user input devices such as an imaging device, gesture sensor, light sensor, microphone, buttons, keypad, touch-sensitive display, scroll wheel/ball, etc. The output part140can include one or more user output devices such as an electronic display, haptic output (e.g., vibration, programmable movable surfaces), speaker, etc. The sensor(s)142can include one or more sensors such as a capacitive sensor, light sensor, global positioning system sensor, antenna, magnetic sensor, accelerometer, gyroscope, compass, moisture sensor, humidity sensor, pressure sensors, etc. FIGS.2-4illustrates some examples of how a wearable device can be configured, e.g., paired or registered with a companion device (and a cloud service) in a secure manner. This configuration process can setup the wearable device with information, data, and/or program(s) that the wearable device can use to cooperate with its companion device. The wearable device would generally only complete such configuration process if an authenticated/authorized user configures the wearable device with the companion device through a management console. Once configured, the wearable device may be authorized to only communicate with the companion device with which the wearable device is paired or registered, and would not be authorized to communicate with other devices with which the wearable device is not paired or registered. Advantages of the secure configuration include providing the ability to prevent unauthorized users having possession of the wearable device from changing the configuration of the wearable device, preventing person who has knowledge of the protocol and the communication address of the wearable (e.g., phone number) from hijacking and configuring the wearable device without authorization. Exemplary Configuration: When the Wearable has No Addressable Communication Mechanism In an exemplary scenario, the wearable has no addressable communication mechanism like a phone number through which a command can be sent. 
Instead, the wearable device can include communication stack like Transmission Control Protocol or Internet Protocol (TCP/IP). The wearable device, e.g., using the service application, can transmit a registration request to a service remote from the wearable device (e.g., cloud service). For instance, a user having the wearable device in his/her possession can provide user input to trigger the wearable device to begin the configuration process and thus triggering the wearable device (e.g., the service application) to transmit the registration request. In response to transmitting the registration request, the service application can receive a token generated by the service over a message channel between the wearable device and the service (e.g., a push message channel), wherein the message channel is mapped to the token, and the token has a limited time to live. Mapping the message channel to the token can advantageously ensure the token is not being transmitted to some other wearable device, or ensure that data being transmitted over the message channel requires validation of the token. Upon receiving the token, the output of the wearable device can output the token to a user. After the user provides the token to a management console (separate from the wearable device) in communication with the service (e.g., cloud service), the service application can receive a message from the service over the message channel indicating that the registration request is complete. Additionally, the service application can receive information, data, and/or programs for configuring the wearable device, e.g., to enable secure pairing/registration of the wearable device with a companion device. FIG.2is a sequence diagram (or messaging diagram) illustrating configuration of a wearable device having no addressable communication mechanism, according to some embodiments of the disclosure. The example illustrates processes carried out by a service application106, cloud service102, and management console108. The service application106launches on the wearable device (202). The service application106can register itself with the cloud service102by transmitting a register request to the cloud service102, e.g., via the Internet or an intranet (204). The cloud service102can generate a unique token that is valid for a limited period of time (e.g., having a limited time to live to advantageously prevent someone else reusing the token at a later time without authorization) (206). The service application106of the wearable device and the cloud service102can configure a push message channel between each other (allowing the cloud service102to push data to the service application106) and map the push channel to the unique token (208). The cloud service102can send the token to the service application106of the wearable device over the push message channel (210). The service application106can cause the token to be output to the user, e.g., render the token for display on an electronic display of the wearable device (212). User214can open a management console108and logs into the service account via a wearable configuration page (thus ensuring the user is authenticated and authorized to configure the wearable device) (218). User214, who has learned the token from the wearable device, can enter the token using the wearable configuration page (216), which advantageously offers confirmation that the authenticated/authorized user has the wearable device in his/her possession. 
In response, the management console108can select/create the configuration for the wearable device (or provide information which can enable the configuration to be created for the wearable device at the cloud service102) submits it along with the token (220). The cloud service can validate the token received with the token previously generated by the cloud service, and identify the push channel setup previously (222). The cloud service102can save/create the configuration for the wearable device identified by the token, and push the configuration to the service application106of the wearable device through the push channel set up previously (e.g., the push channel mapped to the same token) (224). In alternative embodiments, the wearable device (e.g., the service application106) can receive a registration request from a service remote from the wearable device if the cloud service102triggers the configuration process (through a suitable broadcast mechanism). In such a scenario, the cloud service102can also generate and transmit the token, e.g., along with the registration request. The push message channel being configured between the cloud service102and the service application106may include a companion device in its path, especially if the wearable device does not have the communication stack capable to communicate with the cloud service102directly, and/or the resources of the wearable device is to be conserved by leveraging the companion device. In such a scenario, the communications between the cloud service102and the service application106can be tunnelled through the companion device. Exemplary Configuration: When the Wearable has a Direct Addressable ID In a variant, the device has a direct addressable communication identifier (ID) such as phone number, Bluetooth Identifier (ID), or Internet Protocol (IP) address, email address, etc. A wearable device, e.g., the service application106, can receive a token generated by a service remote from the wearable device, e.g., the cloud service, in a text or multimedia message, wherein the token has a limited time to live. The text or multimedia message is transmitted using an identifier of the wearable device (e.g., the direct addressable communication ID). After the message is received, an output of the wearable device can output the token to a user. For instance, the token can be displayed using an electronic display. After the user inputs the token at a management console (separate from the wearable device) in communication with the service, the wearable device can receive a message from the service over a message channel between the wearable device and the service. The message channel is mapped to the token. Mapping the message channel to the token can advantageously ensure the token is not being transmitted to some other wearable device, or ensure that data being transmitted over the message channel requires validation of the token. The message can indicate that the registration request is complete. Additionally, the service application can receive information, data, and/or programs for configuring the wearable device, e.g., to enable secure pairing/registration of the wearable device with a companion device. FIG.3is a sequence diagram (or messaging diagram) illustrating configuration of a wearable device having direct addressable ID, according to some embodiments of the disclosure. The example illustrates processes carried out by a service application106, cloud service102, and management console108. 
The cloud service102may learn of the wearable device having the service application106. The service application106launches on the wearable device (302). The service application106can register itself with the cloud service102by transmitting a register request (e.g., having the direct addressable ID) to the cloud service102, e.g., via the Internet, an Intranet, a cellular network, etc. (304). User314can open the management console108and logs into the service account via a wearable configuration page (thus ensuring the user is authenticated and authorized to configure the wearable device) (300). User314can enter the direct addressable ID (e.g., the phone number, or some other direct addressable ID) of the device on the management console108via a wearable configuration page (306). The management console108can submit or transmit the device address to the cloud service102(308). The cloud service102can generate the token (310). The cloud service102can send/transmit the token to the service application106by Short Message Service (SMS) or some other suitable manner (312). Further to transmitting the token, the cloud service102can configure a push message channel between the cloud service102with service application106(allowing the cloud service102to push data to the service application106) and map the push channel to the unique token (316). The service application106can cause the token to be output to the user, e.g., render the token for display on an electronic display of the wearable device (318). User314, who has learned the token from the wearable device, can enter the token using the wearable configuration page (320), which advantageously offers confirmation that the authenticated/authorized user has the wearable device in his/her possession. In response, the management console108can select/create the configuration for the wearable device (or provide information which can enable the configuration to be created for the wearable device at the cloud service102) submits it along with the token (323). In some cases, the management console108can further identify the configuration and token by providing the direct addressable ID for validation purposes. The cloud service can validate the token received with the token previously generated by the cloud service, and identify the push channel setup previously (324). The validation can further include validating the direct addressable ID with the identified push channel. The cloud service102can save/create the configuration for the wearable device identified by the token, and push the configuration to the service application106of the wearable device through the push channel set up previously (e.g., the push channel mapped to the same token) (326). In alternative embodiments, the cloud service102(upon receiving the device address from an authorized/authenticated user), can trigger the configuration process on the wearable device (e.g., the service application106) by transmitting a register request to the service app106using the device address. In some cases, the cloud service102can broadcast the request to the service application106. The cloud service102can also provide the token with the register request at that time, if desired, indicating a configuration process is to be carried out. 
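To make the token handling in FIGS. 2 and 3 concrete, the following is a minimal, illustrative sketch of the cloud-service side of the flow, written in Python. The class and method names (CloudService, push_channel.push, handle_console_submit), the token format, and the time to live are assumptions made for the example; the disclosure itself only requires a unique token with a limited time to live and a message channel mapped to that token. In the FIG. 3 variant, the token would instead be delivered to the wearable's direct addressable ID (e.g., by SMS), but validation of the token entered at the management console and the configuration push would proceed the same way.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300  # assumed value; the disclosure only requires a limited time to live

class CloudService:
    """Cloud-side sketch of the FIG. 2 / FIG. 3 registration flows (hypothetical API)."""

    def __init__(self):
        self.channels_by_token = {}   # token -> (push_channel, expiry)

    def handle_register(self, push_channel):
        # 206/208/210 (FIG. 2): generate a unique, short-lived token, map the push
        # message channel to it, and deliver the token to the wearable over that channel.
        token = secrets.token_urlsafe(8)
        self.channels_by_token[token] = (push_channel, time.time() + TOKEN_TTL_SECONDS)
        push_channel.push({"type": "token", "token": token})
        return token

    def handle_console_submit(self, token, configuration):
        # 220/222/224: validate the token entered by the authenticated user at the
        # management console, find the matching push channel, and push the configuration.
        entry = self.channels_by_token.pop(token, None)
        if entry is None or time.time() > entry[1]:
            raise PermissionError("unknown or expired token")
        push_channel, _ = entry
        push_channel.push({"type": "registration_complete", "configuration": configuration})
```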
The push message channel being configured between the cloud service102and the service application106may include a companion device in its path, especially if the wearable device does not have a communication stack capable of communicating with the cloud service102directly, and/or the resources of the wearable device are to be conserved by leveraging the companion device. In such a scenario, the communications between the cloud service102and the service application106can be tunnelled through the companion device. Exemplary Configuration: Using a Companion Device In another scenario, a companion device is used to configure the wearable device. A companion device like a smartphone can provide the management console through which the device is configured, and the configuration can be pushed through a Bluetooth or Near Field Communication (NFC) channel.FIG.4is a block diagram illustrating using a companion device to configure a wearable device, according to some embodiments of the disclosure. Both the wearable device104and the companion device402(e.g., a smart phone or some other suitable computing device) have a service application. The wearable device has service application106, and the companion device is configured with the service application with management console404. The companion device402can include one or more memories450operable to store electronic code, and one or more processors640operable to execute instructions associated with the electronic code to implement one or more functions of the companion device402described herein. The companion device402may communicate with the cloud service102to obtain wearable device configurations. The companion device402can also push configuration to the wearable device through a low energy communication channel between the wearable device and the companion device (e.g., Bluetooth Low Energy, Near Field Communication (NFC), etc.). In some embodiments, the wearable device104, e.g., the service application106, can transmit a registration request to the companion device402, e.g., the service application with management console404. In response to transmitting the registration request, the service application with management console404can generate a token for the service application106of wearable device104. The service application with management console404can transmit the token via a message channel between the wearable device and the companion device, wherein the message channel is mapped to the token, and the token has a limited time to live. The message channel can be provisioned over the low energy communication channel. An output of the wearable device104can output the token to a user. After the user provides the token to the service application with management console404of the companion device402(e.g., via a user interface470of companion device402), the service application with management console404can determine whether the token provided by the user is valid against the token previously generated for the wearable device104. The service application106of the wearable device104can receive a message from the companion device402(e.g., the service application with management console404) over the message channel indicating that the registration request is complete. Additionally, the service application106can receive information, data, and/or programs for configuring the wearable device, e.g., to enable secure pairing/registration of the wearable device with a companion device. As a security step, the service application with management console404can perform authentication/authorization of the user with the assistance of cloud service102. After the user is authenticated/authorized, the cloud service102can provide wearable device configuration to the service application with management console404. Generally speaking, this scheme can allow a user who has possession of both the wearable device104and the companion device402, and who has been authenticated/authorized by the cloud service102, to fetch suitable wearable device configuration from cloud service102and/or configure the wearable device104to pair the wearable device104with companion device402. In alternative embodiments, the wearable device (e.g., the service application106) can receive a registration request from the service application with management console404if the service application with management console404triggers the configuration process (through a suitable broadcast mechanism). In such a scenario, the service application with management console404can also generate and transmit the token, e.g., along with the registration request. In some cases, the companion device402, using the service application with management console404, can detect that wearable device104is nearby and initiate the configuration process.
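The companion-mediated variant of FIG. 4 can be sketched in the same illustrative, non-authoritative style. Here the token stays on the local link: the companion's console generates it, sends it to the wearable over a low energy message channel, validates the value the user reads back, and only then contacts the cloud service to authenticate the user and fetch a configuration to push. The class, method, and channel names below are assumptions chosen for the example, not the disclosure's API.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 120  # assumed; the disclosure only requires a limited time to live

class CompanionConsole:
    """Companion-side sketch of the FIG. 4 configuration flow (hypothetical API)."""

    def __init__(self, cloud, local_channel):
        self.cloud = cloud                  # used only to authenticate the user and fetch config
        self.local_channel = local_channel  # e.g., a BLE or NFC message channel to the wearable
        self.pending = {}                   # token -> (wearable_id, expiry)

    def handle_registration_request(self, wearable_id):
        # Generate a short, single-use token and remember which wearable it belongs to.
        token = f"{secrets.randbelow(10**6):06d}"
        self.pending[token] = (wearable_id, time.time() + TOKEN_TTL_SECONDS)
        # Send the token over the local message channel; the wearable renders it on its
        # display so the user in possession of the device can read it back.
        self.local_channel.send(wearable_id, {"type": "token", "token": token})

    def handle_user_token_entry(self, token, user_credentials):
        entry = self.pending.pop(token, None)
        if entry is None or time.time() > entry[1]:
            raise PermissionError("unknown or expired token")
        wearable_id, _ = entry
        # Authenticate/authorize the user with the cloud service, then fetch a configuration
        # and push it back to the wearable over the same local channel.
        session = self.cloud.authenticate(user_credentials)
        config = self.cloud.fetch_wearable_config(session, wearable_id)
        self.local_channel.send(wearable_id, {"type": "registration_complete", "config": config})
```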
Token The token (as used herein) can include a one-time password or string, which can only be used during a limited period of time. The token can be randomly generated to be valid during the limited period of time, or can be generated based on a token generating function or mathematical formula. If the user is authenticated and authorized by the cloud service, the token being generated can be unique to the user. The token (as used herein) can be embodied in text form, audio form, image form, or video/animation form. Using non-text forms can further increase the chance of the user being an actual person, and not a computer program trying to hijack the token. For instance, the token (as used herein) can include a series of numbers and/or letters, and the wearable device can output the token via an output part (e.g., electronic display, haptic output, speaker, etc.) so that a user in possession of the wearable device can learn/receive/consume the token. If the token is transmitted to the wearable device using a text or multimedia message and an identifier of the wearable device (e.g., the direct addressable communication ID), the message can include the token as a string, an image having the token, an audio clip that vocalizes the token, etc. In some cases, the token can be delivered using a “robocall”, wherein a computer-generated voice call can deliver the token to the wearable device and the user in possession of the wearable device via audio. Overview: Leveraging Full Featured Proxy Devices in Proximity to Conserve Resources of Wearable Device Based on Context and/or Priority This part of the disclosure describes the method and system through which a wearable device that has constrained CPU (computer processing unit), sensor, and energy resources optimizes its resource usage by leveraging a companion device. A wearable device has low computing power and energy resources (battery). Due to this, although the wearable has full capabilities like sensors and communication capabilities, it is constrained by resources like battery and computing power. It will be of immense help if the wearable can leverage the capabilities of a companion device nearby whenever possible to conserve its resources. 
However, solving the problem is not trivial. In view of one or more of the above-mentioned issues, mechanisms described herein can allow a wearable to have enough resources on board for it to work independently when the wearable is not in proximity with a companion device, and to leverage the proximal (full-featured) device if it is available to offload battery consuming tasks (when the wearable is in proximity with the companion device). The embodiments disclosed herein provide for using the onboard resources optimally based on context (e.g., proximity to a companion device) and/or priority (e.g., priority level of a particular task). Generally speaking, the wearable device and the companion device can communicate with each other via a low energy communication link to perform functions such as discovery and to implement services involving, e.g., sensor data, use of Internet connectivity, and compute tasks. In one example use case, the wearable can leverage the Internet access of a smartphone that is nearby (in proximity), to gain Internet access if the wearable itself does not have Internet access. The wearable could also leverage a stronger or faster Internet access (or network connectivity), and/or save battery by using the companion device's Internet access. In another example use case, the wearable can get a more accurate location without compromising on the battery life by using the companion device's global positioning system sensor. In another example use case, the wearable and the companion device are paired for proximity detection. In yet another example use case, a wearable (e.g., a watch) detects when the preconfigured companion device is nearby (in proximity) and switches to companion proxy mode. In this mode, the virtual tunnel to the external world is opened through the companion device. All the communication is done through this tunnel, leveraging the sensors, communication stack, and/or CPU of the companion device wherever possible. In another example use case, tasks/application activities are prioritized based on urgency, and the ones that can be deferred are held until a companion device is in proximity. A high priority task is performed using the resources of the wearable device if the companion device is not in proximity. In yet a further example use case, other tasks (low priority tasks) are queued and, when the (paired/registered/trusted) companion device is in proximity, the queued tasks are performed using the resources of the companion device. The mechanisms described herein are typically implemented for wearable devices and companion devices which have been configured/paired/registered using the methods described herein (e.g., schemes illustrated inFIGS.2-4). Process Flow Based on Context and Priority One important feature of intelligent conservation of resources of the wearable device paired with the companion device is that the wearable device can perform tasks based on context and priority. Within the present disclosure, context can include the state of the wearable device (e.g., available battery life, available compute resources, current processes being executed by the wearable device) and the state of the environment of the wearable device (e.g., time of day, proximity to companion devices, day of the week, temperature, location, etc.). Priority is generally associated with the nature of the task of interest. The wearable device may include a data structure (e.g., stored in memory) which associates different tasks with different priorities. The priorities can be predefined for various types of tasks. In some cases, a task may include metadata which specifies the priority of the task. Priority can be associated with varying levels of urgency and/or importance.
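As one illustration of the kind of data structure just described, the following Python sketch maps task types to predefined priorities, lets per-task metadata override the default, and captures a simple context snapshot. The names, fields, and two-level priority scheme are assumptions made for the example; the disclosure leaves the representation open.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Priority(IntEnum):
    LOW = 0
    HIGH = 1

# Predefined priorities for task types; a task's own metadata, when present, overrides this.
DEFAULT_PRIORITIES = {
    "emergency_alert": Priority.HIGH,
    "priority_call": Priority.HIGH,
    "mail_sync": Priority.LOW,
    "weather_refresh": Priority.LOW,
}

@dataclass
class Task:
    kind: str
    payload: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)  # may carry an explicit "priority"

    def priority(self) -> Priority:
        if "priority" in self.metadata:
            return Priority(self.metadata["priority"])
        return DEFAULT_PRIORITIES.get(self.kind, Priority.LOW)

@dataclass
class Context:
    """Snapshot of device and environment state used to decide how to run a task."""
    battery_percent: float
    companion_in_proximity: bool
    free_compute: bool = True
```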
FIG.5is a process flow diagram illustrating a wearable device conserving its resources based on context and priority, according to some embodiments of the disclosure. The example shown is merely for illustration; it is understood by one skilled in the art that other suitable process flows can be prescribed based on different contexts/priorities. The illustrative example shows the wearable device checking whether a task is of high priority (502). If yes, the wearable device performs the task (504). If no, the wearable device queues the task (506). The illustrative example further shows the wearable device checking whether the companion device is in proximity to the wearable device (508). If yes, the wearable device performs the task while leveraging the resources of the companion device (510). If no, the wearable device continues to defer the performance of the task and waits until the companion device is in proximity to the wearable device. FIG.6is a block diagram illustrating a high-level architecture of a wearable device, according to some embodiments of the disclosure. The example shown supplements the wearable device104ofFIG.1B. Specifically, the memory132can be provided with a queue146to allow the wearable device104to queue tasks that are being deferred until the wearable device is in proximity to the companion device. Moreover, the wearable device104can be provided with a proximity detector148, which can actively search the surroundings of the wearable device104for its companion device, and/or can receive a notification from the companion device when the companion device detects the wearable device104is in proximity to the companion device as an indication that the wearable device is in fact in proximity to the companion device. The proximity detector148can be implemented through, e.g., Bluetooth Low Energy, a Near Field Communication channel, or some other suitable wireless communication mechanism enabling proximity detection. FIG.7is a flow diagram illustrating a method for leveraging a companion device in proximity to a wearable device to conserve resources of the wearable device, according to some embodiments of the disclosure. The method includes a wearable device queuing a first task based on a priority level associated with the first task (702). The wearable device (e.g., a proximity detector148ofFIG.6) can determine that the companion device is in proximity to the wearable device (704). The wearable device can configure a first communication channel between the wearable device and the companion device when the companion device is in proximity to the wearable device (706), e.g., using a low energy communication channel. To intelligently conserve the resources of the wearable device, the wearable device can perform the first task using the first communication channel and one or more resources of the companion device (708).
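Putting the flows of FIG. 5 and FIG. 7 together, a minimal scheduling sketch might look like the following. It builds on the Task/Priority sketch above; the proximity detector and companion channel objects are hypothetical placeholders standing in for the proximity detector (148) and the low energy communication channel, and real firmware would of course handle errors, retries, and channel teardown.

```python
from collections import deque
from enum import IntEnum

class Priority(IntEnum):  # mirrors the Priority enum from the earlier sketch
    LOW = 0
    HIGH = 1

class WearableScheduler:
    """Sketch of the FIG. 5 / FIG. 7 decision flow (hypothetical interfaces)."""

    def __init__(self, proximity_detector, companion_channel):
        self.proximity_detector = proximity_detector  # e.g., a BLE-based detector (FIG. 6, 148)
        self.companion_channel = companion_channel    # low energy channel, opened on demand
        self.queue = deque()                          # deferred low priority tasks (FIG. 6, 146)

    def submit(self, task):
        # 502/504/506: run high priority tasks locally right away; queue everything else.
        if task.priority() == Priority.HIGH:
            self.run_locally(task)
        else:
            self.queue.append(task)

    def on_tick(self):
        # 508/510: when the companion is in proximity, drain the queue through the companion.
        if not self.proximity_detector.companion_nearby():
            return
        channel = self.companion_channel.open()
        while self.queue:
            task = self.queue.popleft()
            channel.offload(task)  # companion performs the task on the wearable's behalf

    def run_locally(self, task):
        # Placeholder for executing the task with the wearable's own resources.
        pass
```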
Example: When the Wearable Device is in Proximity to a Companion Device FIG.8is a block diagram illustrating a scenario when a wearable device is in proximity to a more capable device, according to some embodiments of the disclosure. In this scenario, both the wearable device104and the companion device402have a service application (service application106and service application802). To enable queuing and prioritization of service application activities, sensors and/or communication stack (sensors and/or communication stack802and sensors and/or communication stack804) are provided on both or at least one of the wearable device and the companion device. The sensors can enable either one or both of the wearable device104and companion device402to determine whether the other device is nearby. The communication stack can provide the stack for transmitting and receiving communications between any two or more of the following: the wearable device104, the companion device402, and the cloud service102. Furthermore, the companion device402(generally having more resources) can be communicably connected to the cloud service102to act as a proxy for communications between the cloud service and the wearable device. When a task has a low priority level, the wearable device104(e.g., using the service app106) can queue the task. When the companion device is nearby, a backlog (queue) of low priority tasks, or all tasks, can be performed by the companion device402, and the companion device402cooperates with the wearable device104to complete those tasks in order to leverage the processing power and resources of the companion device402. The following describes some examples of performing such tasks while leveraging the resources of the companion device402. In one example, performing of a task includes communicating with a service remote from the wearable device (e.g., the cloud service102) and the companion device via a communication channel between the wearable device104and the companion device402and a communication channel configured between the companion device402and the service (e.g., the cloud service102). Data can be communicated through a tunnel that is established between the wearable device104and the cloud service102. Typically, the communication channel between the wearable device104and the companion device402is a low energy communication channel, e.g., Bluetooth Low Energy, a near field communication channel, or any suitable wireless communication channel. The communication channel between the companion device402and the service can be provisioned over a cellular network, the Internet, and/or an Intranet, and may consume more power, be more capable, and/or have higher bandwidth than the communication channel between the wearable device104and the companion device402. In another example, performing of a task includes obtaining, by the wearable device104, sensor data measured by the companion device402and/or derived data from the sensor data via a communication channel between the wearable device104and the companion device402. Advantageously, the wearable device104can offer rich features without having to physically include the sensors offered by the companion device402and/or consume computational resources or power of the wearable device104to make measurements using those sensors. The sensor data and/or the derived data can enrich the service application106of wearable device104. In yet another example, performing of a task includes triggering, by the wearable device104, a computation task to be performed using resources of the companion device402to generate a result. For instance, the wearable device104can transmit a batch of data to the companion device402and request the companion device402to process the batch of data to generate derived data. Optionally, the wearable device104can receive the result of the computation task from the companion device402over the communication channel between the wearable device104and the companion device402. Advantageously, the wearable device104can request the companion device402to perform computationally expensive tasks (e.g., processing or filtering data on behalf of the wearable device104), without having to consume significant computational resources and/or power of the wearable device104.
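A minimal sketch of the sensor and compute offloading just described, from the wearable's side, could look like the following. The JSON message format, the operation names, and the channel's send/receive methods are assumptions chosen for readability; the disclosure only requires that the request and the result travel over the communication channel between the wearable device and the companion device.

```python
import json

class CompanionProxy:
    """Wearable-side helper that offloads work to a nearby companion (hypothetical channel API)."""

    def __init__(self, channel):
        self.channel = channel  # low energy message channel to the companion device

    def fetch_sensor_data(self, sensor: str) -> dict:
        # Ask the companion for a reading from one of its sensors (e.g., "gps"),
        # instead of powering the wearable's own sensor (if it even has one).
        self.channel.send(json.dumps({"op": "read_sensor", "sensor": sensor}))
        return json.loads(self.channel.receive())

    def offload_computation(self, batch: list) -> dict:
        # Ship a batch of raw samples to the companion and receive the derived data back,
        # rather than filtering/processing the batch on the wearable itself.
        self.channel.send(json.dumps({"op": "process_batch", "batch": batch}))
        return json.loads(self.channel.receive())
```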
Example: When the Wearable Device is Not in Proximity to a Companion Device FIG.9is a block diagram illustrating a scenario when a wearable device is not in proximity to a more capable device, according to some embodiments of the disclosure. In this scenario, the wearable device104has a (direct) communication channel with the cloud service102for communicating and completing high priority activity only (when the companion device is not nearby or in proximity to the wearable device104). In such scenarios, the wearable device104can perform a task based on a priority level associated with the task, when the companion device is not in proximity to the wearable device. For example, if the task has a high priority level, the wearable device104does not queue the task, and can perform the task without significant delay. A task with a high priority level can be associated with one or more of the following: emergency communication, priority communication, and (certain) communication with a service remote from the wearable device. In some cases, the wearable device104is configured to process incoming requests from the cloud service102requesting that the wearable device104perform a task (e.g., display information, output notifications, generate data in response to the request, etc.). The wearable device104can determine the priority level based on metadata that is provided with the request. In some cases, the metadata includes a priority level. In some cases, the metadata includes an identifier that is usable by the wearable device104to determine a priority level associated with the request. Based on the priority level, the wearable device104can determine whether to queue or not to queue the task being requested by the incoming request. Other Embodiments and System Illustrations Note that with the examples provided herein, interaction may be described in terms of two, three, or more computing devices. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of computing devices. Moreover, the wearable and companion systems are readily scalable and can be implemented across a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of wearable and companion systems as potentially applied to a myriad of other architectures. It is also important to note that the functions related to wearable and companion systems as disclosed herein illustrate only some of the possible wearable and companion systems functions that may be executed by, or within, systems illustrated in theFIGS.1A-B,2-4,6,8and9. Some of these operations (e.g., in relation to all the FIGURES) may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present disclosure. 
In addition, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by embodiments described herein in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure. Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. Additionally, although systems inFIGS.1A-B,2-4,6,8and9have been illustrated with reference to particular elements and operations that facilitate the functions of the wearable and companion systems, these elements and operations may be replaced by any suitable architecture, protocols, and/or processes that achieve the intended functionality of the wearable and companion systems. In one example implementation, various devices or components involved in implementing the embodiments described herein can include software for achieving the described functions, and these devices or components disclosed herein may comprise software embodied in one or more non-transitory, tangible media for facilitating the activities described herein. At least a part of the systems and devices (e.g., wearable device, service application, sensors, communication stack, companion device, cloud service, management console (could also be referred to a “configuration console”), proximity detector, and any components shown inFIGS.1A-B,2-4,6,8and9for enabling wearable and companion systems) disclosed herein may also include a memory device (or memory element) for storing information to be used in achieving the functions as outlined herein. Additionally, the systems and devices (e.g., wearable device, service application, sensors, communication stack, companion device, cloud service, management console (could also be referred to a “configuration console”), proximity detector, and any components shown inFIGS.1,4-6for enabling wearable and companion systems) described herein may include one or more processors that is capable of executing software or an algorithm to perform the functions as discussed in this Specification. These devices may further keep information in any suitable memory element (random access memory (“RAM”), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. For instance, the memory element can include a queue for deferred tasks. Any of the memory items discussed herein should be construed as being encompassed within the broad term “memory element.” Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term “processor.” Each of the network elements can also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. 
Note that in certain example implementations, the functions outlined herein and in any of the figures/drawings included herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit (“ASIC”), digital signal processor (“DSP”) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.). In some of these instances, a memory element is provided to store data used for the operations described herein. This includes the memory element being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (“FPGA”), an erasable programmable read only memory (“EPROM”), an electrically erasable programmable ROM (“EEPROM”)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include one or more non-transitory, tangible, machine readable media having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods. The term “machine readable medium” used herein shall include any medium that is capable of storing or encoding a sequence of instructions for execution by the machine and that cause the machine to perform any one of the methods described herein. The term “non-transitory machine readable medium” shall accordingly include, but not be limited to, memories such as solid-state memories, optical and magnetic disks. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, and so on) as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action or produce a result. It should be noted that some of the infrastructure discussed herein (e.g., cloud service, management console (could also be referred to a “configuration console”), and any components shown inFIGS.1A-B,2-4,6,8and9for enabling wearable and companion systems) can be provisioned as part of any type of network element. 
In particular, the infrastructure can facilitate management and configuration of wearable devices with companion devices, and/or provide services subscribed by the wearable device and/or companion device. As used herein, the terms e.g., cloud service, management console (could also be referred to a “configuration console”), and any components shown inFIGS.1A-B,2-4,6,8and9for enabling wearable and companion systems can encompass computers, servers, network appliances, hosts, routers, switches, gateways, bridges, virtual equipment, load-balancers, firewalls, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. Moreover, the network elements may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. In one implementation, networked elements/devices (e.g., wearable device, service app, sensors, communication stack, companion device, cloud service, management console (could also be referred to a “configuration console”), and any components shown inFIGS.1A-B,2-4,6,8and9for enabling wearable and companion systems having network connectivity or communication channel with another component) can include software to achieve (or to foster) the concept of wearable and companion systems. This could include the implementation of instances of any of the components, engines, logic, etc. shown in the diagrams included herein. Additionally, each of these devices can have an internal structure (e.g., a processor, a memory element, etc.) to facilitate some of the operations described herein. In other embodiments, these management activities may be executed externally to these devices, or included in some other network element to achieve the intended functionality. Alternatively, these network devices may include software (or reciprocating software) that can coordinate with other network elements in order to achieve the wearable and companion systems described herein. In still other embodiments, one or several devices may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. Note that with the example provided above, as well as numerous other examples provided herein, interaction may be described in terms of two, three, or four network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that topologies illustrated in and described with reference to the figures/drawings included herein (and their teachings) are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the illustrated topologies as potentially applied to a myriad of other architectures. It is also important to note that the steps in the preceding flow diagrams (e.g., shown inFIGS.5and7) illustrate only some of the possible signalling scenarios and patterns that may be executed by, or within, communication systems shown in the figures/drawings included herein. 
Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by communication systems shown in the figures/drawings in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure. Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. For example, although the present disclosure has been described with reference to particular communication exchanges, embodiments described herein may be applicable to other architectures. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims. In accordance with the well-established principle that an “applicant is entitled to be his or her own lexicographer,” MPEP 2111.01 (IV), citing In rePaulson,30 F.3d 1475, 1480 (Fed. Cir. 1994), certain terms have been expressly defined herein. It is expressly intended that those terms have the definitions supplied, and that they not be given any interpretation inconsistent with those definitions. EXAMPLES AA: Configuring a wearable device comprises the one or more of processes outlined in the description forFIGS.2and3using the system shown inFIG.1, where the steps are performed by one or more of the following: the wearable device, service app, management console, cloud service, and the user. AB: A system implementing AA can optionally include a companion device which can proxy communications which occurred between the service application and the cloud service inFIGS.2and3, such that the service application communicates with the cloud service through the companion device instead of directly with the cloud service. The companion device may serve the functions of the management console shown inFIGS.2and3. BA: Performing service application activities of a wearable device while leveraging a companion device having more computing resources than the wearable device comprises one or more of the processes outlined in the description forFIGS.4-6. 
BB: A system implementing BA can optionally include sensors provided on either the wearable device or the companion device to detect whether the companion device is in proximity to the wearable device (or vice versa). Communication stack is provided to maintain communications (e.g., a queue) between the wearable device and the companion device. BC: Performing service application activities in BA can optionally include prioritizing service application activities, performing low priority activities using the wearable device when the companion device is nearby the wearable device; and using the companion device as a proxy. In such a case the companion device may actually perform the activities on behalf of the wearable device. BD: Performing service application activities in BA or BC can optionally include performing high priority activities using the wearable device and not using the companion device as a proxy. Example 1 is a wearable device (or broadly, an apparatus), comprising: a memory element operable to store electronic code; and a processor operable to execute instructions associated with the electronic code, said instructions for leveraging a companion device in proximity to a wearable device to conserve resources of the wearable device, such that the wearable device is configured to queue a first task based on a priority level associated with a first task, determine that the companion device is in proximity to the wearable device, configure a first communication channel between the wearable device and the companion device when the companion device is in proximity to the wearable device, and perform the first task using the first communication channel and one or more resources of the companion device. In Example 2, the wearable device of Example 1 can optionally include the wearable device having one or more of the following characteristics when compared with the companion device: fewer computing resources, fewer communication or network resources, fewer power resources, and fewer user input methods. In Example 3, the wearable device of any one of Examples 1-2 can optionally include the first task having a low priority level. In Example 4, the wearable device of any one of Examples 1-3 can optionally include performing of the first task comprising: communicating with a service remote from the wearable device and the companion device via the first communication channel and a second communication channel configured between the companion device and the service. In Example 5, the wearable device of any one of Examples 1-4 can optionally include the first communication channel being a near field communication channel and/or a wireless communication channel. In Example 6, the wearable device of any one of Examples 1-5 can optionally include performing of the first task comprising: obtaining sensor data measured by the companion device and/or derived data from the sensor data via the first communication channel. In Example 7, the wearable device of any one of Examples 1-6 can optionally include performing the first task comprising: triggering a computation task to be performed using resources of the companion device to generate a result; and receiving the result of the computation task from the companion device over the first communication channel. 
In Example 8, the wearable device of any one of Examples 1-7 can optionally include the wearable device being further configured to: perform a second task based on a priority level associated with the second task, when the companion device is not in proximity to the wearable device. In Example 9, the wearable device of Examples 8 can optionally include the second task having a high priority level. In Example 10, the wearable device of any one of Examples 8 or 9 can optionally include the second task being associated with one or more of following: emergency communication, priority communication, and communication with a service remote from the wearable device. In Example 11, the wearable device of any one of Examples 1-10 can optionally include the wearable device being further configured to: transmit a registration request to a service remote from the wearable device; in response to transmitting the registration request, receive a token generated by the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the token has a limited time to live; output, by an output of the wearable device, the token to a user; and after the user provides the token to a management console in communication with the service, receive a message from the service over the message channel indicating that the registration request is complete. In Example 12, the wearable device of any one of Examples 1-10 can optionally include the wearable device being further configured to: receive a token generated by a service remote from the wearable device in a text or multimedia message, wherein the token has a limited time to live, and the text or multimedia message is transmitted using an identifier of the wearable device; output, by an output of the wearable device, the token to a user; and after the user inputs the token at a management console in communication with the service, receive a message from the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the message indicates that the registration request is complete. In Example 13, the wearable device of any one of Examples 1-10 can optionally include the wearable device being further configured to: transmit a registration request to the companion device; in response to transmitting the registration request, receive, by the wearable device, a token generated by the service via a message channel between the wearable device and the companion device, wherein the message channel is mapped to the token, and the token has a limited time to live; output, by an output of the wearable device, the token to a user; and after the user provides the token to a management console of the companion device, receive a message from the companion device over the message channel indicating that the registration request is complete. In Example 14, the wearable device of any one of Examples 1-13 can optionally include the wearable device being a computing system. 
Example 15 is an apparatus for leveraging a companion device in proximity to a wearable device to conserve resources of the wearable device, comprising: (means for storing electronic code;) means for queueing a first task based on a priority level associated with the first task; means for determining that the companion device is in proximity to the wearable device; means for configuring a first communication channel between the wearable device and the companion device when the companion device is in proximity to the wearable device; and means for performing the first task using the first communication channel and one or more resources of the companion device. In Example 16, the apparatus of Example 15 can optionally include the wearable device having one or more of the following characteristics when compared with the companion device: fewer computing resources, fewer communication or network resources, fewer power resources, and fewer user input methods. In Example 17, the apparatus of any one of Examples 15-16 can optionally include the first task having a low priority level. In Example 18, the apparatus of any one of Examples 15-17 can optionally include the means for performing of the first task comprising: means for communicating with a service remote from the wearable device and the companion device via the first communication channel and a second communication channel configured between the companion device and the service. In Example 19, the apparatus of any one of Examples 15-18 can optionally include the first communication channel being a near field communication channel and/or a wireless communication channel. In Example 20, the apparatus of any one of Examples 15-19 can optionally include the means for performing of the first task comprising: means for obtaining sensor data measured by the companion device and/or derived data from the sensor data via the first communication channel. In Example 21, the apparatus of any one of Examples 15-20 can optionally include the means for performing the first task comprising: means for triggering a computation task to be performed using resources of the companion device to generate a result; and means for receiving the result of the computation task from the companion device over the first communication channel. In Example 22, the apparatus of any one of Examples 15-21 can optionally include: means for performing a second task based on a priority level associated with the second task, when the companion device is not in proximity to the wearable device. In Example 23, the apparatus of Example 22 can optionally include the second task having a high priority level. In Example 24, the apparatus of any one of Examples 22 or 23 can optionally include the second task being associated with one or more of the following: emergency communication, priority communication, and communication with a service remote from the wearable device. 
In Example 25, the apparatus of any one of Examples 15-24, can optionally include: means for transmitting a registration request to a service remote from the wearable device; means for, in response to transmitting the registration request, receiving a token generated by the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the token has a limited time to live; means for outputting, by an output of the wearable device, the token to a user; and means for, after the user provides the token to a management console in communication with the service, receiving a message from the service over the message channel indicating that the registration request is complete. In Example 26, the apparatus of any one of Examples 15-24 can optionally include: means for receiving a token generated by a service remote from the wearable device in a text or multimedia message, wherein the token has a limited time to live, and the text or multimedia message is transmitted using an identifier of the wearable device; means for outputting, by an output of the wearable device, the token to a user; and means for, after the user inputs the token at a management console in communication with the service, receiving a message from the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the message indicates that the registration request is complete. In Example 27, the apparatus of any one of Examples 15-24 can optionally include: means for transmitting a registration request to the companion device; means for, in response to transmitting the registration request, receiving, by the wearable device, a token generated by the service via a message channel between the wearable device and the companion device, wherein the message channel is mapped to the token, and the token has a limited time to live; means for outputting, by an output of the wearable device, the token to a user; and means for, after the user provides the token to a management console of the companion device, receiving a message from the companion device over the message channel indicating that the registration request is complete. In Example 28, the apparatus of any one of Examples 15-27 can optionally include the apparatus being a computing system. Example 29 is a method for leveraging a companion device in proximity to a wearable device to conserve resources of the wearable device, comprising: queueing, by a wearable device, a first task based on a priority level associated with a first task; determining that the companion device is in proximity to the wearable device; configuring a first communication channel between the wearable device and the companion device when the companion device is in proximity to the wearable device; and performing the first task using the first communication channel and one or more resources of the companion device. In Example 30, the method of Example 29 can optionally include the wearable device having one or more of the following characteristics when compared with the companion device: fewer computing resources, fewer communication or network resources, fewer power resources, and fewer user input methods. In Example 31, the method of Example 29 or 30 can optionally include the first task having a low priority level. 
In Example 32, the method of any one of Examples 29-31 can optionally include performing the first task comprising: communicating with a service remote from the wearable device and the companion device via the first communication channel and a second communication channel configured between the companion device and the service. In Example 33, the method of any one of Examples 29-32 can optionally include the first communication channel being a near field communication channel and/or a wireless communication channel. In Example 34, the method of any one of Examples 29-33 can optionally include performing the first task comprising: obtaining sensor data measured by the companion device and/or derived data from the sensor data via the first communication channel. In Example 35, the method of any one of Examples 29-34 can optionally include performing the first task comprising: triggering a computation task to be performed using resources of the companion device to generate a result; and receiving the result of the computation task from the companion device over the first communication channel. In Example 36, the method of any one of Examples 29-35 can optionally include performing a second task based on a priority level associated with the second task, when the companion device is not in proximity to the wearable device. In Example 37, the method of Example 36 can optionally include the second task having a high priority level. In Example 38, the method of Example 36 or 37 can optionally include the second task being associated with one or more of following: emergency communication, priority communication, and communication with a service remote from the wearable device. In Example 39, the method of any one of Examples 29-38 can optionally include: transmitting a registration request to a service remote from the wearable device; in response to transmitting the registration request, receiving a token generated by the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the token has a limited time to live; outputting, by an output of the wearable device, the token to a user; after the user provides the token to a management console in communication with the service, receiving a message from the service over the message channel indicating that the registration request is complete. In Example 40, the method of any one of Examples 29-38 can optionally include: receiving a token generated by a service remote from the wearable device in a text or multimedia message, wherein the token has a limited time to live, and the text or multimedia message is transmitted using an identifier of the wearable device; outputting, by an output of the wearable device, the token to a user; and after the user inputs the token at a management console in communication with the service, receiving a message from the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the message indicates that the registration request is complete. 
In Example 41, the method of any one of Examples 29-38 can optionally include: transmitting a registration request to the companion device; in response to transmitting the registration request, receiving, by the wearable device, a token generated by the service via a message channel between the wearable device and the companion device, wherein the message channel is mapped to the token, and the token has a limited time to live; outputting, by an output of the wearable device, the token to a user; and after the user provides the token to a management console of the companion device, receiving a message from the companion device over the message channel indicating that the registration request is complete. Example 42 is one or more machine-readable media including code that, when executed, causes a machine to perform the method of any one of claims 29-41. Example 43 is an apparatus comprising means for performing the method of any one of claims 29-41. In Example 44, the apparatus of claim 43 can optionally include the means for performing the method comprising a processor and a memory. In Example 45, the apparatus of claim 43 can optionally include the memory comprising machine-readable instructions that, when executed, cause the apparatus to perform the method of any one of claims 29-41. In Example 46, the apparatus of any one of claims 43-45 can optionally include the apparatus being a computing system. Example 47 is at least one computer-readable media comprising instructions that, when executed, implement the method of any one of claims 29-41 or realize the apparatus of any one of claims 43-46. Example 48. One or more non-transitory, tangible, computer-readable storage media encoded with instructions that, when executed, cause one or more processing units to perform operations for leveraging a companion device in proximity to a wearable device to conserve resources of the wearable device, wherein the operations comprise: queueing, by a wearable device, a first task based on a priority level associated with a first task; determining that the companion device is in proximity to the wearable device; configuring a first communication channel between the wearable device and the companion device when the companion device is in proximity to the wearable device; and performing the first task using the first communication channel and one or more resources of the companion device. In Example 49, the media of Example 48 can optionally include the wearable device having one or more of the following characteristics when compared with the companion device: fewer computing resources, fewer communication or network resources, fewer power resources, and fewer user input methods. In Example 50, the media of Example 48 or 49 can optionally include the first task having a low priority level. In Example 51, the media of any one of Examples 48-50 can optionally include performing the first task comprising: communicating with a service remote from the wearable device and the companion device via the first communication channel and a second communication channel configured between the companion device and the service. In Example 52, the media of any one of Examples 48-51 can optionally include the first communication channel being a near field communication channel and/or a wireless communication channel. 
In Example 53, the media of any one of Examples 48-52 can optionally include performing the first task comprising: obtaining sensor data measured by the companion device and/or derived data from the sensor data via the first communication channel. In Example 54, the media of any one of Examples 48-53 can optionally include performing the first task comprising: triggering a computation task to be performed using resources of the companion device to generate a result; and receiving the result of the computation task from the companion device over the first communication channel. In Example 55, the media of any one of Examples 48-54 can optionally include the operations comprising performing a second task based on a priority level associated with the second task, when the companion device is not in proximity to the wearable device. In Example 56, the media of Example 55 can optionally include the second task having a high priority level. In Example 57, the media of Example 55 or 56 can optionally include the second task being associated with one or more of the following: emergency communication, priority communication, and communication with a service remote from the wearable device. In Example 58, the media of any one of Examples 48-57 can optionally include the operations comprising: transmitting a registration request to a service remote from the wearable device; in response to transmitting the registration request, receiving a token generated by the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the token has a limited time to live; outputting, by an output of the wearable device, the token to a user; and after the user provides the token to a management console in communication with the service, receiving a message from the service over the message channel indicating that the registration request is complete. In Example 59, the media of any one of Examples 48-57 can optionally include the operations comprising: receiving a token generated by a service remote from the wearable device in a text or multimedia message, wherein the token has a limited time to live, and the text or multimedia message is transmitted using an identifier of the wearable device; outputting, by an output of the wearable device, the token to a user; and after the user inputs the token at a management console in communication with the service, receiving a message from the service over a message channel between the wearable device and the service, wherein the message channel is mapped to the token, and the message indicates that the registration request is complete. In Example 60, the media of any one of Examples 48-57 can optionally include the operations comprising: transmitting a registration request to the companion device; in response to transmitting the registration request, receiving, by the wearable device, a token generated by the service via a message channel between the wearable device and the companion device, wherein the message channel is mapped to the token, and the token has a limited time to live; outputting, by an output of the wearable device, the token to a user; and after the user provides the token to a management console of the companion device, receiving a message from the companion device over the message channel indicating that the registration request is complete.
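The priority-based offloading recited in Examples 1 and 8-10 above can be illustrated with a short sketch in Python. The sketch is illustrative only and is not the claimed implementation: the proximity sensor, the channel factory, and the send/receive calls are hypothetical placeholders standing in for whatever near field or wireless mechanism a particular wearable device and companion device actually use.

```python
import heapq
from dataclasses import dataclass
from typing import Callable, List, Tuple

HIGH, LOW = 0, 1  # smaller number sorts first in the heap, so HIGH runs before LOW


@dataclass
class Task:
    name: str
    priority: int
    run: Callable[[], object]   # work performed locally on the wearable device
    offloadable: bool = True    # low priority work that the companion may perform as a proxy


class WearableDevice:
    """Sketch of queueing tasks by priority and leaning on a nearby companion device."""

    def __init__(self, proximity_sensor: Callable[[], bool], channel_factory):
        self._queue: List[Tuple[int, int, Task]] = []
        self._counter = 0                          # tie-breaker so equal priorities stay FIFO
        self._proximity_sensor = proximity_sensor  # hypothetical: True when companion is nearby
        self._channel_factory = channel_factory    # hypothetical: configures an NFC/wireless channel

    def queue_task(self, task: Task) -> None:
        # Queue the task based on the priority level associated with the task.
        heapq.heappush(self._queue, (task.priority, self._counter, task))
        self._counter += 1

    def drain(self) -> None:
        channel = self._channel_factory() if self._proximity_sensor() else None
        while self._queue:
            _, _, task = heapq.heappop(self._queue)
            if task.priority == HIGH or channel is None or not task.offloadable:
                task.run()                 # high priority work stays on the wearable device
            else:
                channel.send(task.name)    # hypothetical call: proxy low priority work
                channel.receive_result()   # hypothetical call: e.g., sensor or computed data
```

In this sketch, high priority work (e.g., emergency or priority communication) always runs on the wearable device itself, while queued low priority work is proxied to the companion device only when the proximity check succeeds, mirroring Examples 3 and 8-10.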
77,972
11861397
DETAILED DESCRIPTION Cloud Computing in General It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. 
The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.1, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.1are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.2, a set of functional abstraction layers provided by cloud computing environment50(FIG.1) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.2are intended to be illustrative only and embodiments of the invention are not limited thereto. 
As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and mobile desktop96. Data Processing System in General FIG.3is a block diagram of an example DPS according to one or more embodiments. The DPS may be used as a cloud computing node10. In this illustrative example, the DPS100may include communications bus102, which may provide communications between a processor unit104, a memory106, persistent storage108, a communications unit110, an I/O unit112, and a display114. The processor unit104serves to execute instructions for software that may be loaded into the memory106. The processor unit104may be a number of processors, a multi-core processor, or some other type of processor, depending on the particular implementation. A number, as used herein with reference to an item, means one or more items. Further, the processor unit104may be implemented using a number of heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, the processor unit104may be a symmetric multi-processor system containing multiple processors of the same type. The memory106and persistent storage108are examples of storage devices116. 
A storage device may be any piece of hardware that is capable of storing information, such as, for example without limitation, data, program code in functional form, and/or other suitable information either on a temporary basis and/or a permanent basis. The memory106, in these examples, may be, for example, a random access memory or any other suitable volatile or non-volatile storage device. The persistent storage108may take various forms depending on the particular implementation. For example, the persistent storage108may contain one or more components or devices. For example, the persistent storage108may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by the persistent storage108also may be removable. For example, a removable hard drive may be used for the persistent storage108. The communications unit110in these examples may provide for communications with other DPSs or devices. In these examples, the communications unit110is a network interface card. The communications unit110may provide communications through the use of either or both physical and wireless communications links. The input/output unit112may allow for input and output of data with other devices that may be connected to the DPS100. For example, the input/output unit112may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, the input/output unit112may send output to a printer. The display114may provide a mechanism to display information to a user. Instructions for the operating system, applications and/or programs may be located in the storage devices116, which are in communication with the processor unit104through the communications bus102. In these illustrative examples, the instructions are in a functional form on the persistent storage108. These instructions may be loaded into the memory106for execution by the processor unit104. The processes of the different embodiments may be performed by the processor unit104using computer implemented instructions, which may be located in a memory, such as the memory106. These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and executed by a processor in the processor unit104. The program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as the memory106or the persistent storage108. The program code118may be located in a functional form on the computer readable media120that is selectively removable and may be loaded onto or transferred to the DPS100for execution by the processor unit104. The program code118and computer readable media120may form a computer program product122in these examples. In one example, the computer readable media120may be computer readable storage media124or computer readable signal media126. Computer readable storage media124may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of the persistent storage108for transfer onto a storage device, such as a hard drive, that is part of the persistent storage108. The computer readable storage media124also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory, that is connected to the DPS100. In some instances, the computer readable storage media124may not be removable from the DPS100. 
Alternatively, the program code118may be transferred to the DPS100using the computer readable signal media126. The computer readable signal media126may be, for example, a propagated data signal containing the program code118. For example, the computer readable signal media126may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communications links, such as wireless communications links, optical fiber cable, coaxial cable, a wire, and/or any other suitable type of communications link. In other words, the communications link and/or the connection may be physical or wireless in the illustrative examples. In some illustrative embodiments, the program code118may be downloaded over a network to the persistent storage108from another device or DPS through the computer readable signal media126for use within the DPS100. For instance, program code stored in a computer readable storage medium in a server DPS may be downloaded over a network from the server to the DPS100. The DPS providing the program code118may be a server computer, a client computer, or some other device capable of storing and transmitting the program code118. The different components illustrated for the DPS100are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a DPS including components in addition to or in place of those illustrated for the DPS100. Other components shown inFIG.1can be varied from the illustrative examples shown. Multiple Special Function Queues in a Scheduler Containers and container orchestration are utilized to effectively and efficiently accomplish cloud computing tasks. A container is a ready-to-run software package that can be sent from a host and run/operated/performed on a node. Each container can include all features required to run an application. The container can be imaged onto any set of hardware that is capable of running the software included in a container. Each container is included in a pod. Each pod can include more than one container; however, for purposes of this application they will generally be considered to have a 1:1 ratio. In some embodiments, a pod can be defined as a self-contained deployable unit managed by a container orchestration solution. A container orchestration solution (a “container manager”) can be designed to distribute containers/pods onto one or more remote computing nodes (or “nodes”). Various embodiments allow for constraints and/or rules that dictate how and when pods are distributed and executed on one or more different nodes. The distribution can be based on several factors. The factors can include CPU type, node type, disk type, node hardware, pod configuration, node configuration, and other similar factors. Generally, container orchestration managers can include a scheduler. The scheduler can manage how pods are distributed to which nodes. In some embodiments, the scheduler includes several components to perform the distribution of pods such as an event handler, an error handler, and a pod queue. Each scheduler can contain one of each component. For example, a newly created and/or terminated pod can be handled by one queue, one error handler, and one event handler. However, having a single queue can cause higher priority pods to get less favorable placement, later deployment based on location in the queue, and/or a less efficient distribution than pods that precede the higher priority pods in the queue. 
Additionally, creating a complete scheduler for the higher priority pods can greatly increase the usage of computational resources. Embodiments of the present disclosure can include a scheduler that has two or more separate queues in one scheduler. All of the queues in the scheduler can share the remaining scheduler components. This can be an improvement on previous systems, where a second queue would require additional time and resources to build and execute all of the additional components within a second scheduler (e.g., second error handler, etc.). Additionally, a second scheduler will consume the resources to generate and run additional pods. A single container cannot be managed by more than one container manager. In some embodiments, the two queues in the scheduler can be configured to manage different types of pods. In some embodiments, one queue can be a standard pod queue (or standard queue), and the other queue can be a special function pod queue (or special queue). In some embodiments, one queue can be a first special queue for a first pod type and the second queue can be a second special queue for a second pod type. The two queues can share the remaining resources of the scheduler (e.g., event handler). The other components of the scheduler can be configured to identify a special pod and a standard pod. For example, the event handler and error handler can identify when a pod is a special pod and/or which type of special pod it is, and send it to the appropriate queue. In some embodiments, one or more of the queues can include a special function. Said differently, the queues can be associated with pod-level requirements. For example, if a pod includes regulated data, the pod can include regulatory-compliant workloads. The container manager can then sort the pod into a queue based on the regulatory framework related to the pod. Embodiments of the present disclosure can include regulatory-aware handling (e.g., placement, management, etc.) of pods. In some embodiments, the container manager can be configured to identify special function pods/containers. In some embodiments, a pod can be special based on an indication of being special. The indication can be added to the pod upon creation by a human and/or an application. In some embodiments, a pod can be identified as special based on an application included in the pod. For example, if the application is a financial application then the pod can be considered special. As another example, if the application is related to processing personal information of users, the pod can be special. In some embodiments, the pod can be special if the pod is subject to regulations. The regulations can be government regulations (e.g., laws, regulations, etc.) and/or organization regulations (e.g., corporation mandated monitoring). In some embodiments, a pod can be special if it requires a particular set of components to run on the node (e.g., hardware and/or software). In general, any predefined characteristics or attributes associated with the pod may be used to designate it as special. The aforementioned advantages are example advantages, and embodiments exist that can contain all, some, or none of the aforementioned advantages while remaining within the spirit and scope of the present disclosure. Referring now to various embodiments of the disclosure in more detail,FIG.4is a representation of a computing environment400that is capable of running a container manager in accordance with one or more embodiments of the present disclosure. 
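The single-scheduler, two-queue arrangement described above can be sketched as follows. The sketch is a minimal illustration rather than an implementation of any particular container orchestration system: the Pod attributes and the classification criteria are placeholder assumptions drawn from the examples above (explicit marking, a financial application, or regulated personal data), and a real event handler and error handler would do considerably more.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Pod:
    name: str
    marked_special: bool = False           # explicit indication added at creation
    application: str = ""                  # e.g., "financial" or "web"
    processes_personal_data: bool = False  # regulated data implies a special pod


def is_special(pod: Pod) -> bool:
    # Any predefined characteristic or attribute can designate a pod as special;
    # the criteria here mirror the examples given above.
    return (pod.marked_special
            or pod.application == "financial"
            or pod.processes_personal_data)


class Scheduler:
    """One scheduler, two queues; the event handler and error handler are shared."""

    def __init__(self):
        self.standard_queue = deque()
        self.special_queue = deque()

    def handle_event(self, pod: Pod) -> None:
        # Shared event handler: classify the pod once and route it to the right queue.
        (self.special_queue if is_special(pod) else self.standard_queue).append(pod)

    def handle_error(self, pod: Pod) -> None:
        # Shared error handler: a terminated/failed pod is re-queued through the same logic.
        self.handle_event(pod)


scheduler = Scheduler()
scheduler.handle_event(Pod("web-frontend", application="web"))
scheduler.handle_event(Pod("payments", application="financial"))
assert len(scheduler.standard_queue) == 1 and len(scheduler.special_queue) == 1
```

The environment ofFIG.4, described next, shows where such a scheduler sits relative to the host, the nodes, and the network.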
Many modifications to the depicted environment may be made by those skilled in the art without departing from the scope of the disclosure. Computing environment400includes host410, node430(1), node430(2), through node430(n), where n is an integer, and network450. Node430(1), node430(2), through node430(n) can be referred to, individually and/or representatively as nodes430. Network450can be, for example, a telecommunications network, a local area network (LAN), a wide area network (WAN), such as the Internet, or a combination of the three, and can include wired, wireless, or fiber optic connections. Network450may include one or more wired and/or wireless networks that are capable of receiving and transmitting data, voice, and/or video signals, including multimedia signals that include voice, data, and video information. In general, network450may be any combination of connections and protocols that will support communications between host410, nodes430, and other computing devices (not shown) within computing environment400. In some embodiments, each of host410, and/or nodes430may include a computer system, such as the data processing system100ofFIG.3. Host410can be a standalone computing device, a management server, a web server, a mobile computing device, or any other electronic device or computing system capable of receiving, sending, and processing data. In other embodiments, host410can represent a server computing system utilizing multiple computers as a server system, such as in a cloud computing environment. In some embodiments, host410includes container manager412, application416, and scheduler420. Container manager412can be any combination of hardware and/or software configured to operate the lifecycle of containers (e.g., container433). In some embodiments, container manager412controls and automates tasks including, but not limited to, provisioning and deployment of containers, redundancy and availability of containers, allocation of resources between containers, movement of containers across a host infrastructure, and load balancing between containers and/or nodes430. In some embodiments, container manager412includes a container orchestration system (e.g., Kubernetes®). In some embodiments, one or more of application416and scheduler420are included in container manager412. They are shown separately for description purposes. Application416can be any combination of hardware and/or software configured to carry out a function on a computing device (e.g., host410). In some embodiments, application416is a web application. In some embodiments, application416can be packaged in one or more pods. In some embodiments, application416can represent any number of separate applications. The applications can be combined/grouped into one or more pods or containers. In some embodiments, application416can initiate the generation of a pod. Scheduler420can be any combination of hardware and software configured to distribute pods/containers to a node within computing environment400. In some embodiments, scheduler420identifies pods and distributes the pods to a node. The identified pods can be newly created pods and/or terminated/failed pods (e.g., node failure). In some embodiments, scheduler420can determine a pod is a special pod and/or determine a particular node is a special node. Any pod that is not a special pod can be a standard pod. In some embodiments, a pod can be special based on an indication of being special. 
The indication can be added to the pod upon creation by a human and/or an application, and the pod may be designated as special according to the description above. In some embodiments, scheduler420includes event handler421, error handler422, placement rules423, standard queue425, and special queue426. Event handler421can be any combination of hardware and/or software configured to identify a pod sent to/created by scheduler420and/or container manager412. In some embodiments, event handler421can determine a pod is available to be distributed to a node. In some embodiments, event handler421can determine if the container is a special container/pod. The determination can be based on data within the pod. Event handler421can analyze the pod to determine if it meets any criteria to be considered a special pod. Error handler422can be any combination of hardware and/or software configured to manage a terminated/failed pod. On occasion, a pod will be terminated from a node (e.g., power failure at node, etc.). Error handler422can identify that the pod is no longer running on the node and resend the pod to a queue. In some embodiments, error handler422can determine if the pod is a standard pod and/or a special pod. In some embodiments, error handler422can perform functions similar to event handler421. However, error handler422handles pods that are being returned from a node. For example, if a special pod is terminated, error handler422can identify the special pod and send the special pod to special queue426to be redistributed to a node. Placement rules423can be a set of instructions that direct how container manager412distributes pods to nodes (e.g., nodes430). In some embodiments, placement rules423includes one or more configuration files (e.g., YAML Ain't Markup Language (YAML) configuration files). Placement rules423can be edited/updated. In some embodiments, placement rules423can include a standard rule set and a special rule set. The special rule set can be rules that are applied to special pods and/or pods in special queue426. In some embodiments, the special rule set can be used to distribute special pods to special nodes. Standard queue425can store one or more pods to be distributed to a node. In some embodiments, standard queue425receives pods from event handler421and/or error handler422. Standard queue425can identify characteristics of the pod, compare the characteristics of the pod against placement rules423, determine availability of nodes430, and/or determine the capabilities (e.g., disk type, etc.) of nodes430. In some embodiments, standard queue425can send a pod to a node that satisfies all of the rule, availability, and/or capability limitations of the pod. Special queue426can be consistent with standard queue425with the exception that it handles special pods instead of standard pods. In some embodiments, special queue426can receive special pods from event handler421and/or error handler422. In some embodiments, special queue426only sends/assigns special pods to special nodes. In some embodiments, special pods can be sent to any node (e.g., special node and standard node). This can be based on the special rule set of placement rules423. In some embodiments, special queue426can include one or more special functions. The special functions can be toggled on and off. One example of a special function includes key management. The key management special function can manage a life cycle of encryption keys. This can include key storage, key distribution, key rotation, and key revocation for each pod. 
The container may only be valid (for the host and/or the node) as long as it includes a valid key. Another example special function includes a vulnerability scanner. The vulnerability scanner can analyze the node and/or container to identify potential security vulnerabilities and/or errors. An identified vulnerability can cause special queue426to remove the pod from the queue without assigning it to a node. It may also notify the source of the identified vulnerability (e.g., by an error message, email, etc.). Another example special function can include a compliance manager. The compliance manager can detect and mitigate regulatory risks. For example, if a container is configured to process personal information, the compliance manager can ensure that appropriate permissions have been obtained prior to deploying the pod. In some embodiments, the one or more special functions can be used for one pod collectively (e.g., one pod can utilize all or some of the special functions). In some embodiments, the one or more special functions can be applied to a single pod (e.g., each pod is associated with one special function). In some embodiments, standard queue425and special queue426both include special functions. Said differently, both queues can be special function queues (e.g., a first special queue and a second special queue). The special functions can be different. For example, a first special queue can have a vulnerability special function and a second special queue can have a personal information regulatory special function. Placement rules423can be configured to accurately place pods from either queue. In some embodiments, special functions can overlap. For example, the first special queue can have a key management and personal information regulatory special function, and the second special queue can have a key management and financial transaction regulatory special function.
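The toggleable special functions described above (key management, vulnerability scanning, and compliance checking) can be sketched as simple hooks that a special queue runs before a pod is assigned to a node. The function and attribute names below are illustrative assumptions rather than the API of any particular orchestrator; the point is only that a failing special function keeps the pod out of the queue and notifies its source.

```python
from typing import Callable, Dict, List

SpecialFunction = Callable[[Dict], None]  # inspects a pod and raises to block assignment


def key_management(pod: Dict) -> None:
    # Placeholder for per-pod key storage, distribution, rotation, and revocation.
    if not pod.get("encryption_key"):
        raise ValueError(f"{pod['name']}: no valid key, so the container is not valid")


def vulnerability_scan(pod: Dict) -> None:
    if pod.get("known_vulnerabilities"):
        raise ValueError(f"{pod['name']}: vulnerability identified")


def compliance_check(pod: Dict) -> None:
    if pod.get("processes_personal_data") and not pod.get("consent_obtained"):
        raise ValueError(f"{pod['name']}: permissions not obtained for personal data")


class SpecialQueue:
    """A queue whose special functions can be toggled on and off per queue."""

    def __init__(self, functions: List[SpecialFunction]):
        self.functions = list(functions)
        self.pods: List[Dict] = []

    def add(self, pod: Dict) -> bool:
        try:
            for fn in self.functions:          # apply every enabled special function
                fn(pod)
        except ValueError as err:
            print(f"notify source: {err}")     # e.g., error message or email to the pod's source
            return False                       # pod is dropped without being assigned to a node
        self.pods.append(pod)
        return True


queue = SpecialQueue([key_management, vulnerability_scan, compliance_check])
queue.add({"name": "payments", "encryption_key": "k1", "processes_personal_data": False})
```

Two special queues could simply be constructed with different, possibly overlapping, function lists, which is the arrangement described above for a first special queue and a second special queue.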
Node430(1) can be any combination of hardware and/or software configured to run one or more pods. In some embodiments, node430(1) can be a standalone computing device, a management server, a web server, a mobile computing device, or any other electronic device or computing system capable of receiving, sending, and processing data. In some embodiments, node430(1) can represent a server computing system utilizing multiple computers as a server system, such as in a cloud computing environment. In some embodiments, node430(1) includes container client431and pod432. Container client431can be any combination of hardware and/or software configured to run a container on a node. In some embodiments, container client431works complementarily with container manager412. Container client431can be a node agent that runs on each node and/or a container runtime agent (e.g., Kubelet® as an agent to Kubernetes®). Container client431can interface with container manager412to receive and execute containers as instructed by container manager412. Container client431can receive one or more pods from container manager412and/or scheduler420. Pod432can be a virtual structure configured to transfer containers between a host (e.g., host410) and a node (e.g., node430(1)). In some embodiments, a pod can include one or more containers. However, a single container is depicted for discussion purposes. In some embodiments, pod432is a special pod, and will include a special container. Container433can be any combination of hardware and/or software configured to run an application on a remote node. A container can be a software package that includes the necessary instructions and/or data to perform a specified task. It can include a runtime, all system libraries, and application libraries needed to fully accomplish the task of the application. Node430(2) and node430(n) can be consistent with node430(1) (e.g., they can contain a client, a pod, and a container). In some embodiments, one or more of nodes430is a standard node, and one or more of nodes430is a special node. For example, nodes430(1) through430(3) can be standard nodes and nodes430(4) and430(5) can be special nodes. This example is not limiting; in various embodiments, any number of nodes can be standard nodes and any number can be special nodes, in no particular order (e.g., odd-numbered nodes are special, even-numbered nodes are standard). A special node can be configured to run a special pod and/or a special container. Special nodes can include particular hardware and/or software that is designed to run one or more containers. The particular hardware and/or software can be for security, processing speed, location, and/or other similar factors. For example, a governmental regulation may limit the location where personal data of a customer can be sent. The special node can be a node located in a particular location (e.g., in the same geographical boundary as the host). As another example, the special node may contain a particular security system (e.g., encryption, key management, etc.). In some embodiments, a special node can accept and run a standard pod. In these embodiments, the standard pod can be removed and replaced by container manager412. The replacement can be in response to a new special pod being received by special queue426. The replaced standard pod can be returned to the standard queue425and reassigned. This allows for the higher priority special pods to fully utilize the special nodes. FIG.5depicts a flowchart of an example method500for assigning special pods to special nodes. Method500can be performed in a computing environment (e.g., computing environment400and/or cloud computing environment50). One or more of the advantages and improvements described above for using a special scheduling queue in parallel with a standard queue may be realized by method500, consistent with various embodiments of the present disclosure. Method500can be implemented by one or more processors, host410, container manager412, application416, nodes430, their subcomponents, and/or a different combination of hardware and/or software. In various embodiments, the various operations of method500are performed by one or more of host410, container manager412, application416, scheduler420and/or its subcomponents, nodes430, container client431, pod432, and container433. For illustrative purposes, method500will be described as being performed by container manager412. At operation502, container manager412receives/generates one or more pods. In some embodiments, container manager412generates the one or more pods. The generation can be in response to host410and/or application416providing instructions to generate the pods. In some embodiments, a user can initiate the receipt/generation of a pod. In some embodiments, the one or more pods are received by event handler421. In some embodiments, the pods are received from error handler422. The received pod can be a pod that was previously assigned to a node and terminated. The termination can be in response to a node error (e.g., losing power, etc.). 
The termination can be instructed by container manager412(e.g., part of load balancing, normal operation of moving pods, etc.). At operation504, container manager412obtains pod information. In some embodiments, container manager412may analyze the received pod. Container manager412can determine a pod type, whether the pod is marked special, one or more applications included in the pod, a data type, and/or other information. At operation506, container manager412determines if the pod is a special pod. The determination can be based on the pod information obtained at operation504. In some embodiments, a pod is determined to be special based on the pod being designated as special. In some embodiments, the pod is determined to be special based on a pod type, a pod source, and/or an application included in the pod. In some embodiments, the pod is determined to be special based on the type of data processed by the pod. If it is determined the pod is a special pod (506:YES), then container manager412proceeds to operation510. If it is determined the pod is not special (i.e., a standard pod) (506:NO), then container manager412proceeds to operation508. At operation508, container manager412adds the standard pod to standard queue425. Upon completion of operation508, container manager412proceeds to operation514. At operation510, container manager412adds the special pod to special queue426. At operation512, container manager412initiates/performs the special function on the special pod. In some embodiments, all of the special functions for the pod can be performed. In some embodiments, the special function performed is based on the results of operation504. More than one special function can be performed on each pod. At operation514, container manager412assigns the pod to a node from the appropriate queue. In some embodiments, the assignment is based on placement rules423. Container manager412monitors the availability and capability of each node, and selects an appropriate node for the queue. In some embodiments, a special rule set included in placement rules423is used to place special pods and a standard rule set is used for the standard pods. In some embodiments, one or more nodes are designated as special nodes. A special node is any node that can run a particular special pod. For example, a first special function can require a first special node, and a second special function can require a second special node. In some embodiments, a special node can be any node that meets a minimum set of requirements to operate the special pod. The minimum set of requirements can be obtained from placement rules423and/or obtained from the pod in operation504. In some embodiments, a standard pod can be assigned to a special node. This can occur when the special queue is empty and/or the special node meets all other rules of the standard set of rules. In these embodiments, the standard pod can be terminated in response to a new pod being added to the special queue and needing deployment to the special node. In some embodiments, the special queue pods are assigned prior to the standard queue pods. This allows computing resources to be allocated to higher priority tasks. In some embodiments, the queues are assigned in parallel. For example, assume a first pod is in the special queue, and a second pod is in the standard queue. The special queue can be performing the special function on the first pod. During this time, and based on available resources, container manager412can assign the second pod to an appropriate node. As soon as the special function is complete, container manager412can assign the first pod to an appropriate node. This can increase the overall efficiency of the system. One example efficiency benefit is that the second pod can be deployed and not wait in a single queue behind the first pod while the special function is performed. At operation516, container manager412operates the pods. The operation can include monitoring, balancing, managing data transfer, and other tasks associated with operation of one or more pods. In some embodiments, operation516includes terminating the pod. The termination can be temporary (e.g., for rebalancing), or more permanent (e.g., batch workload complete). The operations of method500are described for a single pod. However, the operations of method500can be performed in parallel for more than one pod simultaneously. For example, it is possible to have a bottleneck of pods in the queues. Operations502and504may be performed multiple times as pods are added to the queues at a faster rate than the pods are assigned to nodes.
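The flow of operations 502 through 514 of method500, including the return of a standard pod that is occupying a special node, can be summarized in the following sketch. The Node dataclass, the dictionary-based pod representation, and the helper names are illustrative assumptions; an actual container manager would perform these steps through its own scheduler and placement rules423.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Node:
    name: str
    is_special: bool = False
    free: bool = True
    running: Optional[Dict] = None   # the pod currently placed on the node, if any


def schedule(pod: Dict, standard_queue: List[Dict], special_queue: List[Dict],
             nodes: List[Node], special_functions) -> Optional[Node]:
    """Illustrative walk through operations 504-514 for a single pod."""
    special = bool(pod.get("special"))                 # operations 504/506: obtain info, classify

    queue = special_queue if special else standard_queue
    queue.append(pod)                                  # operation 508 or operation 510
    if special:
        for fn in special_functions:                   # operation 512: run the special functions
            fn(pod)

    # Operation 514: special pods are placed on special nodes; a standard pod
    # already occupying a special node is returned to the standard queue.
    candidates = [n for n in nodes if n.is_special] if special else nodes
    for node in candidates:
        if special and not node.free and node.running and not node.running.get("special"):
            standard_queue.append(node.running)        # preempt and re-queue the standard pod
            node.running, node.free = None, True
        if node.free:
            node.running, node.free = pod, False
            queue.remove(pod)
            return node
    return None                                        # no suitable node yet; the pod stays queued


# Minimal usage: one standard node, one special node, one special pod.
nodes = [Node("node-1"), Node("node-2", is_special=True)]
placed = schedule({"name": "payments", "special": True}, [], [], nodes, special_functions=[])
assert placed is not None and placed.is_special
```

Operation516(operating, monitoring, and eventually terminating the pod) is omitted from the sketch.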
Computer Technology and Computer Readable Media The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
46,638
11861398
DETAILED DESCRIPTION OF EXAMPLES In general, the following detailed description is given first with reference to tasks, also referred to as missions, performed by movable vehicles based on instruction data items received from a remote control apparatus. It is, however, noted that although some of the examples are given in the context of unmanned vehicles, or autonomous vehicles, a device controlled based on the principles disclosed herein can be any machine or other apparatus that can perform a task without direct human operator control based on instruction data items received from a control apparatus in response to a request for instructions. For example, an unmanned device may comprise a robot, another manipulator, a surveillance device and so forth. FIG. 1 shows a control apparatus 10 configured for remotely controlling an unmanned device. In the example such device comprises an unmanned aerial vehicle (UAV) 20. In the case of UAVs, the control apparatus may be provided, for example, by a ground control station (GCS) configured to provide control instructions for and communicate with at least one unmanned aerial vehicle. Unmanned aerial vehicles are often called “drones”. The control apparatus 10 can comprise at least one processor 11, 12 and at least one memory 15. The at least one memory 15 comprises computer code that, when executed on the at least one processor, causes the apparatus to perform at least one of the herein described functions. The control apparatus 10 can be configured to communicate via an appropriate data communication system using one or more appropriate communication protocols. The communications may be via local networks, wide area networks or even direct communications between the control station and the unmanned device. For example, communication may be based on 4th or 5th generation (4G, 5G) communication systems and protocols, or later developments of the currently envisaged communication systems. The communications may be carried at least in part on wireless links 24. The protocols may be based on an appropriate connectionless protocol. What is needed for the purposes of the herein described examples is that the remote control station 10 is capable of receiving messages from and sending messages to an on-board data processing apparatus of the unmanned vehicle 20. Thus the control apparatus comprises data communications circuitry, denoted by reference 14, for receiving and transmitting data. It is understood that although the receiving circuitry and transmitting circuitry and various possible components thereof are shown as one block, the circuitry can comprise a number of circuitries. Such circuitries may share at least some components between them. Instruction data items 16 are shown to be available at the at least one memory 15. The instruction data items can comprise control instructions such as information about next location coordinates, e.g., in longitude and latitude, altitude, speed, acceleration, braking, distance, and so on. Control instruction items for a path of travel are often called waypoints. Examples for communication of the data items 16 to the unmanned device 20 will be described below. The unmanned vehicle 20 is configured to receive control information in the form of the data items 16 from the control station 10. 
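A waypoint-style instruction data item of the kind held as items 16 might look like the following minimal sketch; the field names and the particular fields chosen are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class InstructionDataItem:
    """One waypoint-style instruction data item (cf. items 16); fields are illustrative."""
    seq: int            # position within the mission
    latitude: float
    longitude: float
    altitude_m: float
    speed_mps: float

# A mission is simply an ordered list of instruction data items held in memory 15.
mission = [
    InstructionDataItem(0, 60.1699, 24.9384, 50.0, 5.0),
    InstructionDataItem(1, 60.1704, 24.9410, 60.0, 5.0),
    InstructionDataItem(2, 60.1712, 24.9442, 60.0, 4.0),
]
```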
The unmanned aerial device apparatus of the example ofFIG.1comprises appropriate on-board data processing apparatus that can be located within the body21thereof and adapted for processing instructions form the control stations10and controlling operation of the unmanned device20accordingly. An example of the data processing apparatus will be described with reference toFIG.2. The UAV apparatus20further comprises an energy source22. The energy source may be embodied in a variety of different ways. For example, the apparatus may be powered by electrical energy or by a chemical fuel. Electrical energy may be stored in an energy storage arrangement, such as for example a battery or ultracapacitor. The apparatus may also be arranged to receive electrical power via a cable while in active operation/whilst executing a mission, providing effectively unlimited flying time but with range limited by the length of the cable. The energy source may also be provided by photovoltaic cells which power, in part or in full from light. Chemical fuel may be stored in the apparatus in a tank or other kind of suitable arrangement. Chemical fuel may comprise, for example, hydrogen for generating electrical energy on board in a fuel cell, or a combustible hydrocarbon fuel for combustion in a generator to generate electricity, and/or an engine to power the apparatus directly. FIG.2shows an example of control apparatus30that may execute any of the herein described operations at an unmanned device. The apparatus30comprises at least one processor32,33and at least one memory31. The at least one memory comprises computer code that, when executed on the at least one processor, causes the apparatus to perform at least one of the herein described operations. The apparatus further comprises communications interface34. The interface provides appropriate circuitry to enable receiving and transmitting of data by the control apparatus. The memory31can provide a data buffering function for received instruction data items35. The at least one data processor can read the data items35from the buffer and cause performance of operations relating to the task to be performed accordingly. The instruction data items35can be read in sequential order until the buffer is empty. The on-board data processing apparatus of the unmanned vehicle can generate autopilot-specific instructions from the received commands. The control station10may receive telemetry information from unmanned vehicles under its control. For example, an unmanned vehicle may be configured to transmit mission progress information (e.g. information about the current waypoint and remaining distance, about near-by objects, other moving vehicles, other moving vehicles in a swarm of vehicles, sensor data from other devices and so on) to the ground control station. Information about the operating condition and/or state of the device may also be provided. For example, information about the remaining energy levels may be communicated back to the control station. The control station can then take the received information into account in the control actions, including in decision making regarding what to include in response messages. In accordance with a non-limiting example mission control information may be transmitted as Micro Air Vehicle Link (MAVLink) commands. MAVLink is an open source, point-to-point communication protocol used between a ground control station and unmanned vehicles to carry telemetry and to command and control unmanned vehicles. 
It may be used to transmit the orientation of an unmanned vehicle, its GPS location and speed. The current form of the MAVLink message has a maximum length of 17 bytes, consisting of a 6 bytes header, 9 bytes payload and 2 bytes checksum (acknowledgments do not comprise a payload and thus have the minimum size of 8 bytes). The exact form of the MAVLink protocol is not static, and may evolve/change over time. The MAVLink protocol operates at the application layer. It is noted that MAVLink is only given herein as an illustrative example of a protocol operating at this level for this purpose, and other protocols and message sizes may be used instead of this. Messages for an unmanned vehicle are traditionally sent using separate packets. For the exemplifying MAVLink protocol, the following are examples of messages that may be communicated for a mission comprising N waypoints. A control station first sends to an unmanned vehicle a MISSION_COUNT message that can contain the amount of mission items for a mission. The control station then sends a MISSION_ITEM message that contains coordinates and parameters for a first waypoint. The control station then sends a MISSION_ITEM message that contains coordinates and parameters for a second waypoint, and so forth. Finally, the control station sends a last MISSION_ITEM message that contains coordinates and parameters for the Nth waypoint. After this the control station can send to the unmanned vehicle a MISSION_ACK message that indicates that the last waypoint has been transmitted. The unmanned vehicle responds with an acknowledgement of all of these messages. The inventor has realised that certain aspects of the communications may be improved by a mechanism where several of the messages for a mission each providing an instruction data item are included into a single response message for transmission to the unmanned device. This can allow for a faster information transfer and thus reduced latency because the needed data item is already available in the vehicle, mobile or another device. This can also be used to save processing energy at the device. This may be of particular importance in battery operated devices such as drones as other moving vehicles. Lower signalling-to-payload overhead may also be achieved. To this effect, the following describes several aspects, which may be implemented in isolation or in combination with each other, for achieving at least some of these and possible other advantages. FIG.3shows an example of a method of communicating instruction data items from a control apparatus to a device, for example an unmanned device. In the method the control apparatus receives at100a request for at least one instruction data item from the device. The control apparatus responds to the request at101by including a selected number of instruction data items in a response message. The number of items to be included in the response message can be selected based on knowledge whether the requested at least one data item has already been sent to the device or not. In accordance with a possibility one or more instruction data items are communicated from a control station after receiving a request for an instruction data item from a vehicle or a mobile. The request is responded by selectively including one instruction data item or more than one instruction data items in a response message to the request for an instruction data item. 
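The conventional per-item exchange just described can be rendered as the following sketch. The send/receive callbacks, the dictionary message encoding, and the assumption that the vehicle individually acknowledges every message are illustrative simplifications; this is not the actual MAVLink wire format.

```python
def conventional_mission_upload(items, send, receive):
    """Baseline per-item exchange: N MISSION_ITEM messages for N waypoints,
    each round trip acknowledged separately by the vehicle."""
    send({"type": "MISSION_COUNT", "count": len(items)})
    assert receive()["type"] == "ACK"                  # vehicle acknowledges the count
    for seq, item in enumerate(items):
        send({"type": "MISSION_ITEM", "seq": seq, "item": item})
        assert receive()["type"] == "ACK"              # one acknowledgement per waypoint
    send({"type": "MISSION_ACK"})                      # indicates the last waypoint was transmitted
    assert receive()["type"] == "ACK"
```

The cost of this baseline, one full round trip per waypoint, is what motivates packaging several instruction data items into a single response message as described above.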
The responding can comprise selectively including at least one instruction data item in the response message based at least partly on a determination that the requested at least one data item has been sent before. The included at least one data item can be different than any of the requested one or more data items. FIG. 4 shows a flowchart for operation where, after reception of the request at 110, multiple data items are included in the response message at 111 such that the number of instruction data items in the response message is greater than the number of data items indicated by the request. According to an aspect, the requested at least one data item is not included amongst the multiple data items in the response message. According to a possibility a single instruction data item or multiple instruction data items are selectively included in a response message to a request for a single instruction data item. A single instruction data item may be included in the response message in response to determining that the request concerns an instruction data item that has already been sent to the unmanned device. The single data item included can be different from the requested data item. Multiple instruction data items may be included in the response message in response to determining that the request concerns an instruction data item that has not yet been sent to the unmanned device. It is also possible for the control apparatus to dynamically adjust the number of instruction data items in a response message. The adjustment can take into account whether the requested data item has already been transferred and other factors, for example status information from the controlled unmanned device. A command can be included as a single instruction data item or it can be included among multiple instruction data items. A further command for stopping the task may comprise two or more identical instruction data items. This may be desired, for example, if for some reason the first command to stop is not effective. Some of the commands can be short and/or require urgent attention (such as a command for stopping the unmanned vehicle) and so should be sent as soon as possible. Other commands may be much longer in form, such as detailing a plurality of waypoint locations for a mission. The urgency can be taken into account as a part of the decision making process when deciding what to include in a response message. FIG. 5 shows an example of a response message 40. The message is provided by a data packet comprising a header part 41 and a payload part 42. The header part can be arranged according to the protocol used for the communications. The payload part 42 comprises multiple data items 43. In the example of FIG. 5 the payload is shown to carry nine data items but this can be any number accommodated by the protocol. The number of data items can also be changed during the mission. The transmission of the message 40 may be via an Internet Protocol (IP) mechanism. The transport protocol may be a connectionless protocol, such as the user datagram protocol (UDP) or another suitable protocol. What is relevant is that the control apparatus can selectively include more than one instruction data item in the payload in response to a determination, based at least in part on a received request, of how many instruction data items shall be sent in the response message to the requesting unmanned device. In accordance with a possibility, priority information is included in the packet. 
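A response message in the spirit of message 40 of FIG. 5 (header part plus a payload carrying multiple data items, optionally with priority information, transmitted over a connectionless transport) could be packed as in the following sketch. The JSON encoding, field names, and the helper functions are assumptions for illustration; the real encoding is protocol specific.

```python
import json
import socket

def build_response_message(items, priority=None):
    """Packs one or more instruction data items into a single payload
    (header part + payload part, cf. message 40 of FIG. 5)."""
    header = {"version": 1, "count": len(items)}
    if priority is not None:
        header["priority"] = priority              # optional priority information
    return json.dumps({"header": header, "payload": items}).encode()

def send_response_udp(items, addr, priority=None):
    """Transmits the response message as a single connectionless (UDP) datagram."""
    datagram = build_response_message(items, priority)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(datagram, addr)

# Example: nine data items carried in one response message, as in FIG. 5.
msg = build_response_message([{"seq": i} for i in range(9)])
```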
FIG.6shows an example of operation at an unmanned device. At200the unmanned device can send to a control apparatus a request for at least one instruction data item. The unmanned device then received at201from the control apparatus a response message to the request. The response message comprises a number of instruction data items, the number having been selected based at least partly on determination whether the requested at least one data item has been sent before to the unmanned device. According a possibility the response message comprises at least one instruction data item that was not requested by the request. Operation of the unmanned device can then be controlled at203based on the received at least one instruction data item in the response message. FIG.7shows another example for operation at an unmanned device. A first response message to a request send at210can be received at211from a remote control apparatus, the first response message including multiple instruction data items. An instruction data item of the received multiple instruction data items is then used at212for control of the unmanned device while the other instruction data items can be saved in a memory of the unmanned device. In case of a failure to obtain one of the saved instruction data items from the memory, a second request for said non-obtained instruction data item is sent at213. A second response message is received at214. The second response message can include at least one instruction data item, the at least one instruction data item being different from the instruction data item requested by the second request. The operation of the unmanned device is then continued at215based on the at least one instruction data item received in the second response message. The unmanned device may be arranged to transmit a plurality of acknowledgements, each acknowledgement corresponding to a respective one of the plurality of mission items. Thus, on receipt of a message comprising multiple data items, the unmanned device may be arranged to provide acknowledgments for each of the item. Each of the acknowledgements by the unmanned vehicle for each of the items may be transmitted separately to each other. Alternatively, a message with multiple data items can be acknowledged by a single acknowledgement message, or acknowledgements of multiple individual data items can be bundled into an acknowledgement message. An acknowledgement message may also include status information (e.g., rendered, not rendered) of the instruction data item and/or whether the waypoint of the instruction data item is reached by the vehicle. The acknowledgement message may also comprise the respective identification of instruction data items as used in the response message. The control station may be configured to wait between transmissions of messages with multiple data items. This can be provided in order to enable the unmanned device to receive and process the multiple packets in orderly fashion. The following describes certain more detailed examples. The examples illustrate principles that can be applied to any unmanned vehicle. For the purposes of illustration, in some of the examples the device is specified to comprise an unmanned aerial device (UAV). Control apparatus adapted for selective sending of data items in response messages to requests is occasionally referred to as a ground control station (GCS). The apparatus of the unmanned device for controlling operations at the device end is referred to as an autopilot. 
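The device-side behaviour of FIGS. 6 and 7 (consume buffered items in order, and fall back to a new request only when the next item cannot be obtained from the buffer) can be sketched as below. The class name, the message dictionaries, and the request/receive callbacks are assumptions; acknowledgement handling (per item, bundled, or as a single message) is omitted for brevity.

```python
from collections import deque

class AutopilotBuffer:
    """Device-side sketch of the FIG. 6 / FIG. 7 behaviour (illustrative only)."""

    def __init__(self, request_fn, receive_fn):
        self.request = request_fn      # sends a request message to the remote control apparatus
        self.receive = receive_fn      # blocks until a response message arrives
        self.buffer = deque()          # saved instruction data items (cf. memory 31 / items 35)
        self.expected_seq = 0

    def next_item(self):
        if not self.buffer:
            # Failure to obtain the next saved item from the buffer: send a
            # (second) request for it, per operation 213.
            self.request({"type": "REQUEST_ITEM", "seq": self.expected_seq})
            response = self.receive()                   # operation 214: the response may carry
            self.buffer.extend(response["payload"])     # items other than the one requested
        item = self.buffer.popleft()
        self.expected_seq = item["seq"] + 1
        return item                    # operations 212/215: control continues with this item
```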
Instruction data items are referred to as mission items. A ground control station (GCS) application, e.g., in one or more nests or centrally in a cloud can be used to control drones over a Long-Term Evolution (LTE) network or any other cellular network, new radio (NR, 5G) for example. Both the drone and the GCS may have radio modem to control the drones. A drone may have an on-board computer associated with autopilot for example. Relevant user equipment (UEs) can be provided by a ground station laptop and the drones themselves connected to at least one base station. An LTE modem with a registered subscriber identity module (SIM) may be required to enable a laptop or a drone to connect to an LTE network. The modems can often be connected to antennas for improved range. The E-UTRAN, as the name implies, forms the access network, which comprises of multiple evolved base stations called eNodeB (eNB) or (e/g)NodeB. These base stations serve one or more cells, which the UEs can connect to. The main task of the eNB is to handle the communications between the UEs and an evolved packet core (EPC). In addition to being connected to UEs and the EPC, each eNB is also connected to its nearby peers for the purpose of signalling and handover packet forwarding. Each UE and drone can belong to only one cell and communicate with only one eNB at a time. A handover can be performed whenever the UE moves to a new cell. The EPC forms the core network and it can contain entities such as a Home Subscriber Server (HSS), a Packet Data Network Gateway (PGW), a Serving Gateway (SGW) and a Mobility Management Entity (MME). The HSS is a central database for information related to users and subscriptions. It is queried by the MME, which is responsible for control plane operations and UE authentication. Each UE is assigned their own SGW, which handles the routing, forwarding and buffering of packets. PGW on the other hand handles IP address allocation for UEs and does all IP packet operations required for the connections towards a Packet Data Network (PDN). One of the advantages of the cellular technology like LTE or 5G is the capability to use spatial multiplexing with Multiple Input Multiple Output (MIMO) technique. This means that both the sender and receiver use multiple antennas simultaneously to transfer multiple data streams, increasing the bandwidth of the link and latency is decreased, and yet more in 5G. 5G is expected to have multiple radio interfaces, namely below 6 GHz, cmWave and mmWave. It is also expected that 5G can be integrated with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE. In other words, 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6 GHz-cmWave, below 6 GHz-cmWave-mmWave). One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility. The current architecture in LTE networks is fully distributed in the radio and fully centralized in the core network. 
Low latency applications and services in 5G may require bringing of the content close to the radio which leads to local break out and multi-access edge computing (MEC). 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors. MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time. Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), critical communications (e.g., autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications). Drones and other remotely controlled vehicles are traditionally controlled through a direct radio link, where both the drone itself and the ground station or handheld controller are equipped with radio transceivers and antennas. In this case, the vehicle must stay within range of the control station and within line of sight, or risk losing the connection. On the other hand, if an existing LTE network is used, then the vehicle can freely move wherever there is network coverage. This also means that the control software may be physically in a different location. Teleoperation refers to the traditional low-level control method where the drone is controlled with a radio controller or joysticks that allow manoeuvring. An alternative real-time control method is guided waypoints or corridors. With this method, instead of controlling the drone directly, the user specifies a location to which the drone or other vehicle will directly attempt to move to. Typically, this is implemented as a map application on a ground station application, where the user may click to set the current target waypoint and destination for a drone. Autonomous control can be considered a more sophisticated control method, which differs from direct teleoperation by having a device capable of performing operations even when communications are lost for a period of time. In its simplest form, autonomous control can be implemented as a list of mission or corridor items that the UAV or another vehicle can carry out in sequence. Communications can also go other way. For example, an UAV may send during flight a continuous stream of telemetry reports back to a ground station. These reports contain information about the location and status of the UAV, as well as acknowledgements for commands and updates for current mission status as well battery charge and error status, for example. According to an example shown in the signalling flowchart ofFIG.8a ground control station (GCS) starts a mission operation by sending a message50informing an autopilot on-board of a vehicle of the number of mission data items of the mission. The ground control station then receives a request51for a first mission item from the autopilot. 
The ground control station can then, instead of responding with the requested first mission item, send in response message 52 several mission items packaged into one response message. When generating the response message, the ground control station includes a greater number of mission items in the response message than was requested, the message including at least one data item that was not requested. The autopilot may substantially immediately request the next mission data item after receiving a mission data item. On the other hand, the ground control station can predict the mission data items that it will need to send in the foreseeable future. Therefore it is possible for the ground station control apparatus to send several mission data items at once in a response message instead of waiting and sending the data items one by one in response to requests for subsequent items. The autopilot can store the received mission data items in a buffer and read the mission data items directly from the buffer without needing to request them individually and wait for the response messages. The autopilot can use the saved mission data items from the buffer in sequential order. According to a possibility the autopilot can continue fetching the instruction/mission data items from the buffer as long as there are mission items available, and request a new one once there are no items left to read, or when it fails for some other reason to obtain the next item from the buffer. According to another possible way of operation the autopilot sends a new request after each mission data item it has processed. When the GCS receives a request it can respond by sending one or more mission data items in a response message, and ignore requests arriving while it is responding to said request. If the next mission data item is available in the buffer then there is no need to request it from the remote control apparatus and wait for the requested data item to arrive in a response message. This can reduce latency and signalling overhead in obtaining the mission items. In the example of FIG. 8 ten mission items, items 0-9, are shown to be sent in the response message 52. However, the number of mission items in a response message can be set differently, and be any appropriate integer that a protocol message can carry. For example, 2, 5, 9, 10, 20, or 25 instruction data items may be included in a response message. The number of items may be determined based on, for example, information such as whether the requested item has already been transmitted, the state of the energy level at the unmanned device, the processing and/or memory capacity and/or state of the unmanned device, the operating conditions of the unmanned device, and so forth. Sending too many mission items at a time, for example more than about thirty, may in certain circumstances cause the autopilot to fail to read the mission items correctly. A possible cause for this is that the buffer may become full and/or data items are lost because the autopilot receives data items faster than it can parse them. A solution to address this while maintaining the multiple-data-item-per-message transmission capability is to selectively change between sending single-item and multi-item response messages to requests. As mentioned above, the number of data items per response message can also be selectively adjusted. That is, a response message can contain, e.g., four data items and a subsequent response message, e.g., six items. 
According to a possibility the number of mission data items may be dynamically changed during an operation. The change can be made, e.g., in response to changed conditions, changes in the mission, changes in connection quality, battery state, memory state, and so on. The control apparatus can be configured to be self-learning in this regard. For example, if the control apparatus determines that there is a risk of buffer overflow with the currently set number of mission items per message, e.g. based on noting reception of repeated requests for items that have already been sent exceeding a threshold, the number of mission items per response message can be reduced, for example halved. The flowchart ofFIG.9and signalling flowchart ofFIG.10show an example where a ground control station (GCS) selectively switches between two different states of sending data items, i.e. between sending of response messages containing multiple data items and response messages containing a single data item. The state can depend on determination how the requests are sent and/or received from the unmanned device. In the example, if an autopilot at an unmanned device requests a mission item that has not been sent before, the ground control station sends the next multiple items (in the example ten items) to the autopilot. Otherwise the ground control station can assume that the unmanned device still has items waiting in the buffer, and the ground control station responds by sending one data item to avoid buffer overflow issues and to allow the autopilot to catch up. More particularly, after start of the operation at60, the ground control station can start mission upload by sending the number of mission items to the autopilot at61. In this example a message71is sent indicating that the mission comprises102items. The ground control station can then wait at62for a message (a request or an acknowledgement) from the autopilot. The autopilot sends a message72requesting for the first mission item [0]. The ground control station receives the message and determines at63the type of the message. The message can be determined to be either a request or an acknowledgement. If the message is acknowledgement (success or failure), the operation ends at64. Any appropriate feedback mechanisms can be used for this purpose. Autopilot can send mission acknowledgement messages (ACK) to acknowledge that mission has been successfully received, or that upload has failed (NACK). The ground control station can assume that all mission data items in positively acknowledged messages have been received and are correctly stored in the buffer by the autopilot. If the acknowledgement indicates a failure in the message delivery, the message, and all mission items therein, is resent. InFIG.10the ground control station responds to the first request72by sending the next ten items [0-9] in response message73. The instruction data items are stored in the buffer by the autopilot for use in control the operation of the unmanned device. However, as also shown inFIG.10, the autopilot can send a second request74for data item [1]. That is, rather than using data item [1] from the buffer, the autopilot sends request message74for item [1] as a missing item. In this example the missing item [1] has already been sent by the ground control station in message73, and can thus be assumed as being received. 
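The self-adjusting behaviour described above could be captured roughly as follows. The threshold of repeated requests, the halving rule, and the class name are assumptions made only to illustrate the idea of reducing the per-message item count when repeated requests for already-sent items hint at buffer overflow at the device.

```python
class BatchSizeController:
    """Self-adjusting items-per-response count (illustrative sketch)."""

    def __init__(self, initial=10, minimum=1, threshold=3):
        self.batch_size = initial        # current number of items per response message
        self.minimum = minimum
        self.threshold = threshold       # assumed limit on repeated requests
        self.repeat_requests = 0

    def record_request(self, already_sent: bool) -> int:
        if already_sent:
            self.repeat_requests += 1
            if self.repeat_requests >= self.threshold:
                # Reduce, e.g. halve, the number of mission items per response message.
                self.batch_size = max(self.minimum, self.batch_size // 2)
                self.repeat_requests = 0
        else:
            self.repeat_requests = 0
        return self.batch_size
```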
In the operation that follows, this triggers a determination of how many mission items can be included in the response message, an example of which is described in more detail below. Turning back to FIG. 9, if a received message is determined at 63 by the ground control station to be a request message for a mission data item, the ground control station can then perform the determination at 65 whether to respond to the request by a message carrying one mission data item or a message carrying multiple mission data items. A determination can be made whether or not the requested item has yet been sent. If the item has not been sent yet, this information can be considered to imply that the request is made by the autopilot because the buffer thereof is empty. Such determination can be made at 66. The ground control station can then send at 67 a message carrying the next multiple data items. In FIG. 10 this corresponds to, e.g., message 73 including the first ten items [0-9] sent in response to message 72 requesting item [0]. The first of the items [0] can be used first by the autopilot and the upcoming nine items [1-9] are stored in the buffer. Thus the next data items are already available in the buffer for use by the autopilot. A second message 77 carrying multiple items [11-20] can be sent in response to request 76 requesting item [11], and so forth. In other words, multi-item messages 73 and 77 are sent in response to determining that items [0] and [11] had not been previously requested and successfully sent. It is also possible to determine at block 65 of FIG. 9 that the requested item has already been sent. A feedback mechanism can have confirmed that the message delivering the item had been received by the autopilot. FIG. 10 shows request 74 as an example of such a situation. A determination can now be made at block 68 of FIG. 9 that it is probable that the autopilot buffer is not yet empty. Such determination can be made based on the assumption that, because of the previous sending of message 73, the autopilot still has some items left in the buffer to read. This assumption can take into account the time that it would take to process the data items in the buffer. Sending the next ten items in such a case could risk overflowing the buffer and failing the upload. Therefore just one instruction data item may be sent at 69. An example of such a message is denoted in FIG. 10 by message 75 including one data item [10]. In more general terms, and in examples not limited to autopilots and ground control stations, a control apparatus can determine at 65 how to respond to a request from a device. The control apparatus can decide to selectively respond by a message carrying one mission data item or a message carrying multiple mission data items. The control apparatus can also determine how many data items shall be included in the response message. A determination can be made at 65 whether or not the requested data item has already been sent. If the item has not been sent yet, this information can be considered to imply that the request is made because the buffer at the controlled device is empty. Such determination can be made at 66. The control apparatus can then send at 67 a response message carrying the next multiple data items. It is also possible to determine at block 65 of FIG. 9 that the requested data item has already been sent to the device. A feedback mechanism may be provided that may have confirmed that the message delivering the data item had been received by the device. 
A determination can then be made at block 68 that it is probable that the buffer at the device is not empty yet. Such determination can be made based on the assumption that, because of the previous sending of the data items, there still are some items left in the buffer, for example based on the assumption that the time that it would take to process the data items in the buffer has not run out yet. Sending multiple data items in such a case could risk overflowing the buffer and failing the upload. To avoid this, a response message with one instruction data item is sent at 69. After the decision to send a message with either multiple mission items or just one mission item, the operation returns to 62 where the control station waits for the next message to arrive from the device to be controlled, for example an unmanned vehicle. It is noted that it does not matter which one of items [1] to [9] of message 73 is requested by request 74. The example operation would work in the same manner, and the response message 75 would include mission item [10] regardless of which item has been requested. FIG. 10 shows several rounds of requests and responses according to the above described process, which is repeated until the autopilot sends mission ACK message 79. The ACK message can be used to either acknowledge that the complete mission has been successfully received, or that the upload has failed (NACK). A new mission can then be started, or the failed mission operation restarted. FIG. 11 shows yet another example where a ground control station sends a ‘STOP’ command (at any point) during a transaction. This may be done, e.g., because of a new mission. The autopilot can then send a ‘cancelled’ acknowledgement (MISSION NACK). The stop command can be sent substantially immediately whenever needed while the request-response communication continues in the background. An overriding emergency stop mechanism may, in practice, rarely be needed but may be required for example because of local laws and rules. The UAV can be stopped at the beginning of or during mission data upload, unless it has already been stopped. The stop command may be, e.g., a similar UDP packet as the other messages. A stop packet may be assigned a priority. The stop packet can be sent at the same time as ongoing request-response packets. FIG. 11 also illustrates the above mentioned possibility of the autopilot requesting each of the mission data items [2 . . . 10] while the ground control station is in a wait state after having already sent items [0-9]. The GCS ignores these requests and sends mission data item [10] not sent previously. Requests for instruction data items can be sent periodically and/or in response to detection that there are no instruction data items left to read. The requesting operation can be started in response to a task or mission start indication, for example in response to receipt of a mission count message at 61. If the operation is cancelled by either end, the requesting process can also be stopped. FIG. 12 shows an example where the remote control apparatus is provided by a mobile communication device, for example a handheld user device 90 of a user 91. The mobile communication device can be, for example, a smart phone, tablet, laptop computer or the like. The mobile communication device 90 can be configured for wireless data communications. The mobile communication device can be adapted for communication, for example, based on 4G and/or 5G, new radio technologies or similar technologies. 
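The decision flow of FIG. 9, as it plays out in the FIG. 10 example, can be sketched as the following loop at the control apparatus. The message dictionaries, transport callbacks, and the ten-item batch size are illustrative assumptions; only the branch structure (has-the-requested-item-been-sent-before) is taken from the description above.

```python
def run_mission_upload(items, send, receive, batch=10):
    """Sketch of the FIG. 9 loop at the control apparatus (illustrative only)."""
    send({"type": "MISSION_COUNT", "count": len(items)})
    next_unsent = 0                               # index of the first item not yet sent
    while True:
        msg = receive()                           # block 62: wait for a request or an ACK
        if msg["type"] == "MISSION_ACK":          # blocks 63/64: success or failure ends the upload
            return msg["result"]
        requested = msg["seq"]                    # block 63/65: a request for one mission item
        if requested >= next_unsent:
            # Blocks 66/67: the requested item has not been sent before, so the device
            # buffer is assumed empty -> send the next multiple items in one response.
            chunk = items[next_unsent:next_unsent + batch]
        else:
            # Blocks 68/69: the item was already sent, so the buffer is likely not yet
            # empty -> send a single not-previously-sent item to avoid overflowing it.
            chunk = items[next_unsent:next_unsent + 1]
        send({"type": "MISSION_ITEMS", "first_seq": next_unsent, "payload": chunk})
        next_unsent += len(chunk)
```

Tracing the FIG. 10 example: the request for item [0] yields items [0-9] in one message, the request for item [1] (already sent) yields the single item [10], and the request for item [11] yields items [11-20], matching the exchange described above.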
InFIG.12a communication system comprising at least one base station95is depicted by the cloud94. The communication system can be a wireless local area system or a wide area system such as a cellular communication system. An unmanned land vehicle92is also shown as being configured for communication with the communication system94. The unmanned vehicle92can comprise a data processing apparatus as shown e.g. inFIG.2. The wireless user device90can be provided with an application configured to provide the above described control and messaging functions. The application can be downloaded from a service provider. Instead of, or in addition to, communications via the communications system94and base station95, the mobile user device90and the unmanned vehicle may be configured to establish a direct communication link there between. Indeed, the above discussed functions and features are not limited to any particular communication environment, but may occur in any appropriate communication system where messages can be exchange between a remote control apparatus and an unmanned device configured to receive instructions from the control apparatus. The controlled device requesting for instruction data items may also comprise a control apparatus and/or a device controlled by such apparatus. Such control apparatus may comprise, for example, a processor apparatus configured to provide a device operating as an Internet of Things (IoT) type device. An Internet of things (IoT) type device can be seen as a one off device or a network of devices such as one or more vehicles, home appliances, industrial devices etc. that contain electronics, software, actuators, and connectivity which allows these devices to connect, interact and exchange data. The IoT can involve extending connectivity to any range of traditionally dumb or non-internet-enabled physical devices and everyday objects. Embedded with technology, these devices can communicate and interact over the Internet, and they can be remotely monitored and controlled. Instruction data item may be sent in, e.g., when there is a need for repair, maintenance or update of the software of such control apparatus. An example of device s operable in IoT environment is “wearable device”. These are a member of a class of electronic devices that can be worn on the body as an accessory or an implant, such as smart watches, fitness devices, so-called fashion electronics, and medical devices such as hearing aids, etc. IoT devices and associated controllers can be configured to operate according to the above described principles of request-response communication of control instructions. For example, in the detailed examples given in relation to an autopilot and control station the autopilot can be replaced by an IoT device and the control station by a control apparatus. The control apparatuses described herein can comprise appropriate circuitry. 
As used in this specification, the term “circuitry” may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and (c) hardware circuit(s) and or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.” This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in server, a cellular network device, or other computing or network device. Unmanned vehicles may form a swarm. One of such unmanned vehicles may be configured to act as the leader of the swarm. A further aspect can be provided in relation to swarms and collision avoidance mechanisms for unmanned vehicles acting as part of a swarm of unmanned vehicles. The control apparatus can activate such a mechanism and/or be informed whenever a collision avoidance mechanism is activated. The response messages according to the herein described principles can be generated taking information of collision avoidance mechanisms into account. In accordance with a mechanism an unmanned vehicle in a swarm can be arranged to determine whether the unmanned vehicle is being operated in an auto-flight mode or a manual flight mode. The unmanned vehicle can be arranged to set a first volume surrounding the unmanned vehicle in dependence on the determined flight mode. The first volume may have a first size and a first shape. The first size and shape may be set based on the context of the unmanned vehicle. For example, the context may comprise at least one of the location of the unmanned vehicle, the altitude of the unmanned vehicle (if an UAV), the heading/velocity of the unmanned vehicle, and a flight mode of the unmanned vehicle (e.g. whether the unmanned aerial vehicle is operating in an automatic/autopilot mode, or whether the unmanned vehicle is operating in a manual mode). Thus, for example, the size of the first volume may increase with increasing speeds of the unmanned vehicle. The first volume may wholly or only partially surround the unmanned vehicle. The unmanned vehicle can be arranged to monitor the first volume to determine whether or not an object enters the first volume. If an object enters the first volume, the unmanned vehicle is arranged to execute at least one first collision avoidance mechanism for avoiding collision with the object. If no object enters the first volume, the unmanned vehicle continues to monitor the first volume (i.e. 
the unmanned vehicle does not execute that at least one first collision avoidance mechanism). In some swarm systems, an unmanned vehicle is configured to define a three dimensional safety boundary. If an object enters this safety boundary, then collision avoidance mechanisms may be executed to avoid this. Collision avoidance mechanisms may involve at least one of a deviation in translational motion or rotational orientation from the navigational course that was configured substantially immediately prior to detection of the object. As a further example, the unmanned vehicle may be arranged to set two volumes to monitor, one of the volumes being smaller than the other volume (and preferable being wholly enclosed by the other volume). This second (smaller) volume may be treated as a failsafe mechanism, such that at least one collision avoidance mechanism is executed automatically in response to detection of an object within the smaller volume. The failsafe mechanism may operate regardless of whether the unmanned vehicle is operating in a manual mode (e.g. under direct, real-time operator control) or in an autopilot mode. Thus, the unmanned vehicle may be arranged to set a second volume surrounding the unmanned vehicle in dependence on the determined flight mode, the second volume being smaller than the first volume. The second volume may have a second size and a second shape. The second size and the second shape may be determined/selected in dependence on the context of the unmanned vehicle. Subsequent to setting the second volume, the unmanned vehicle may be arranged to monitor second volume to determine whether or not the object enters the second volume. If an object enters the second volume, the unmanned vehicle may be arranged to execute at least one second collision avoidance mechanism for avoiding collision with the object. A second collision avoidance mechanism may be different to the first collision avoidance mechanism. For example, the first collision avoidance mechanism may depend on notifying an operator of the system of the detected object and waiting from an explicit instruction from the operator for how to avoid the detected object. In contrast, the second collision avoidance mechanism may be an automatic action that does not depend on notifying a control station of the detected object. Therefore, the first collision avoidance mechanism may comprise notifying a control station of the unmanned vehicle of the object entering the first volume, and receiving explicit instructions instructing the unmanned vehicle how to avoid colliding with the object. The second collision avoidance mechanism may comprise automatically moving to avoid colliding with the object without any instructions to do so. All of the above-mentioned aspects may be implemented in the same system. It is noted that althoughFIG.1depicts an unmanned aerial vehicle comprising rotors, other types of UAV are possible and the principles are also applicable to systems not needing to comprise any rotors. For example, an unmanned vehicle may be a lighter-than-air gas balloon with thrusters, a miniature aircraft, miniature helicopter or even a full-sized light aircraft. The required data processing apparatus and functions may be provided by means of one or more data processors. The described functions may be provided by separate processors or by an integrated processor. 
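The two-volume monitoring described above can be sketched as follows. Spherical volumes, the specific radii, and the speed- and mode-dependent sizing are assumptions chosen only to illustrate the idea of an outer, context-dependent first volume and a smaller failsafe second volume.

```python
import math

def monitor_volumes(vehicle_pos, object_pos, speed_mps, manual_mode):
    """Sketch of first-volume / second-volume collision monitoring (illustrative values)."""
    # First volume: size depends on the context, e.g. flight mode and speed (assumed formula).
    outer_radius = (15.0 if manual_mode else 10.0) + 2.0 * speed_mps
    # Second, smaller failsafe volume, wholly enclosed by the first (assumed constant).
    inner_radius = 3.0
    distance = math.dist(vehicle_pos, object_pos)

    if distance <= inner_radius:
        # Second collision avoidance mechanism: automatic evasive action,
        # regardless of manual or autopilot mode.
        return "automatic_evasive_manoeuvre"
    if distance <= outer_radius:
        # First collision avoidance mechanism: notify the control station and
        # wait for explicit instructions on how to avoid the object.
        return "notify_control_station_and_wait"
    return "continue_monitoring"

print(monitor_volumes((0.0, 0.0, 30.0), (4.0, 0.0, 30.0), speed_mps=6.0, manual_mode=False))
```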
The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), gate level circuits and processors based on multi core processor architecture, as non-limiting examples. The data processing may be distributed across several data processing modules. A data processor may be provided by means of, for example, at least one chip. Appropriate memory capacity can be provided in the relevant devices. The memory or memories may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. One or more of the steps discussed in relation to the flow and signaling charts may be performed by one or more processors in conjunction with one or more memories. An appropriately adapted computer program code product or products may be used for implementing the embodiments, when loaded or otherwise provided on an appropriate data processing apparatus. The program code product for providing the operation may be stored on, provided and embodied by means of an appropriate carrier medium. An appropriate computer program can be embodied on a computer readable record medium. A possibility is to download the program code product via a data network. In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Embodiments of the inventions may thus be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate. A control apparatus for controlling a device may comprise means for receiving a request for at least one instruction data item from the device and means for sending a response message to the request, the sending comprising selectively including at least one instruction data item in the response message based at least partly on a determination whether the requested at least one data item has been sent before. Means for selectively including, in response to a request for a single instruction data item, either a single instruction data item or multiple instruction data items in the response message may be provided. Means for dynamically adjusting the number of instruction data items in the response message may also be provided. Means for including in the response message at least one instruction data item that was not requested by the request may be provided. Means for responding to the request by including multiple instruction data items in the response message may be provided. The number of instruction data items included in the response message may be greater than the number of data items indicated by the request. 
Means may be provided for including a single instruction data item in the response message in response to determining that the request concerns an instruction data item that has already been sent to the device, and including multiple instruction data items in the response message in response to determining that the request concerns an instruction data item that has not yet been sent to the device. A control apparatus for a device may comprise means for sending to a remote control apparatus a request for at least one instruction data item and means for receiving, from the remote control apparatus, a response message comprising one or more instruction data items. The number of the instruction data items can have been selected based at least partly on determination whether the requested at least one data item has been sent before to the device. The control apparatus can further comprise means for controlling operation of the device based on the received one or more instruction data items in the response message. Means for processing at least one instruction data item on the response message that was not requested by the request can be provided. According to a possibility a control apparatus for a device comprises means for receiving a first response message from a control station includes multiple instruction data items. Means for controlling are configured to substantially immediately use an instruction data item of the received multiple instruction data items for control of the device and save the other instruction data items of the multiple instruction data items in a memory of the device. Means for sending are configured for sending, in response to a failure to obtain one of the saved instruction data items from the memory, a second request for said non-obtained instruction data item. Means for receiving are configured to receive a second response message including at least one instruction data item, wherein the at least one instruction data item is different from the instruction data item requested by the second request. The means for controlling can be configured to continue control of operation of the device based on the received at least one instruction data item. Sending of multiple instruction data items in a single response message when uploading a mission can be used to reduce latency and/or signalling overhead compared to, e.g., communications according to the Mavlink protocol where the recipient device needs to send a request for each individual mission item as the per request delivery mechanisms can cause the transaction of obtaining a mission item to take unnecessarily long time due to latency in communications. It is also possible to selectively adjust the number of data instruction items in a message responding a request for a single item. It is noted that whilst embodiments have been described in relation to certain architectures, similar principles can be applied to other systems. Therefore, although certain embodiments were described above by way of example with reference to certain exemplifying architectures for wireless networks, technologies standards, and protocols, the herein described features may be applied to any other suitable forms of systems, architectures and devices than those illustrated and described in detail in the above examples. It is also noted that different combinations of different embodiments are possible. 
It is also noted herein that while the above describes exemplifying embodiments, there are several variations and modifications which may be made to the disclosed solution without departing from the spirit and scope of the present invention.
54,815
11861399
DETAILED DESCRIPTION Implementations described herein provide for efficient tracking of threads of events, where events may correspond to exchanged emails, messages, meetings, phone calls, etc. The described techniques may allow for context specific threads to be efficiently retrieved and displayed, where the context may depend on a specific participant in the thread. A threaded view of an activity timeline may include a reverse chronological timeline of events, with a requirement that all events (e.g., emails across contributors) belonging to the same thread/conversation be collapsed into a single leading edge representing the most recent interaction involving the specified participant(s). Various filters may be applied to threaded views using the techniques described herein such that context specific views may be displayed to a user. The techniques may include maintaining a thread of events for a plurality of users, where each element of the thread corresponds to an event/activity and includes at least a next field that includes a first subset of the plurality of users and a previous field that includes a second subset of the plurality of users. These techniques may allow new events to be quickly added, with prior elements updated to reflect the addition. Further, the thread elements may allow the thread to be quickly traversed to identify queried information such that the queried information may be displayed to a user. Aspects of the disclosure are initially described in the context of an environment supporting an on-demand database service. Aspects of the disclosure are further described with respect to a computing system, a message timeline, message chains, a thread tracking table, and a user interface. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to user specific event threading. FIG.1illustrates an example of a system100for cloud computing that supports user specific event threading in accordance with various aspects of the present disclosure. The system100includes cloud clients105, contacts110, cloud platform115, and data center120. Cloud platform115may be an example of a public or private cloud network. A cloud client105may access cloud platform115over network connection135. The network may implement transmission control protocol and internet protocol (TCP/IP), such as the Internet, or may implement other network protocols. A cloud client105may be an example of a user device, such as a server (e.g., cloud client105-a), a smartphone (e.g., cloud client105-b), or a laptop (e.g., cloud client105-c). In other examples, a cloud client105may be a desktop computer, a tablet, a sensor, or another computing device or system capable of generating, analyzing, transmitting, or receiving communications. In some examples, a cloud client105may be operated by a user that is part of a business, an enterprise, a non-profit, a startup, or any other organization type. A cloud client105may interact with multiple contacts110. The interactions130may include communications, opportunities, purchases, sales, or any other interaction between a cloud client105and a contact110. Data may be associated with the interactions130. A cloud client105may access cloud platform115to store, manage, and process the data associated with the interactions130. In some cases, the cloud client105may have an associated security or permission level. 
A cloud client105may have access to certain applications, data, and database information within cloud platform115based on the associated security or permission level, and may not have access to others. Contacts110may interact with the cloud client105in person or via phone, email, web, text messages, mail, or any other appropriate form of interaction (e.g., interactions130-a,130-b,130-c, and130-d). The interaction130may be a business-to-business (B2B) interaction or a business-to-consumer (B2C) interaction. A contact110may also be referred to as a customer, a potential customer, a lead, a client, or some other suitable terminology. In some cases, the contact110may be an example of a user device, such as a server (e.g., contact110-a), a laptop (e.g., contact110-b), a smartphone (e.g., contact110-c), or a sensor (e.g., contact110-d). In other cases, the contact110may be another computing system. In some cases, the contact110may be operated by a user or group of users. The user or group of users may be associated with a business, a manufacturer, or any other appropriate organization. Cloud platform115may offer an on-demand database service to the cloud client105. In some cases, cloud platform115may be an example of a multi-tenant database system. In this case, cloud platform115may serve multiple cloud clients105with a single instance of software. However, other types of systems may be implemented, including—but not limited to—client-server systems, mobile device systems, and mobile network systems. In some cases, cloud platform115may support CRM solutions. This may include support for sales, service, marketing, community, analytics, applications, and the Internet of Things. Cloud platform115may receive data associated with contact interactions130from the cloud client105over network connection135, and may store and analyze the data. In some cases, cloud platform115may receive data directly from an interaction130between a contact110and the cloud client105. In some cases, the cloud client105may develop applications to run on cloud platform115. Cloud platform115may be implemented using remote servers. In some cases, the remote servers may be located at one or more data centers120. Data center120may include multiple servers. The multiple servers may be used for data storage, management, and processing. Data center120may receive data from cloud platform115via connection140, or directly from the cloud client105or an interaction130between a contact110and the cloud client105. Data center120may utilize multiple redundancies for security purposes. In some cases, the data stored at data center120may be backed up by copies of the data at a different data center (not pictured). Subsystem125may include cloud clients105, cloud platform115, and data center120. In some cases, data processing may occur at any of the components of subsystem125, or at a combination of these components. In some cases, servers may perform the data processing. The servers may be a cloud client105or located at data center120. In some cases, the cloud platform115includes a communication manager with an event threading component that organizes events (e.g., emails) based on participants. The event threading component may support efficient querying and display of threaded events in accordance with one or more selected (e.g., queried) participants. 
The component may monitor communications or events between various users (e.g., users within a cloud client105) and external users (e.g., contacts110) and update the threaded events based on the communications or events. For example, the component may maintain a thread corresponding to events including an external user (e.g., a contact110). Each time an event occurs (e.g., an email is sent), an element is added to the event thread, and the element includes various fields for maintaining or retrieving a threaded view based on one of the participants. As such, as participants are included in or excluded from various events corresponding to the thread, the system automatically maintains the thread state for each participant separately, such that a participant based thread may not include events for which the participant was not a contributor (e.g., was not included in the message). Some systems may provide threaded views of communications, but these views may correspond only to messages to which the viewer was a participant. That is, the view does not maintain any context for all participants of various events within the thread. Accordingly, the thread view may be static for each viewer and may not be able to dynamically maintain context. Further, querying on these threaded views may not identify threads for which a queried user is in a prior message (e.g., not a participant in the latest message) such that the query may return inaccurate results. Further, as users are added and removed from threads (based on being included or not included in subsequent emails, for example), maintaining all the messages may incur significant computing overhead as duplicate emails (e.g., for all users) may be stored to maintain a thread state. To solve these problems, the event threading component of the cloud platform115may maintain threads of messages according to all participants in the thread such that each thread may be queried based on one or more of the participants. The results of a query may include those events for which the queried participants were contributors. To support these techniques, the component may maintain a doubly linked list where each element of the linked list corresponds to an event (e.g., an exchanged email, a meeting, a phone call, etc.) of the thread. Each element includes various fields that may be used to maintain the context for various participants. For example, each element may include a "has next" or next field that lists users that are participants to later/subsequent activities in the thread. Each element may also include a "has previous" or previous field that lists users that are participants to prior (chronologically) activities within a thread. Each time a new activity is added to a thread, these fields may be updated using an efficient back-filling technique. Further, these fields support efficient participant-contextual thread retrieval. That is, if the system receives a query for a thread based on a particular user of the thread, the system may efficiently retrieve a "leading edge" event (e.g., the most recent event in a series of events belonging to a single thread/conversation) using the included fields (e.g., previous and/or next fields) for the elements. These techniques may also support Boolean querying such that threads based on multiple participants may be quickly retrieved. 
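As a hedged illustration of the element structure and back-filling just described, the following sketch uses hypothetical Python names (ThreadElement, Thread, add_event) and simple in-memory sets; the disclosure does not prescribe this representation.

```python
# Hedged sketch of the per-event thread elements and the back-filling step
# described above. All names and the in-memory representation are assumptions
# for illustration, not the claimed implementation.

from dataclasses import dataclass, field


@dataclass
class ThreadElement:
    event_key: str                                   # e.g., "EK0"
    participants: set                                # users that contributed to this event
    has_next: set = field(default_factory=set)       # users with a later event in the thread
    has_previous: set = field(default_factory=set)   # users with an earlier event in the thread


@dataclass
class Thread:
    thread_id: str
    elements: list = field(default_factory=list)     # kept in chronological order

    def add_event(self, event_key: str, participants: set) -> None:
        """Append a new event to the thread and back-fill the next/previous fields."""
        new_element = ThreadElement(event_key=event_key, participants=set(participants))
        seen_before = set()
        for element in self.elements:
            # Back-fill: users of an earlier event that also contribute to the new
            # event now have a subsequent event, so they join that element's has_next.
            element.has_next |= element.participants & new_element.participants
            seen_before |= element.participants
        # Users of the new event that appeared in any earlier event have a previous event.
        new_element.has_previous = new_element.participants & seen_before
        self.elements.append(new_element)
```

Under these assumptions, adding a new event intersects each earlier element's participants with the new event's participants to back-fill the next field, and records in the new element's previous field which of its participants appeared earlier, mirroring the back-filling behaviour described above.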
The elements may also include an insight field that lists users that were contributors to events (e.g., an event corresponding to the current element or an event corresponding to prior elements) in the thread that have "insights." An insight may be something detected within a message that may provide further context to the thread. For example, an insight may be a mention of times for a meeting, an identification of an important contributor (e.g., an executive in an organization) to a message, etc. Thus, insights may be identified using various semantic analysis and machine learning techniques. An activity/event may correspond to any event that has a timestamp and one or more participants such that several events can be organized in a chronological fashion with or without additional filters. Events may include emails, meetings, voice calls, chats, etc. A contributor may be a user in an organization who has connected his/her activity data sources with the systems as described herein. For example, a contributor may authorize the described system to connect to their Gmail or Exchange email and calendar services. A participant/involved contact (e.g., contact110) may be an external person who is involved in the activity. This person could be a CRM contact, lead or person type. An insight may be an additional piece of derived information that can provide quick enrichment/inference of the activity data with appropriate quick actions. Examples of insights include an executive being involved in the event, an intent to schedule a meeting or discuss pricing, an angry customer, etc. Insights may be generally derived by executing machine learning models, semantic analysis, or other natural language processing (NLP) on activities. An activity timeline/thread may correspond to a stream of activities in a range of time, in the context of one or more participants where activities are sourced from the entire organization (aka multiple contributors). Additional filters can be applied to this stream for better context. For example, the system may surface activities that have one or more insights or an insight of a specific type. It should be appreciated by a person skilled in the art that one or more aspects of the disclosure may be implemented in a system100to additionally or alternatively solve other problems than those described above. Further, aspects of the disclosure may provide technical improvements to "conventional" systems or processes as described herein. However, the description and appended drawings only include example technical improvements resulting from implementing aspects of the disclosure, and accordingly do not represent all of the technical improvements provided within the scope of the claims. In one example utilization of the system supported by the cloud platform115, a user of the threading component may be a sales manager for an organization. The sales manager may want to view various threads corresponding to a particular sales lead/customer. The sales manager may indicate a thread corresponding to the sales lead and one or more participants to the thread (e.g., sales associates that communicated with the sales lead). The system may query the thread based on the one or more participants and display the thread based on the queried participant. Thus, the sales manager may be able to view all events for which the participant was a contributor. The system may allow the sales manager to quickly switch between various participants to the thread. 
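For illustration, the participant-scoped lookup behind such a query might, in a much-simplified form, look like the sketch below; the dictionary layout (participants and has_next sets) and the function name leading_edge are assumptions and not taken from the disclosure.

```python
# Hedged sketch of a participant-scoped leading-edge lookup. The element layout
# and the function name are assumptions for illustration only.

def leading_edge(elements, target_users):
    """Scan the chronologically ordered thread from newest to oldest and return the
    first element that involves at least one of the target users while none of the
    target users appear in its has_next field."""
    for element in reversed(elements):
        if element["participants"] & target_users and not (element["has_next"] & target_users):
            return element
    return None


if __name__ == "__main__":
    thread = [
        {"event": "EK0", "participants": {"C0", "C1", "C2", "C3"}, "has_next": {"C0", "C1", "C2"}},
        {"event": "EK1", "participants": {"C0", "C1", "C2"}, "has_next": set()},
    ]
    print(leading_edge(thread, {"C3"})["event"])  # EK0: C3 has no later event in the thread
    print(leading_edge(thread, {"C0"})["event"])  # EK1: the most recent event involving C0
```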
The sales manager may also view the thread in the context of the sales lead, such as to view all events in which the sales lead participated. FIG.2illustrates an example of a system200that supports user specific event threading in accordance with aspects of the present disclosure. The system200includes user devices205and a server215. The user devices may be examples of devices of cloud clients105and/or contacts110and may correspond to participants or users to a thread of activities. The server215may be an example of a communication server (e.g., an email server) or may have access to communications or events managed by a separate server (e.g., an email server). The server monitors various interactions (e.g., interactions130ofFIG.1) (events) between various users. The server215includes a threading system270according to aspects described herein. The interactions/events may be examples of exchanged emails, meetings, phone calls, text/SMS messages, etc. The threads may be implemented or maintained as doubly linked lists. The threading system270may maintain various threads that correspond to sequences of activities/events for various participants to the thread. The thread may maintain a thread identifier (thread ID), which may be based on a header of one or more of the messages. The threading system270maintains a thread that corresponds to a plurality of participants (e.g., C0, C1, C2, C3, etc.), which may correspond to user devices205. At time T1, the thread includes events 0 and 1, with corresponding event keys (EK) as fields in elements260of the threads corresponding to the events. The event keys may be used to identify activities across contributors and may also be used to deduplicate events of a thread. The elements260may include various fields such as a next field (or has next field (HN))230, a previous field (or has previous field (HP))235, and an insight field (or has insight field (HI))240. The HN field230indicates whether a participant to the event corresponding to the current element is a participant in a next event (e.g., any of the subsequent elements) in the thread. The HP field235indicates whether a participant to the event corresponding to the current element is a participant in a previous event (e.g., any of the previous elements) in the thread. Thus, as EK0corresponds to the first element260-ain the thread, the HP field235-ais empty. After the event corresponding to EK1occurs, the system backfills HN field230-ain EK0with [C0, C1, C2] because those users participate in EK1. At time T2, the server215receives an indication of event EK2via an endpoint such as an API that the user authorized the server215to access. The server identifies the thread based on a header in the event indication and adds element260-ccorresponding to the event. The threading system270backfills HN field230-bof EK1(indicated in bold) to include C0, C1, and C2, because those users participated in (contributed to) event EK2. Further, the HP field235-cof element260-cis populated with users that contributed to the event EK2and also contributed to previous events (e.g., EK0or EK1) in the thread. Note that C3is included because C3participated in EK0(but not EK1). Further, C3is added to HN field230-aof EK0to indicate that C3contributes to one of the subsequent events (e.g., EK2). The HI field240indicates either that the current event has an identified insight or that one of the prior events has an identified insight. 
The listing of users in the insight field includes those users that were participants to a message in the thread that has an insight. In the displayed thread ofFIG.2, EK0has an identified insight with users C0, C1, C2, C3contributing to that event. Thus, the HI field240-ain element260-alists those users. Event EK1includes C0, C1, C2as contributors, so HI field240-blists those contributors (without C3because C3was not a contributor to EK1). However, although C3is a contributor to EK2(and EK2does not include an insight), C3is listed because C3was a contributor to EK0, which has an insight. Using these techniques, the threading system270may maintain a context specific thread for each participant to the thread. Thus, when a thread is queried based on one or more participants, the context specific thread may be retrieved that includes those events in which the queried users were contributors. In some cases, the query returns a first element (e.g., leading edge event) for the queried participants. This may be achieved by identifying, starting with a latest element (e.g., EK2) in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the plurality of users in the HN field230-b. Additionally, these techniques may allow various data (e.g., activities) to be easily removed. Further, insights specific to participants may be maintained. The thread elements may also include an additional field referred to as a contributor field or owner field (e.g., PriorEmailContributors or PriorEventContributors) that lists all current and prior participants to a thread (even if not a participant to the latest message). This may allow a participant, as a viewer, to retrieve threads involving the viewer. Accordingly, given the possibility of varying participants across events comprising a thread, to support querying in the context of one or more participants and not show duplicates when paging, the described system200may: 1) Maintain participant (context) specific event chains for every thread/conversation to: a) Track the leading (most recent) edge of the thread involving that participant. b) Track if the leading (most recent) edge of the thread involving that participant has an ancestor (e.g., a prior event). c) Track if the leading (most recent) edge of the thread involving that participant or any of its ancestors has insight metadata. 2) Maintain a set of current and prior contributors for every event in the thread to support the viewing user filter (referred to as PriorEmailContributors or PriorEventContributors). 3) Compute a unique identifier aka threadID for every event such that events belonging to a single thread/conversation correspond to the threadID. The system200may achieve solution 1 by materializing the thread into per participant chains of events tracked as a doubly linked list along with insight information. The system200may maintain three additional pieces of information on every indexed event: 1) A HasNext pointer (e.g., HN field230) that tracks participants that have a newer event in the chain. This value can change as new events emerge as well as when items in the chain get deleted when enforcing privacy rules (e.g., General Data Protection Regulation (GDPR)). 2) A HasPrior pointer (e.g., HP field235) that tracks participants that have an older (ancestor) event in the chain. This value may change as new events emerge and can also change when items in the chain get deleted when enforcing privacy rules (e.g., GDPR). 
3) A HasInsight pointer (e.g., HI field240) that tracks participants that have at least one insight on any ancestor including itself. When fetching a threaded activity timeline for one or more participants in a given time range, the system200may perform the following: 1) Fetch all events in the specified time range that involve at least one of the participants and for which none of the participants exist in the HasNext collection. For each thread, the system200may find one event that is referred to as the leading edge. 2) For each leading edge, use its HasPrior pointer to indicate whether it is a thread (e.g., has ancestors for the participants in question). 3) For each leading edge, the system200may use the HasInsight pointer to indicate whether the leading edge or any of its ancestors has one or more insights for the participants in question. In the case of multiple participants, the system200may return true if at least one participant is in the list. To facilitate computation of these values, the system200may also create and maintain a new store called ThreadingTracker (e.g., threading tracking table500ofFIG.5) that tracks all events belonging to a thread (e.g., same threadID) along with their participants and whether each event has any additional insight metadata, ordered by time. FIG.3illustrates an example of a message timeline300that supports user specific event threading in accordance with aspects of the present disclosure. The message timeline300illustrates various events and how the events affect a thread of events including elements having the fields as described with respect toFIG.2. The timeline300illustrates an email chain initiated by user U1in organization O1with threadID TH1at time T0. The first email involves participants C0, C1, C2, C3. Over times T1through T4, a series of replies with different participants ensues as shown inFIG.3.FIG.3also shows how the HasNext, HasPrior and HasInsight collections transition over time. The timeline300includes the following events: 1) At time T0: Timeline300includes one email (e.g., event). Fetching the timeline for C0, C1, C2, or C3may result in one email EK0not shown as a thread (as there are no ancestors) with at least one insight. 2) At time T1: Email EK0is part of a threaded conversation for participants C0, C1, C2but not for C3. Fetching the timeline for C0, C1, C2may return email EK1shown as the leading edge of a thread with at least one insight. Fetching the timeline for C3may result in one email not shown as a thread (as there are no ancestors) with at least one insight. Fetching the timeline for both C2and C3may return email EK1as the leading edge of the thread with at least one insight. In this case, the participant check on the involved contact collection may be an OR condition to check for the existence of either participant, while the participant check on HasNext is an AND condition to ensure that neither participant exists in that collection. 3) At time T2: C3and C4are added to email EK2along with C0, C1, C2. As a result, email EK0is part of a threaded conversation for participants C0, C1, C2and C3while email EK1is now part of a threaded conversation for C0, C1, C2. Fetching the timeline for C0, C1, C2, or C3may return email EK2shown as the leading edge of a thread with at least one insight. Fetching the timeline for C4may result in just one email EK2not shown as a thread (as there are no ancestors for C4) with no insights. 4) At time T3: A private response is sent involving C2. As a result, email EK2may be part of a threaded conversation for participant C2. 
Fetching the timeline for C2may return email EK3shown as the leading edge of a thread with at least one insight. Fetching the timeline for C0, C1, or C3may still return email EK2shown as the leading edge of a thread with at least one insight. Fetching the timeline for C4may still result in one email EK2not shown as a thread (as there are no ancestors for C4) with no insights. 5) At time T4: C0, C1, C4are added back into a reply email EK4that does not result in any insight. As a result, email EK2is now part of a threaded conversation for participant C4. Fetching the timeline for either C0or C1may return email EK4shown as the leading edge of a thread with at least one insight. Fetching the timeline for C3may still return email EK2shown as the leading edge of a thread with at least one insight. Fetching the timeline for C4may return email EK4shown as the leading edge of a thread with no insights. To visualize these changes from the perspective of each participant from time T0to time T4, the system may also represent the changes as shown inFIG.4. FIG.4illustrates an example of message chains400that supports user specific event threading in accordance with aspects of the present disclosure. The message chains400illustrate various threads of messages for each participant in accordance with the techniques described herein. A query for the leading event for user C0(after time T4) may return EK4, HI:false, and a query for the leading event for user C0after T3and before T4may return EK2, HI:false, etc. These results may be based on identifying the first message in the thread that does not include C0in the HN field. Thus, leading edges may vary depending on which specific participants are used in a query and at which point in time the query is received (or identified). FIG.5illustrates an example of a thread tracking table500that supports user specific event threading in accordance with aspects of the present disclosure. The system may maintain the thread tracking table500for each thread (based on threadID), which may support providing information in response to queries. The thread tracking table500includes an organization identifier (OrgID), threadID, time stamp, event type, hash, participant set, and hasInsight field for each event in a thread (e.g., each email). A thread tracking table500may be maintained for each thread, and may be used to provide an overview of the characteristics of the thread. In some cases, the thread tracking table500shows, for a given threadID, the set of participants, the insights, etc. FIG.6illustrates an example of a user interface (UI)600that supports user specific event threading in accordance with aspects of the present disclosure. The UI600includes a leading edge event605corresponding to a participant (e.g., "Simon Fraser"), which may be retrieved according to aspects described herein. Simon Fraser may be an opportunity owner610, which may be an example of a contact (e.g., contact110ofFIG.1) corresponding to a sales lead. The view displays the leading edge event605based on the opportunity owner610, which is a participant to the thread. A user may select the UI control615to expand the thread, as illustrated in the UI600. The selection may trigger the system as described herein to identify the chain of messages/events prior to the leading edge event605. 
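A hedged sketch of one way this expansion might be performed is given below, assuming elements represented as dictionaries with participants, has_next, and has_previous sets and a hypothetical expand_thread helper; the paragraph that follows restates the traversal in the terms of the disclosure.

```python
# Hedged sketch of the thread-expansion traversal. The element layout and the helper
# name are assumptions for illustration, not the claimed implementation.

def expand_thread(elements, leading_index, target_user):
    """Walk backwards from the leading edge element, collecting earlier elements whose
    next field includes the target user, and stop once the previous field of the most
    recently collected element no longer includes the target user."""
    chain = [elements[leading_index]]
    current = elements[leading_index]
    for element in reversed(elements[:leading_index]):
        if target_user not in current["has_previous"]:
            break  # no earlier event in the thread involves the target user
        if target_user in element["has_next"]:
            chain.append(element)
            current = element
    return list(reversed(chain))  # chronological order, oldest first
```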
This may be achieved by identifying each element (in a thread) prior to the leading edge event element that includes a target user identifier (e.g., the opportunity owner610) in the next field of the element until an element does not include the target user in the previous field. That is, the system traverses the thread while the elements include the target user in the next field and stops when the previous field does not include the target user. FIG.7shows a block diagram700of an apparatus705that supports user specific event threading in accordance with aspects of the present disclosure. The apparatus705may include an input module710, an event threading manager715, and an output module755. The apparatus705may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). In some cases, the apparatus705may be an example of a user terminal, a database server, or a system containing multiple computing devices. The input module710may manage input signals for the apparatus705. For example, the input module710may identify input signals based on an interaction with a modem, a keyboard, a mouse, a touchscreen, or a similar device. These input signals may be associated with user input or processing at other components or devices. In some cases, the input module710may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system to handle input signals. The input module710may send aspects of these input signals to other components of the apparatus705for processing. For example, the input module710may transmit input signals to the event threading manager715to support user specific event threading. In some cases, the input module710may be a component of an input/output (I/O) controller915as described with reference toFIG.9. The event threading manager715may include a thread querying component720, a thread traversing component725, a leading edge identifier730, a leading edge indicator735, a thread identifier740, a new event component745, and a thread updating component750. The event threading manager715may be an example of aspects of the event threading manager805or910described with reference toFIGS.8and9. The event threading manager715and/or at least some of its various sub-components may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions of the event threading manager715and/or at least some of its various sub-components may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The event threading manager715and/or at least some of its various sub-components may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical devices. In some examples, the event threading manager715and/or at least some of its various sub-components may be a separate and distinct component in accordance with various aspects of the present disclosure. 
In other examples, the event threading manager715and/or at least some of its various sub-components may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The thread querying component720may receive a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users. The thread traversing component725may identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field. The leading edge identifier730may identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users. The leading edge indicator735may transmit an indication of the leading edge event of the thread of events based on the query. The thread identifier740may identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field. The new event component745may add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. The thread updating component750may update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. The output module755may manage output signals for the apparatus705. For example, the output module755may receive signals from other components of the apparatus705, such as the event threading manager715, and may transmit these signals to other components or devices. In some specific examples, the output module755may transmit output signals for display in a user interface, for storage in a database or data store, for further processing at a server or server cluster, or for any other processes at any number of devices or systems. In some cases, the output module755may be a component of an I/O controller915as described with reference toFIG.9. FIG.8shows a block diagram800of an event threading manager805that supports user specific event threading in accordance with aspects of the present disclosure. The event threading manager805may be an example of aspects of an event threading manager715or an event threading manager910described herein. 
The event threading manager805may include a thread querying component810, a thread traversing component815, a leading edge identifier820, a leading edge indicator825, a thread identifier830, a new event component835, a thread updating component840, a thread expander845, a thread expansion component850, a message interface855, a thread table component860, a contributor component865, and an insight component870. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The thread querying component810may receive a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users. In some examples, the thread querying component810may receive a query for a leading edge event of the thread of events for a target user of the set of user identifiers, where the query indicates the target user. The thread traversing component815may identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field. In some examples, the thread traversing component815may identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the respective next field. In some examples, the thread traversing component815may identify each element prior to the first element corresponding to the leading edge event that includes the target user in the next field until an element does not include the target user in the previous field. The leading edge identifier820may identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users. In some examples, the leading edge identifier820may identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the respective next field. The leading edge indicator825may transmit an indication of the leading edge event of the thread of events based on the query. In some examples, the leading edge indicator825may transmit an indication of the leading edge event of the thread of events based on the query. The thread identifier830may identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field. In some examples, the thread identifier830may identify the thread of events using the thread identifier. In some cases, each event corresponds to a meeting, an email, a message, or a phone call. The new event component835may add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. 
In some examples, the new event component835may receive an indication of a new event including at least a subset of the set of user identifiers, where the new event is associated with the thread of events in accordance with a thread identifier. In some examples, the new event component835may add, to the thread of events, a new element corresponding to the new event, where the new element includes the respective previous field including one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. The thread updating component840may update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. In some examples, the thread updating component840may update, based on adding the new element to the new event, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to the current element in the thread of events. The thread expander845may receive an indication to expand the leading edge event for the target user to a thread for the target user. The thread expansion component850may transmit an indication of each element as the thread for the target user. The message interface855may detect transmission of a new message between one or more of the set of user identifiers. The thread table component860may store a table in association with the thread of events, where the table includes a summary of the thread of events. The contributor component865may include, in each element of the thread of events, a respective contributor field that includes one or more of the plurality of user identifiers that includes user identifiers of the plurality of user identifiers that either contributed to an event of a current element or an event prior to the current element. In some cases, each element of the thread of events further includes a respective contributor field that includes one or more of the set of user identifiers that includes user identifiers of the set of user identifiers that either contributed to an event of a current element or an event prior to the current element. The insight component870may include, in each element of the thread of events, an insight field indicating one or more of the plurality of user identifiers based on which users of the plurality of user identifiers were a contributor to an event corresponding to a current element with an insight or any prior element corresponding to an event with an insight. In some cases, each element of the thread of events further includes an insight field indicating one or more of the set of user identifiers based on which users of the set of user identifiers were a contributor to an event corresponding to a current element with an insight or any prior element corresponding to an event with an insight. FIG.9shows a diagram of a system900including a device905that supports user specific event threading in accordance with aspects of the present disclosure. The device905may be an example of or include the components of a database server or an apparatus705as described herein. 
The device905may include components for bi-directional data communications including components for transmitting and receiving communications, including an event threading manager910, an I/O controller915, a database controller920, memory925, a processor930, and a database935. These components may be in electronic communication via one or more buses (e.g., bus940). The event threading manager910may be an example of an event threading manager715or805as described herein. For example, the event threading manager910may perform any of the methods or processes described above with reference toFIGS.7and8. In some cases, the event threading manager910may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. The I/O controller915may manage input signals945and output signals950for the device905. The I/O controller915may also manage peripherals not integrated into the device905. In some cases, the I/O controller915may represent a physical connection or port to an external peripheral. In some cases, the I/O controller915may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, the I/O controller915may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller915may be implemented as part of a processor. In some cases, a user may interact with the device905via the I/O controller915or via hardware components controlled by the I/O controller915. The database controller920may manage data storage and processing in a database935. In some cases, a user may interact with the database controller920. In other cases, the database controller920may operate automatically without user interaction. The database935may be an example of a single database, a distributed database, multiple distributed databases, a data store, a data lake, or an emergency backup database. Memory925may include random-access memory (RAM) and read-only memory (ROM). The memory925may store computer-readable, computer-executable software including instructions that, when executed, cause the processor to perform various functions described herein. In some cases, the memory925may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor930may include an intelligent hardware device, (e.g., a general-purpose processor, a DSP, a central processing unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor930may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor930. The processor930may be configured to execute computer-readable instructions stored in a memory925to perform various functions (e.g., functions or tasks supporting user specific event threading). FIG.10shows a flowchart illustrating a method1000that supports user specific event threading in accordance with aspects of the present disclosure. The operations of method1000may be implemented by a database server or its components as described herein. For example, the operations of method1000may be performed by an event threading manager as described with reference toFIGS.7through9. 
In some examples, a database server may execute a set of instructions to control the functional elements of the database server to perform the functions described below. Additionally or alternatively, a database server may perform aspects of the functions described below using special-purpose hardware. At1005, the database server may receive a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users. The operations of1005may be performed according to the methods described herein. In some examples, aspects of the operations of1005may be performed by a thread querying component as described with reference toFIGS.7through9. At1010, the database server may identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field. The operations of1010may be performed according to the methods described herein. In some examples, aspects of the operations of1010may be performed by a thread traversing component as described with reference toFIGS.7through9. At1015, the database server may identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users. The operations of1015may be performed according to the methods described herein. In some examples, aspects of the operations of1015may be performed by a leading edge identifier as described with reference toFIGS.7through9. At1020, the database server may transmit an indication of the leading edge event of the thread of events based on the query. The operations of1020may be performed according to the methods described herein. In some examples, aspects of the operations of1020may be performed by a leading edge indicator as described with reference toFIGS.7through9. FIG.11shows a flowchart illustrating a method1100that supports user specific event threading in accordance with aspects of the present disclosure. The operations of method1100may be implemented by a database server or its components as described herein. For example, the operations of method1100may be performed by an event threading manager as described with reference toFIGS.7through9. In some examples, a database server may execute a set of instructions to control the functional elements of the database server to perform the functions described below. Additionally or alternatively, a database server may perform aspects of the functions described below using special-purpose hardware. At1105, the database server may identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field. The operations of1105may be performed according to the methods described herein. In some examples, aspects of the operations of1105may be performed by a thread identifier as described with reference toFIGS.7through9. 
At1110, the database server may add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. The operations of1110may be performed according to the methods described herein. In some examples, aspects of the operations of1110may be performed by a new event component as described with reference toFIGS.7through9. At1115, the database server may update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. The operations of1115may be performed according to the methods described herein. In some examples, aspects of the operations of1115may be performed by a thread updating component as described with reference toFIGS.7through9. FIG.12shows a flowchart illustrating a method1200that supports user specific event threading in accordance with aspects of the present disclosure. The operations of method1200may be implemented by a database server or its components as described herein. For example, the operations of method1200may be performed by an event threading manager as described with reference toFIGS.7through9. In some examples, a database server may execute a set of instructions to control the functional elements of the database server to perform the functions described below. Additionally or alternatively, a database server may perform aspects of the functions described below using special-purpose hardware. At1205, the database server may identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field. The operations of1205may be performed according to the methods described herein. In some examples, aspects of the operations of1205may be performed by a thread identifier as described with reference toFIGS.7through9. At1210, the database server may add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. The operations of1210may be performed according to the methods described herein. In some examples, aspects of the operations of1210may be performed by a new event component as described with reference toFIGS.7through9. At1215, the database server may update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. The operations of1215may be performed according to the methods described herein. 
In some examples, aspects of the operations of1215may be performed by a thread updating component as described with reference toFIGS.7through9. At1220, the database server may receive a query for a leading edge event of the thread of events for a target user of the set of user identifiers, where the query indicates the target user. The operations of1220may be performed according to the methods described herein. In some examples, aspects of the operations of1220may be performed by a thread querying component as described with reference toFIGS.7through9. At1225, the database server may identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the respective next field. The operations of1225may be performed according to the methods described herein. In some examples, aspects of the operations of1225may be performed by a thread traversing component as described with reference toFIGS.7through9. At1230, the database server may identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the respective next field. The operations of1230may be performed according to the methods described herein. In some examples, aspects of the operations of1230may be performed by a leading edge identifier as described with reference toFIGS.7through9. At1235, the database server may transmit an indication of the leading edge event of the thread of events based on the query. The operations of1235may be performed according to the methods described herein. In some examples, aspects of the operations of1235may be performed by a leading edge indicator as described with reference toFIGS.7through9. At1240, the database server may receive an indication to expand the leading edge event for the target user to a thread for the target user. The operations of1240may be performed according to the methods described herein. In some examples, aspects of the operations of1240may be performed by a thread expander as described with reference toFIGS.7through9. At1245, the database server may identify each element prior to the first element corresponding to the leading edge event that includes the target user in the next field until an element does not include the target user in the previous field. The operations of1245may be performed according to the methods described herein. In some examples, aspects of the operations of1245may be performed by a thread traversing component as described with reference toFIGS.7through9. At1250, the database server may transmit an indication of each element as the thread for the target user. The operations of1250may be performed according to the methods described herein. In some examples, aspects of the operations of1250may be performed by a thread expansion component as described with reference toFIGS.7through9. FIG.13shows a flowchart illustrating a method1300that supports user specific event threading in accordance with aspects of the present disclosure. The operations of method1300may be implemented by a database server or its components as described herein. For example, the operations of method1300may be performed by an event threading manager as described with reference toFIGS.7through9. In some examples, a database server may execute a set of instructions to control the functional elements of the database server to perform the functions described below. 
Additionally or alternatively, a database server may perform aspects of the functions described below using special-purpose hardware. At1305, the database server may identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field. The operations of1305may be performed according to the methods described herein. In some examples, aspects of the operations of1305may be performed by a thread identifier as described with reference toFIGS.7through9. At1310, the database server may add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. The operations of1310may be performed according to the methods described herein. In some examples, aspects of the operations of1310may be performed by a new event component as described with reference toFIGS.7through9. At1315, the database server may update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. The operations of1315may be performed according to the methods described herein. In some examples, aspects of the operations of1315may be performed by a thread updating component as described with reference toFIGS.7through9. At1320, the database server may receive an indication of a new event including at least a subset of the set of user identifiers, where the new event is associated with the thread of events in accordance with a thread identifier. The operations of1320may be performed according to the methods described herein. In some examples, aspects of the operations of1320may be performed by a new event component as described with reference toFIGS.7through9. At1325, the database server may identify the thread of events using the thread identifier. The operations of1325may be performed according to the methods described herein. In some examples, aspects of the operations of1325may be performed by a thread identifier as described with reference toFIGS.7through9. At1330, the database server may add, to the thread of events, a new element corresponding to the new event, where the new element includes the respective previous field including one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events. The operations of1330may be performed according to the methods described herein. In some examples, aspects of the operations of1330may be performed by a new event component as described with reference toFIGS.7through9. At1335, the database server may update, based on adding the new element to the new event, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to the current element in the thread of events. The operations of1335may be performed according to the methods described herein. 
In some examples, aspects of the operations of1335may be performed by a thread updating component as described with reference toFIGS.7through9. A method of data processing is described. The method may include receiving a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users, identifying, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field, identifying, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users, and transmitting an indication of the leading edge event of the thread of events based on the query. An apparatus for data processing is described. The apparatus may include a processor, memory coupled with the processor, and instructions stored in the memory. The instructions may be executable by the processor to cause the apparatus to receive a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users, identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field, identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users, and transmit an indication of the leading edge event of the thread of events based on the query. Another apparatus for data processing is described. The apparatus may include means for receiving a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users, identifying, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field, identifying, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users, and transmitting an indication of the leading edge event of the thread of events based on the query. A non-transitory computer-readable medium storing code for data processing is described. 
The code may include instructions executable by a processor to receive a query for a leading edge event of a thread of events for a target user, where the thread of events is associated with a set of users and is organized such that each element in the thread of events corresponds to a respective event and includes at least a next field that includes a first subset of the set of users and a previous field that includes a second subset of the set of users, identify, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the first subset of the set of users in the next field, identify, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the first subset of the set of users, and transmit an indication of the leading edge event of the thread of events based on the query. A method of data processing is described. The method may include identifying a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field, adding, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events, and updating, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. An apparatus for data processing is described. The apparatus may include a processor, memory coupled with the processor, and instructions stored in the memory. The instructions may be executable by the processor to cause the apparatus to identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field, add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events, and update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. Another apparatus for data processing is described. 
The apparatus may include means for identifying a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field, adding, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events, and updating, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. A non-transitory computer-readable medium storing code for data processing is described. The code may include instructions executable by a processor to identify a first element of a thread of events based on a first event associated with a set of user identifiers, where the first element includes a respective next field, add, for each subsequent event associated with at least a subset of the set of user identifiers, a new element in the thread of events, each new element including the respective next field and a respective previous field, where the respective previous field of the new element includes one or more of the set of user identifiers that are associated with at least one prior event to the new element in the thread of events, and update, based on adding the new element, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that are associated with at least one subsequent event to a current element in the thread of events. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving a query for a leading edge event of the thread of events for a target user of the set of user identifiers, where the query indicates the target user, identifying, starting with a latest element in the thread of events arranged in a chronological order, a first element that does not include the target user in the respective next field, identifying, as the leading edge event for the target user, an event corresponding to the first element that does not include the target user in the respective next field, and transmitting an indication of the leading edge event of the thread of events based on the query. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving an indication to expand the leading edge event for the target user to a thread for the target user, identifying each element prior to the first element corresponding to the leading edge event that includes the target user in the next field until an element does not include the target user in the previous field, and transmitting an indication of each element as the thread for the target user. 
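As an illustrative, non-limiting sketch of the traversal just summarized, the following Python fragment models thread elements with next and previous user fields, locates a target user's leading edge event, and expands it into a per-user thread. The class and function names, and the exact membership policy chosen for the next and previous fields, are readability assumptions made for this sketch rather than requirements of this disclosure.

    from dataclasses import dataclass, field
    from typing import List, Optional, Set

    @dataclass
    class ThreadElement:
        event_id: str                                      # e.g., a meeting, email, message, or call
        users: Set[str]                                     # user identifiers associated with this event
        next_users: Set[str] = field(default_factory=set)  # users associated with at least one later event
        prev_users: Set[str] = field(default_factory=set)  # users associated with at least one earlier event

    class EventThread:
        def __init__(self) -> None:
            self.elements: List[ThreadElement] = []         # maintained in chronological order

        def add_event(self, event_id: str, users: Set[str]) -> None:
            # Previous field of the new element: users seen on any prior event in the thread.
            prev = set().union(*(e.users for e in self.elements)) if self.elements else set()
            self.elements.append(ThreadElement(event_id, set(users), prev_users=prev))
            # Update the next field of every earlier element to reflect the newly added event.
            for earlier in self.elements[:-1]:
                earlier.next_users |= set(users)

        def leading_edge(self, target_user: str) -> Optional[ThreadElement]:
            # Starting with the latest element, return the first element that does not
            # include the target user in its next field.
            for element in reversed(self.elements):
                if target_user not in element.next_users:
                    return element
            return None

        def expand_thread(self, target_user: str) -> List[ThreadElement]:
            # Walk backwards from the leading edge, collecting earlier elements that list the
            # target user in their next field, stopping once an element no longer lists the
            # target user in its previous field.
            edge = self.leading_edge(target_user)
            if edge is None:
                return []
            collected = [edge]
            index = self.elements.index(edge) - 1
            while index >= 0 and target_user in self.elements[index].next_users:
                collected.append(self.elements[index])
                if target_user not in self.elements[index].prev_users:
                    break
                index -= 1
            return list(reversed(collected))

In this sketch, a query for a target user would call leading_edge, and an expansion request would call expand_thread, mirroring the operations described above.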
Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving an indication of a new event including at least a subset of the set of user identifiers, where the new event may be associated with the thread of events in accordance with a thread identifier, identifying the thread of events using the thread identifier, adding, to the thread of events, a new element corresponding to the new event, where the new element includes the respective previous field including one or more of the set of user identifiers that may be associated with at least one prior event to the new element in the thread of events, and updating, based on adding the new element to the new event, the respective next field of each element of the thread of events such that the respective next field includes one or more of the set of user identifiers that may be associated with at least one subsequent event to the current element in the thread of events. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, receiving the indication of the new event may include operations, features, means, or instructions for detecting transmission of a new message between one or more of the set of user identifiers. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for storing a table in association with the thread of events, where the table includes a summary of the thread of events. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, each event corresponds to a meeting, an email, a message, or a phone call. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, each element of the thread of events further includes a respective contributor field that includes one or more of the set of user identifiers that includes user identifiers of the set of user identifiers that either contributed to an event of a current element or an event prior to the current element. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, each element of the thread of events further includes an insight field indicating one or more of the set of user identifiers based on which users of the set of user identifiers were a contributor to an event corresponding to a current element with an insight or any prior element corresponding to an event with an insight. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, aspects from two or more of the methods may be combined. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. 
In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. 
In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
77,521
11861400
The drawings are not necessarily to scale and may be illustrated by phantom lines, diagrammatic representations and fragmentary views. In certain instances, details that are not necessary for an understanding of the embodiments or that render other details difficult to perceive may have been omitted. DETAILED DESCRIPTION In an embodiment, disclosed systems and methods present a flexible, scalable, and reliable method for selecting and deploying an optimally secure and efficient distributed network. Use of trusted hardware and related technology may enable rapid and decentralized authentication of devices; in embodiments, block-chains or similar distributed data management facilities may be used in authentication and device selection, permitting efficiency of rapid lookup to be coupled to reliability of consensus and other methods for authentication. Systems and methods as described herein may involve computation, calculation, assessment, assignment, or use of a confidence level associated with one or more processes, devices, or data, including without limitation one or more processes, appraisals, and/or cryptographic evaluators as described herein. Confidence level, as used herein, is an element of data expressing a degree to which the safety, security, or authenticity of a process, device, or datum may be relied upon. As used herein, a confidence level may include a numerical score; numerical score may be a score on a scale having one extremum representing a maximal degree of reliability, and a second extremum representing a minimum degree of reliability. As a non-limiting example, extremum representing maximal degree of reliability may be a maximal number of an ordered set of numbers such as an open or closed set on the real number line, a sequential listing of integers or natural numbers, or the like; persons skilled in the art will be aware that selection of a numerical extremum to represent a higher level of confidence or reliability, albeit intuitively pleasing, is not mathematically necessary, and any suitable mapping of level of confidence or reliability to numerical objects or ranges may feasibly be substituted. As a further non-limiting example, numerical score may include, or be mappable to, a probability score, such as a percentage probability or a 0-1 probability level. Confidence level may include further information or indications, such as without limitation flags denoting untrustworthy, suspect, or hostile elements; for instance a flag may indicate that a particular device, program, process, or element of data appears to be compromised and/or has been involved in fraudulent or otherwise hostile or disruptive engagement with system100and/or methods described herein in the past. Methods of aggregating, computing, and/or using confidence levels will be described in further detail below. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various ways in which confidence levels may be implemented, calculated, assigned, and/or used as consistent with methods and systems disclosed herein. In an embodiment, methods and systems described herein may perform implement one or more aspects of a cryptographic system. In one embodiment, a cryptographic system is a system that converts data from a first form, known as “plaintext,” which is intelligible when viewed in its intended format, into a second form, known as “cyphertext,” which is not intelligible when viewed in the same way. 
Cyphertext may be unintelligible in any format unless first converted back to plaintext. In one embodiment, a process of converting plaintext into cyphertext is known as “encryption.” Encryption process may involve the use of a datum, known as an “encryption key,” to alter plaintext. Cryptographic system may also convert cyphertext back into plaintext, which is a process known as “decryption.” Decryption process may involve the use of a datum, known as a “decryption key,” to return the cyphertext to its original plaintext form. In embodiments of cryptographic systems that are “symmetric,” decryption key is essentially the same as encryption key: possession of either key makes it possible to deduce the other key quickly without further secret knowledge. Encryption and decryption keys in symmetric cryptographic systems may be kept secret, and shared only with persons or entities that the user of the cryptographic system wishes to be able to decrypt the cyphertext. One example of a symmetric cryptographic system is the Advanced Encryption Standard (“AES”), which arranges plaintext into matrices and then modifies the matrices through repeated permutations and arithmetic operations with an encryption key. In embodiments of cryptographic systems that are “asymmetric,” either encryption or decryption key cannot be readily deduced without additional secret knowledge, even given the possession of a corresponding decryption or encryption key, respectively; a common example is a “public key cryptographic system,” in which possession of the encryption key does not make it practically feasible to deduce the decryption key, so that the encryption key may safely be made available to the public. An example of a public key cryptographic system is RSA, in which an encryption key involves the use of numbers that are products of very large prime numbers, but a decryption key involves the use of those very large prime numbers, such that deducing the decryption key from the encryption key requires the practically infeasible task of computing the prime factors of a number which is the product of two very large prime numbers. Another example is elliptic curve cryptography, which relies on the fact that given two points P and Q on an elliptic curve over a finite field, and a definition for addition where A+B=R, the point where a line connecting point A and point B intersects the elliptic curve, where “0,” the identity, is a point at infinity in a projective plane containing the elliptic curve, finding a number k such that adding P to itself k times results in Q is computationally impractical, given correctly selected elliptic curve, finite field, and P and Q. Some embodiments of the disclosed systems and methods involve creation and/or evaluation of digital signatures. A digital signature as used herein is an application of a secure proof of a secret possessed by a particular device and/or user thereof to an element or lot of data, or to a verifiable mathematical representation of the element or lot of data, which may include a cryptographic hash as described above. A secure proof, as used herein, is a protocol whereby an output is generated that demonstrates possession of a secret, such as module-specific secret, without demonstrating the entirety of the module-specific secret; in other words, a secure proof by itself, is insufficient to reconstruct the entire module-specific secret, enabling the production of at least another secure proof using at least a module-specific secret. 
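To make the asymmetric case described above concrete before turning to secure proofs in more detail, the following toy RSA sketch in Python uses deliberately tiny primes; the specific numbers are illustrative assumptions only, and practical keys use primes hundreds of digits long precisely so that factoring n, and hence recovering the decryption exponent, remains infeasible.

    # Toy RSA key pair built from tiny primes (illustrative only; not secure).
    p, q = 61, 53
    n = p * q                  # 3233; part of the public encryption key
    phi = (p - 1) * (q - 1)    # 3120; computable only with the secret prime factors
    e = 17                     # public exponent, so the encryption key is (e, n)
    d = pow(e, -1, phi)        # 2753; private exponent, so the decryption key is (d, n)

    message = 65
    ciphertext = pow(message, e, n)     # anyone holding (e, n) can encrypt
    recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt
    assert recovered == message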
Where at least a module-specific secret is a plurality of secrets, such as a plurality of challenge-response pairs, a secure proof may include an output that reveals the entirety of one of the plurality of secrets, but not all of the plurality of secrets; for instance, secure proof may be a response contained in one challenge-response pair. In an embodiment, proof may not be secure; in other words, proof may include a one-time revelation of at least a module-specific secret, for instance as used in a single challenge-response exchange. Secure proof may include a zero-knowledge proof, which may provide an output demonstrating possession of a secret while revealing none of the secret to a recipient of the output; zero-knowledge proof may be information-theoretically secure, meaning that an entity with infinite computing power would be unable to determine secret from output. Alternatively, zero-knowledge proof may be computationally secure, meaning that determination of secret from output is computationally infeasible, for instance to the same extent that determination of a private key from a public key in a public key cryptographic system is computationally infeasible. Zero-knowledge proof algorithms may generally include a set of two algorithms, a prover algorithm, or “P,” which is used to prove computational integrity and/or possession of a secret, and a verifier algorithm, or “V,” whereby a party may check the validity of P. Zero-knowledge proof may include an interactive zero-knowledge proof, wherein a party verifying the proof must directly interact with the proving party; for instance, the verifying and proving parties may be required to be online, or connected to the same network as each other, at the same time. Interactive zero-knowledge proof may include a “proof of knowledge” proof, such as a Schnorr algorithm for proof of knowledge of a discrete logarithm. In a Schnorr algorithm, a prover commits to a randomness r, generates a message based on r, and then generates a response adding r to a challenge c multiplied by a discrete logarithm that the prover is able to calculate; verification is performed by the verifier, who produced c, by exponentiation, thus checking the validity of the discrete logarithm. Interactive zero-knowledge proofs may alternatively or additionally include sigma protocols. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various alternative interactive zero-knowledge proofs that may be implemented consistently with this disclosure. Alternatively, zero-knowledge proof may include a non-interactive zero-knowledge proof, or a proof wherein neither party to the proof interacts with the other party to the proof; for instance, each of a party receiving the proof and a party providing the proof may receive a reference datum which the party providing the proof may modify or otherwise use to perform the proof. As a non-limiting example, zero-knowledge proof may include a succinct non-interactive argument of knowledge (ZK-SNARKS) proof, wherein a “trusted setup” process creates proof and verification keys using secret (and subsequently discarded) information encoded using a public key cryptographic system, a prover runs a proving algorithm using the proving key and secret information available to the prover, and a verifier checks the proof using the verification key; public key cryptographic system may include RSA, elliptic curve cryptography, ElGamal, or any other suitable public key cryptographic system.
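The interactive Schnorr-style proof of knowledge outlined above can be sketched as follows; the tiny group parameters and variable names are assumptions chosen for readability, and a real deployment would use a standardized group of large prime order.

    import secrets

    # Toy group: g = 2 generates a subgroup of prime order q = 11 modulo p = 23 (2^11 mod 23 == 1).
    p, q, g = 23, 11, 2

    x = secrets.randbelow(q - 1) + 1      # prover's secret discrete logarithm
    y = pow(g, x, p)                      # public value; prover claims knowledge of x with y = g^x mod p

    r = secrets.randbelow(q - 1) + 1      # prover: commit to a randomness r
    t = pow(g, r, p)                      # commitment message sent to the verifier

    c = secrets.randbelow(q)              # verifier: random challenge

    s = (r + c * x) % q                   # prover: response adds r to the challenge times the secret

    # Verifier: exponentiation check; g^s must equal t * y^c (mod p).
    assert pow(g, s, p) == (t * pow(y, c, p)) % p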
Generation of trusted setup may be performed using a secure multiparty computation so that no one party has control of the totality of the secret information used in the trusted setup; as a result, if any one party generating the trusted setup is trustworthy, the secret information may be unrecoverable by malicious parties. As another non-limiting example, non-interactive zero-knowledge proof may include a Succinct Transparent Arguments of Knowledge (ZK-STARKS) zero-knowledge proof. In an embodiment, a ZK-STARKS proof includes a Merkle root of a Merkle tree representing evaluation of a secret computation at some number of points, which may be 1 billion points, plus Merkle branches representing evaluations at a set of randomly selected points of the number of points; verification may include determining that Merkle branches provided match the Merkle root, and that point verifications at those branches represent valid values, where validity is shown by demonstrating that all values belong to the same polynomial created by transforming the secret computation. In an embodiment, ZK-STARKS does not require a trusted setup. Zero-knowledge proof may include any other suitable zero-knowledge proof. Zero-knowledge proof may include, without limitation bulletproofs. Zero-knowledge proof may include a homomorphic public-key cryptography (hPKC)-based proof. Zero-knowledge proof may include a discrete logarithmic problem (DLP) proof. Zero-knowledge proof may include a secure multi-party computation (MPC) proof. Zero-knowledge proof may include, without limitation, an incrementally verifiable computation (IVC). Zero-knowledge proof may include an interactive oracle proof (TOP). Zero-knowledge proof may include a proof based on the probabilistically checkable proof (PCP) theorem, including a linear PCP (LPCP) proof. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various forms of zero-knowledge proofs that may be used, singly or in combination, consistently with this disclosure. In an embodiment, secure proof is implemented using a challenge-response protocol. In an embodiment, this may function as a one-time pad implementation; for instance, a manufacturer or other trusted party may record a series of outputs (“responses”) produced by a device possessing secret information, given a series of corresponding inputs (“challenges”), and store them securely. In an embodiment, a challenge-response protocol may be combined with key generation. A single key may be used in one or more digital signatures as described in further detail below, such as signatures used to receive and/or transfer possession of crypto-currency assets; the key may be discarded for future use after a set period of time. In an embodiment, varied inputs include variations in local physical parameters, such as fluctuations in local electromagnetic fields, radiation, temperature, and the like, such that an almost limitless variety of private keys may be so generated. Secure proof may include encryption of a challenge to produce the response, indicating possession of a secret key. 
Encryption may be performed using a private key of a public key cryptographic system, or using a private key of a symmetric cryptographic system; for instance, trusted party may verify response by decrypting an encryption of challenge or of another datum using either a symmetric or public-key cryptographic system, verifying that a stored key matches the key used for encryption as a function of at least a module-specific secret. Keys may be generated by random variation in selection of prime numbers, for instance for the purposes of a cryptographic system such as RSA that relies prime factoring difficulty. Keys may be generated by randomized selection of parameters for a seed in a cryptographic system, such as elliptic curve cryptography, which is generated from a seed. Keys may be used to generate exponents for a cryptographic system such as Diffie-Helman or ElGamal that are based on the discrete logarithm problem. A digital signature may include, without limitation, an encrypted mathematical representation of a file or other set of data using the private key of a public key cryptographic system. Signature may be verified by decrypting the encrypted mathematical representation using the corresponding public key and comparing the decrypted representation to a purported match that was not encrypted; if the signature protocol is well-designed and implemented correctly, this means the ability to create the digital signature is equivalent to possession of the private decryption key. Likewise, if mathematical representation of file is well-designed and implemented correctly, any alteration of the file will result in a mismatch with the digital signature; the mathematical representation may be produced using an alteration-sensitive, reliably reproducible algorithm, such as a hashing algorithm as described in further detail below. A mathematical representation to which the signature may be compared may be included with signature, for verification purposes; in other embodiments, the algorithm used to produce the mathematical representation is publicly available, permitting the easy reproduction of the mathematical representation corresponding to any file. In an embodiment, and continuing to refer toFIG.2, a digital signature may have a property of unlinkability; that is, digital signature may be delegated from one device to another in a way that makes digital signature impossible or practically infeasible to use for deduction of a granting device or of a digital signature that was previously used to derive and/or generate digital signature. In an embodiment, and without limitation, this may be accomplished as described in Provisional Application No. 62/815,493, filed on Mar. 8, 2019, and entitled “METHODS AND SYSTEMS FOR IMPLEMENTING AN ANONYMIZED ATTESTATION CHAIN,” the entirety of which is incorporated herein by reference. Still viewingFIG.2, in some embodiments, digital signatures may be combined with or incorporated in digital certificates. In one embodiment, a digital certificate is a file that conveys information and links the conveyed information to a “certificate authority” that is the issuer of a public key in a public key cryptographic system. Certificate authority in some embodiments contains data conveying the certificate authority's authorization for the recipient to perform a task. The authorization may be the authorization to access a given datum. The authorization may be the authorization to access a given process. 
In some embodiments, the certificate may identify the certificate authority. The digital certificate may include a digital signature. With continued reference toFIG.2, in some embodiments, a third party such as a certificate authority (CA) is available to verify that the possessor of the private key is a particular entity; thus, if the certificate authority may be trusted, and the private key has not been stolen, the ability of an entity to produce a digital signature confirms the identity of the entity and links the file to the entity in a verifiable way. Digital signature may be incorporated in a digital certificate, which is a document authenticating the entity possessing the private key by authority of the issuing certificate authority and signed with a digital signature created with that private key and a mathematical representation of the remainder of the certificate. In other embodiments, digital signature is verified by comparing the digital signature to one known to have been created by the entity that purportedly signed the digital signature; for instance, if the public key that decrypts the known signature also decrypts the digital signature, the digital signature may be considered verified. Digital signature may also be used to verify that the file has not been altered since the formation of the digital signature. In other embodiments where trust in a single certificate authority is undesirable (e.g., where there is concern of the certificate authority and verifier colluding), the same functionality may be accomplished by a group of certificate authorities acting to authenticate in coordination, with the requirement that a threshold number of the group of certificate authorities, and/or a threshold proportion of the group of certificate authorities, agree (e.g. “threshold cryptography”); a confidence level in each certificate authority may be determined according to any method or means described herein for determination of a confidence level in any device or entity, including without limitation in a cryptographic evaluator as described in further detail below. In an embodiment, certificate authorities that have a confidence level below a given threshold level may be eliminated; in other embodiments, certificate authority confidence levels may be aggregated according to any method shown herein. Aggregate confidence level may be used for threshold cryptography as described above; for instance, agreeing certificate authorities may have an aggregate confidence level which must exceed a threshold, or aggregate confidence level of agreeing certificate authorities may be required to represent a threshold proportion of aggregate confidence level of all certificate authorities in group. Additional embodiments may include group signature schemes that issue certificates on a membership public key generated by a secure computing hardware apparatus as described in further detail below; in such scenarios, authentication may include proof by the secure computing hardware apparatus that the secure computing hardware apparatus possesses a secret key to a public key/certificate pair. In some embodiments, persons, devices, or transactions may be authenticated or assigned a confidence level using digital certificates. In one embodiment, a digital certificate is a file that conveys information and links the conveyed information to a “certificate authority” that is the issuer of a public key in a public key cryptographic system. 
Certificate authority in some embodiments contains data conveying the certificate authority's authorization for the recipient to perform a task. The authorization may be the authorization to access a given datum. The authorization may be the authorization to access a given process. In some embodiments, the certificate may identify the certificate authority. The digital certificate may include a digital signature. In some embodiments, a third party such as a certificate authority (CA) is available to verify that the possessor of the private key is a particular entity; thus, if the certificate authority may be trusted, and the private key has not been stolen, the ability of an entity to produce a digital signature confirms the identity of the entity and links the file to the entity in a verifiable way. Digital signature may be incorporated in a digital certificate, which is a document authenticating the entity possessing the private key by authority of the issuing certificate authority and signed with a digital signature created with that private key and a mathematical representation of the remainder of the certificate. In other embodiments, digital signature is verified by comparing the digital signature to one known to have been created by the entity that purportedly signed the digital signature; for instance, if the public key that decrypts the known signature also decrypts the digital signature, the digital signature may be considered verified. Digital signature may also be used to verify that the file has not been altered since the formation of the digital signature. In other embodiments where trust in a single certificate authority is undesirable (e.g., where there is concern of the certificate authority and verifier colluding), the same functionality may be accomplished by a group of certificate authorities acting to authenticate in coordination, with the requirement that a threshold number of the group of certificate authorities, and/or a threshold proportion of the group of certificate authorities, agree (e.g. “threshold cryptography”); a confidence level in each certificate authority may be determined according to any method or means described herein for determination of a confidence level in any device or entity, including without limitation in a cryptographic evaluator as described in further detail below. In an embodiment, certificate authorities that have a confidence level below a given threshold level may be eliminated; in other embodiments, certificate authority confidence levels may be aggregated according to any method shown herein. Aggregate confidence level may be used for threshold cryptography as described above; for instance, agreeing certificate authorities may have an aggregate confidence level which must exceed a threshold, or aggregate confidence level of agreeing certificate authorities may be required to represent a threshold proportion of aggregate confidence level of all certificate authorities in group. Additional embodiments may include group signature schemes that issue certificates on a membership public key generated by a secure computing module116as described in further detail below; in such scenarios, authentication may include proof by the secure computing module116that the secure computing module116possesses a secret key to a public key/certificate pair. 
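One non-limiting way to express the threshold arrangement just described, in which agreeing certificate authorities must carry sufficient aggregate confidence, is sketched below; the numeric weights, the elimination threshold, and the required share are illustrative assumptions rather than parameters prescribed by this disclosure.

    from typing import Dict, Set

    def quorum_reached(confidence: Dict[str, float],
                       agreeing: Set[str],
                       min_confidence: float = 0.2,
                       required_share: float = 0.6) -> bool:
        """Decide whether agreeing certificate authorities carry enough aggregate confidence.

        confidence     -- confidence level assigned to each certificate authority (0.0-1.0)
        agreeing       -- authorities that signed off on the authentication
        min_confidence -- authorities below this level are eliminated from consideration
        required_share -- aggregate confidence of agreeing authorities must represent at least
                          this proportion of the aggregate confidence of all remaining authorities
        """
        # Eliminate authorities whose confidence level falls below the threshold.
        eligible = {ca: level for ca, level in confidence.items() if level >= min_confidence}
        total = sum(eligible.values())
        if total == 0:
            return False
        agreeing_total = sum(level for ca, level in eligible.items() if ca in agreeing)
        return (agreeing_total / total) >= required_share

    cas = {"ca-1": 0.9, "ca-2": 0.7, "ca-3": 0.4, "ca-4": 0.1}   # ca-4 falls below min_confidence
    print(quorum_reached(cas, agreeing={"ca-1", "ca-2"}))         # True: 1.6 / 2.0 = 0.8 >= 0.6
    print(quorum_reached(cas, agreeing={"ca-3"}))                 # False: 0.4 / 2.0 = 0.2 < 0.6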
Although digital signatures have been introduced here as performed using public key cryptographic systems, digital signatures may alternatively or additionally be performed using any non-interactive zero-knowledge proof; for instance, a proof may be recorded in conjunction with a datum, and a verification may be performed by any party seeking to evaluate the proof. Certificate authority may be implemented in a number of ways, including without limitation as described in Provisional Application No. 62/758,367, filed on Nov. 9, 2018, and entitled “METHOD AND SYSTEMS FOR A DISTRIBUTED CERTIFICATE AUTHORITY,” the entirety of which is incorporated herein by reference; for instance, and without limitation, certificate authority may include, be included in, and/or be implemented as a distributed certificate authority as described in Provisional Application No. 62/758,367. In some embodiments, systems and methods described herein produce cryptographic hashes, also referred to by the equivalent shorthand term “hashes.” A cryptographic hash, as used herein, is a mathematical representation of a lot of data, such as files or blocks in a block chain as described in further detail below; the mathematical representation is produced by a lossy “one-way” algorithm known as a “hashing algorithm.” Hashing algorithm may be a repeatable process; that is, identical lots of data may produce identical hashes each time they are subjected to a particular hashing algorithm. Because hashing algorithm is lossy, it may be impossible to reconstruct a lot of data from a hash produced from the lot of data using the hashing algorithm. In the case of some hashing algorithms, reconstructing the full lot of data from the corresponding hash using a partial set of data from the full lot of data may be possible only by repeatedly guessing at the remaining data and repeating the hashing algorithm; it is thus computationally difficult if not infeasible for a single computer to produce the lot of data, as the statistical likelihood of correctly guessing the missing data may be extremely low. However, the statistical likelihood of a computer of a set of computers simultaneously attempting to guess the missing data within a useful timeframe may be higher, permitting mining protocols as described in further detail below. In an embodiment, hashing algorithm may demonstrate an “avalanche effect,” whereby even extremely small changes to lot of data produce drastically different hashes. This may thwart attempts to avoid the computational work necessary to recreate a hash by simply inserting a fraudulent datum in data lot, enabling the use of hashing algorithms for “tamper-proofing” data such as data contained in an immutable ledger as described in further detail below. This avalanche or “cascade” effect may be evinced by various hashing processes; persons skilled in the art, upon reading the entirety of this disclosure, will be aware of various suitable hashing algorithms for purposes described herein. Verification of a hash corresponding to a lot of data may be performed by running the lot of data through a hashing algorithm used to produce the hash. Such verification may be computationally expensive, albeit feasible, potentially adding up to significant processing delays where repeated hashing, or hashing of large quantities of data, is required, for instance as described in further detail below. 
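The repeatability and avalanche behavior described above may be demonstrated with a short sketch using Python's standard hashlib module and SHA-256, a member of the SHA-2 family named in the list that follows; the sample data is arbitrary and purely illustrative.

    import hashlib

    def digest(data: bytes) -> str:
        # A hash is a repeatable, lossy, one-way mathematical representation of a lot of data.
        return hashlib.sha256(data).hexdigest()

    lot = b"example lot of data recorded in an immutable ledger"

    # Repeatable: identical data always produces an identical hash, so verification amounts
    # to re-running the hashing algorithm and comparing digests.
    assert digest(lot) == digest(lot)

    # Avalanche effect: changing a single byte yields a drastically different, apparently
    # uncorrelated digest, which is what makes tampering with hashed data detectable.
    tampered = b"Example lot of data recorded in an immutable ledger"
    print(digest(lot))
    print(digest(tampered))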
Examples of hashing programs include, without limitation, Winternitz hashing algorithms, various generations of Secure Hash Algorithm (including “SHA-1,” “SHA-2,” and “SHA-3”), “Message Digest” family hashes such as “MD4,” “MD5,” “MD6,” and “RIPEMD,” Keccak, “BLAKE” hashes and progeny (e.g., “BLAKE2,” “BLAKE-256,” “BLAKE-512,” and the like), Message Authentication Code (“MAC”)-family hash functions such as PMAC, OMAC, VMAC, HMAC, and UMAC, Poly1305-AES, Elliptic Curve Only Hash (“ECOH”) and similar hash functions, Fast-Syndrome-based (FSB) hash functions, GOST hash functions, the Grøstl hash function, the HAS-160 hash function, the JH hash function, the RadioGatún hash function, the Skein hash function, the Streebog hash function, the SWIFFT hash function, the Tiger hash function, the Whirlpool hash function, or any hash function that satisfies, at the time of implementation, the requirements that a cryptographic hash be deterministic, infeasible to reverse-hash, infeasible to find collisions, and have the property that small changes to an original message to be hashed will change the resulting hash so extensively that the original hash and the new hash appear uncorrelated to each other. A degree of security of a hash function in practice may depend both on the hash function itself and on characteristics of the message and/or digest used in the hash function. For example, where a message is random, for a hash function that fulfills collision-resistance requirements, a brute-force or “birthday attack” to detect a collision may be on the order of O(2^(n/2)) for n output bits; thus, it may take on the order of 2^256 operations to locate a collision in a 512-bit output. “Dictionary” attacks on hashes likely to have been generated from a non-random original text can have a lower computational complexity, because the space of entries they are guessing is far smaller than the space containing all random permutations of bits. However, the space of possible messages may be augmented by increasing the length or potential length of a possible message, or by implementing a protocol whereby one or more randomly selected strings or sets of data are added to the message, rendering a dictionary attack significantly less effective. Referring now to FIG. 1, a system 100 for selecting a distributed framework is illustrated. System 100 includes a selection device 104. Selection device 104 may include any computing device as described in this disclosure, including without limitation a microcontroller, microprocessor, digital signal processor (DSP) and/or system on a chip (SoC), or a Graphics Processing Unit (GPU) as described in this disclosure. Selection device 104 may include, be included in, and/or communicate with a mobile device such as a mobile telephone or smartphone. Selection device 104 may include a single computing device operating independently, or may include two or more computing devices operating in concert, in parallel, sequentially or the like; two or more computing devices may be included together in a single computing device or in two or more computing devices. Selection device 104 may interface with one or more additional devices as described below in further detail via a network interface device. Network interface device may be utilized for connecting a selection device 104 to one or more of a variety of networks, and one or more devices. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof.
Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software, etc.) may be communicated to and/or from a computer and/or a computing device. Selection device 104 may include, but is not limited to, for example, a first computing device or cluster of computing devices in a first location and a second computing device or cluster of computing devices in a second location. Selection device 104 may include one or more computing devices dedicated to data storage, security, distribution of traffic for load balancing, and the like. Selection device 104 may distribute one or more computing tasks as described below across a plurality of computing devices, which may operate in parallel, in series, redundantly, or in any other manner used for distribution of tasks or memory between computing devices. Selection device 104 may be implemented using a “shared nothing” architecture in which data is cached at the worker; in an embodiment, this may enable scalability of system 100 and/or computing device. Still referring to FIG. 1, selection device 104 is coupled to a memory 108. Memory 108 may include any form of memory described below in reference to FIG. 4. Memory 108 may be incorporated in a device containing selection device 104, distributed through several devices, which may contain selection device 104, or a component thereof, or in another device accessible to selection device 104 via electronic communication. Selection device 104 may be communicatively connected to a plurality of cryptographic evaluators 112. Selection device 104 may be designed and configured to perform any method step or steps as disclosed herein; as a non-limiting example, selection device 104 may be designed and configured to identify at least a first cryptographic evaluator of the plurality of cryptographic evaluators, assign a confidence level of the at least a first cryptographic evaluator, and select a distributed framework from the plurality of cryptographic evaluators as a function of the at least a first cryptographic evaluator and of the confidence level. With continued reference to FIG. 1, any cryptographic evaluator of the plurality of cryptographic evaluators may include a secure computing module 116. As used herein, a secure computing module 116 is a hardware element configured to perform one or more secured operations beyond the control of other circuit elements or software, whether incorporated with the secure computing module 116 in a circuit or computing device, or a part of an extrinsic computing device. As a result, at least one secured operation performed by secure computing module 116 may be intrinsically reliable; that is, the at least one secured operation may be relied upon by any other module or user to produce an expected result regardless of behavior by neutral or adversarial parties, as long as some basic set of assumptions holds true.
Other parties may be able to assign a confidence level in secure computing module116and/or a system or computing device incorporating secure computing module116based on the above-described set of assumptions. As a non-limiting, example, a secure computing module116designed to produce an expected result despite all software-only attacks may give rise to a first confidence level, whereas another secure computing module116designed to produce its expected result in the face of all software or hardware attacks may give rise to a second confidence level; the second confidence level may be higher, owing to the reduced probability that the second secure computing module116would be compromised. Still viewingFIG.1, secure computing module116may include a trusted platform module (TPM120). In an embodiment, a TPM120may include a hardware module, which may be an integrated circuit, an optoelectronic circuit, a section of an integrated circuit on the same die as a processor, an integrated circuit packaged with other die in a multi-chip module or other multi-die integration method, or printed circuit board product; TPM120may have any suitable elements of digital or analog circuitry usable to perform one or more processes as described herein, including without limitation processes used to determine confidence levels and/or authenticate digitally signed assertions as described below. TPM120may have memory and/or other logic and/or a processor in its own right which may be in a non-limiting example a crypto-processor. TPM120may have a hard-coded process for signing a digital signature, which may be performed using a private key, which is associated with a public key. This private key and/or signing process may be produced using a genuinely random process during manufacturing, and/or unique object (UNO) fingerprint, and/or a physically unclonable function (PUF), or any other disorder-based security primitive, defined as a function that creates challenge responses from a physical circuit that depend on unique features of that circuit, including without limitation microstructure features or elements that depend on random physical factors occurring or conferred during manufacture. Private key may be extracted via physically unclonable function processes using, for instance, a fuzzy extractor or key extractor physically unclonable function. Private key extraction may utilize additional corrective measures, including as a nonlimiting example machine learning, neural networks, convolutional neural networks and the like, or other approaches to provide error correction over the operating temperature range of the device. Private key generation may additionally incorporate true random number generator(s) (TRNGs), pseudorandom number generators (PRNGs) and related devices. With continued reference toFIG.1, secure computing module116may include at least PUF124. PUF124may be implemented by various means. In an embodiment, PUF124includes one or more non-intrinsic PUFs. Non-intrinsic PUFs may include without limitation optics-based PUFs. Optics-based PUFs may include, as a nonlimiting example, optical PUFs. 
An optical PUF may be implemented by combining a light source such as lasers with a material that causes unpredictable scattering from the light source; one or more light sensors or light sensor arrays may be used to detect scattered light and output an electrical signal, for instance by generating, at a given light sensor unit, a logic 1 signal for detected light above a given threshold intensity or energy content, and a logic 0 signal for detected light below such threshold. Each light sensor may include any suitable device for converting light to an electrical signal; such devices include, without limitation, avalanche photodiodes (APDs), single photon avalanche diodes (SPADs), silicon photo-multipliers (SiPMs), photo-multiplier tubes (PMTs), micro-channel plates (MCPs), micro-channel plate photomultiplier tubes (MCP-PMTs), photodiodes, and/or photosensitive or photon-detecting circuit elements and/or transducers. Avalanche photodiodes (APDs), as used herein, may include diodes (e.g., without limitation, p-n, p-i-n, and others) reverse biased such that a single photon-generated carrier can trigger a short, temporary “avalanche” of photocurrent on the order of milliamps or more, caused by electrons being accelerated through a high-field region of the diode and impact ionizing covalent bonds in the bulk material, these in turn triggering greater impact ionization of electron-hole pairs. When the reverse bias is less than the breakdown voltage, the gain of the APD is approximately linear. For silicon APDs this gain is on the order of 10-100. An APD reverse biased significantly above the breakdown voltage is referred to as a Single Photon Avalanche Diode, or SPAD. In this case the n-p electric field is sufficiently high to sustain an avalanche of current with a single photon, hence referred to as “Geiger mode.” This avalanche current rises rapidly (sub-nanosecond), such that detection of the avalanche current can be used to approximate the arrival time of the incident photon. The SPAD may be pulled below breakdown voltage once triggered in order to reset or quench the avalanche current before another photon may be detected, as while the avalanche current is active, carriers from additional photons may have a negligible effect on the current in the diode. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various alternative or additional light detection devices that may be used to detect light scattered by scattering medium. Still referring to FIG. 1, non-intrinsic PUF may include without limitation a radio frequency (RF)-based PUF. A radio-frequency PUF may be constructed by embedding thin, randomly arranged copper wires in flexible silicone sealant or other RF permissive medium to be exposed to a source of electromagnetic waves, which may, in a non-limiting example, emit in the 5-6 GHz band; near-field scattering of such waves by the copper wires may be detected, for instance, using a matrix of antennas, and measured, for instance in a 5-6 GHz band, to produce an “RF-DNA PUF” secret.
Alternatively, an RF-based PUF may be fabricated as an inductor-capacitor (LC) PUF by, for instance, incorporating a capacitor, such as a glass plate with metal plates on both sides, serially chained with a passive inductor such as a metal coil on the glass plate; this may form a passive LC resonator circuit which may absorb some amount of power when placed in an external RF field, using for instance an RF emitter as described above. A frequency sweep may indicate the circuit resonant frequencies, which depend on the capacitive and inductive components. Manufacturing variations in the construction may lead to resonant peak variations, the detection of which may generate a secret. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various alternative, additional, or modified methods, means, and/or procedures suitable for use in fabrication of the above described PUFs, or of modification of methods for construction of RF PUFs to be compatible with fabrication of other elements, or with methods of fabrication thereof, as disclosed herein, including without limitation CMOS fabrication. With continued reference toFIG.1, non-intrinsic PUF may include one or more electronics-based PUFs. Electronics-based PUFs may include, as a nonlimiting example, coating PUFs. In a non-limiting example of a coating PUF, a comb-shaped sensor may be fabricated on the surface of an integrated circuit. A passive dielectric coating may be sprayed directly on the surface, where the dielectric particles are dispersed randomly. Capacitance measurements between sensors may be used as identifiers. Opaque and chemically inert coating may offer further protection. Non-intrinsic PUFs may include power distribution network PUFs. Power distribution network PUFs may be based on resistance variations in a power grid of a silicon chip. Voltage drops and equivalent resistances in the power distribution system may be measured and are subject to random manufacturing variability. Additional non-intrinsic PUFs may include, without limitation, compact disc (CD)-based PUFs. For instance, measured lengths of lands and pits on a CD may exhibit a random deviation from their intended lengths due to fabrication process variations. This variation may be large enough to be observed by monitoring the electrical signal of the photodetector in a CD player. Non-intrinsic PUFs may include acoustical PUFs, which may be constructed by observing the characteristic frequency spectrum of an acoustical delay line, where a bit string is extracted by performing principal component analysis. Non-intrinsic PUFs may include magstripe-based PUFs, which may leverage randomness of particle patterns in magnetic media (for instance in magnetic swipe cards). These types of PUFs may be used commercially to prevent credit card fraud. In all examples, the bit string may be obtained by a number of mathematical processes, for example independent component analysis (ICA), principal component analysis (PCA), signal power spectral density (PSD), etc. In an embodiment, and still referring toFIG.1, PUF124may include an "intrinsic PUF" produced via semiconductor construction, including without limitation the fabrication of semiconductor circuit elements based on silicon.
As a non-limiting example, a pair of paths may be simulated with identical properties in a design of an integrated circuit; upon fabrication based on simulation, signals may propagate around each path of the pair of paths at a slightly different rate than the other path of the pair of paths. Fabrication may further include fabrication of an “arbiter” component connected to the two paths, the arbiter component configured to generate a first output if a signal arrives first from a first path of the two paths and a second output if a signal arrives first from a second path of the two paths; first output and second output may correspond, as a non-limiting example, to digital values such as logic 1 and logic 0. A plurality of such constructions may be combined to produce a plurality of randomly generated output bits. Other such race-condition PUFs may be similarly constructed. In an embodiment, an intrinsic PUF circuit may be manufactured by fabricating a circuit including two multiplexors, two counters, one comparator, and a plurality of ring oscillators; each oscillator may connect to an input of the two multiplexors, which may be configured to select two ring oscillators to compare, while the counters count the number of oscillations per a time period, and the output is set to 0 if one counter has a higher value and 1 if another counter has a higher value. Multiple such combinations may be used to generate a plurality of bits. With continued reference toFIG.1, intrinsic PUFs may include asynchronous PUFs, which may be synonymous with Self-Timed Ring PUFs. These may possess the same structure as the generic ring oscillator, however such PUFs may use self-timed rings instead of the inverter chains. The design may be based on the use of the Muller's C-element, a fundamental building block of asynchronous circuits. A significant benefit of self-timed rings may be that they make resulting PUF more immune to environmental variations. However, there may be an increase in the used silicon surface area. Furthermore, these self-timed structures may be prone to entering deadlock states. Intrinsic PUFS may include glitch PUFS; this may also involve a delay-based PUF construction which may be based on glitch behavior of combinatorial logic circuits. Occurrence of glitches may be determined by the difference in delay of the different logical paths from the input to output. As with other delay-based methods, the exact circuit delays may be subject to silicon manufacturing variations, and the number and shape of resulting glitches on output signals may be unique and be used as a PUF response. Continuing to refer toFIG.1, PUF124may include a circuit producing a PUF via cross-coupled logical or analog circuit elements. As a non-limiting example, static random access memory 256 (SRAM) PUFs may be produced by cross-coupling two inverters and two access transistors. When the cell is powered up, the two cross-coupled inverters may enter a “power-struggle,” where the winner is decided by the difference in the driving strength of the MOSFETs in the cross coupled inverters. Theoretically, there may be three possible states, where two are stable and one is metastable. If the transistors in the inverter circuits are perfectly matched, then the SRAM may remain metastable forever. Practically speaking, even though the transistors are designed to be identical, random variations in fabrication may ensure one has a stronger driving current, and this defines the initial start-up value for the cell. 
The majority of cells have an initial state that consistently may be returned to when powered up, and this is an important characteristic that allows them to be used for PUFs; a plurality of such cells may be used to generate a plurality of bits. Cross-coupling may be performed between other elements, such as without limitation a cell made up of two cross-coupled NOR gates (otherwise known as a latch); in operation, the latch may be forced into an unstable state, the resolution of which to either logic 1 or logic 0 may depend on slight mismatches between NOR gates. Similarly, a D flip-flop may be incorporated in a circuit that detects its power-up behavior. Alternatively or additionally, a PUF circuit may be fabricated by cross-coupling two transparent data latches, forming a bistable circuit. By leveraging the clear functionality of the latches, the circuit may be forced into an unstable state and converge when released to an output determined by slight manufacturing variations. Other examples of PUF124in an embodiment include without limitation buskeeper PUFs, which may be similar to other PUFs based on bistable memory elements, but leveraging buskeeper cells. PUF124may also combine two or more PUF designs, for instance a bistable ring PUF, which may be a hybrid of a ring oscillator PUF and an SRAM PUF, wherein the structure is similar to the ring oscillator PUF, but the number of inverting elements is even. This may mean that the loop does not oscillate, but is bistable (like the SRAM PUF). Using reset logic, the bistable ring may destabilize and subsequently stabilize into a state that is set by the random silicon manufacturing variations. Continuing to viewFIG.1, PUF124may include mixed-signal PUFs that produce a variable analog signal as determined by small circuit variations; the analog signal may be converted to a digital signal using, for instance, an analog-to-digital converter, compared to a threshold voltage to produce a logic 1 or 0 output, or the like. PUFs may be constructed, as a non-limiting example, using threshold voltage PUFs: these may be constructed by connecting identically designed transistors in an addressable array driving resistive loads; in operation, because of random silicon manufacturing variations, the transistor threshold voltages and current through the load may be random. Similarly, mixed-signal PUFs may include inverter gain PUFs, which may be based on the variable gain of equally designed inverters. The variable gain may be random because of random silicon process variations. Each challenge-response pair may be extracted from a pair of inverters. Mixed-signal PUFs may include super high information content (SHIC) PUFs, in which an addressable array of diodes implemented as a crossbar memory 256 forms the structure; each diode may be, as a non-limiting example, produced by a crystal-growing process that seeds and produces random variation in crystal growth within the diode, resulting in unpredictably irregular I(U) curves. Read-out time of each memory 256 cell may be influenced by random silicon manufacturing variations and this forms a PUF response. Mixed-signal PUFs may include SRAM failure PUFs. Static noise margin for an individual SRAM cell may depend on random silicon manufacturing variations. As such, each SRAM cell may produce a bit failure at different noise levels, and this may be leveraged to generate a PUF response.
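The SRAM power-up behavior just described can be illustrated with a short behavioral sketch; the per-cell mismatch and noise parameters below are illustrative assumptions standing in for silicon manufacturing variation and are not measured values.

import random

def make_sram_cells(n_cells, mismatch_sigma=1.0):
    # per-cell static mismatch between the cross-coupled inverters
    return [random.gauss(0.0, mismatch_sigma) for _ in range(n_cells)]

def power_up(cells, noise_sigma=0.05):
    # the "power struggle" resolves toward the stronger inverter; a small noise
    # term can flip marginal cells, which is why real designs add error correction
    return [1 if m + random.gauss(0.0, noise_sigma) > 0 else 0 for m in cells]

cells = make_sram_cells(256)
first = power_up(cells)
second = power_up(cells)
unstable = sum(a != b for a, b in zip(first, second))
print(unstable, "unstable bits out of", len(cells))

The sketch shows why most cells yield a repeatable start-up value while a small fraction of marginal cells may flip between power-ups, motivating the corrective measures discussed earlier.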
In each case, the PUF circuit element producing the variable signal may be connected to an analog-to-digital converter, comparator, or similar element to produce one or more output bits. In an embodiment, and still viewingFIG.1, PUF124may include a circuit implementing a quantum PUF. A quantum PUF, as used herein, is a PUF that generates secrets, such as random numbers, that are unique to the PUF owing to the nanostructure of atomic layers in an electronic or other component, so that the variations are governed by quantum physics and harder to predict. Quantum PUF may include a quantum confinement PUF, which may operate by varying its output according to variations in behavior due to quantum confinement as determined by nanostructure of atomic layers of one or more components. In an embodiment, uniqueness of a quantum PUF or quantum confinement PUF may be made highly probable by the inherently random nature of atomic positions and imperfections in a quantum well. Simulating structures on such a scale may require computationally infeasible amounts of computing power, even for some quantum computers, particularly where multiple quantum PUF elements are used together; infeasibility may be enhanced by the unknown nature of the nanostructures, which may be impossible to determine without atom-by-atom dismantling. Still referring toFIG.1, implementation of quantum confinement PUFs may be achieved using any device that can measure phenomenological properties arising from behavior governed by quantum mechanics, such as without limitation properties governed by quantum confinement. Implementation may, as a non-limiting example for illustrative purposes, involve characterizing fluctuations in tunneling through quantum wells in resonant tunneling diodes (RTDs); an RTD may permit electrons to tunnel through it directly where voltage across the RTD places an energy level at a conduction band minimum. As the confined energy level may be exponentially sensitive to the width and height of a quantum well determined by atomic-level variations, such as variations in atomic uniformity at interfaces between layers in the RTD, this may cause the required voltage for tunneling to vary according to such variations in the RTD, causing RTD behavior to be dictated by such variations. Such diodes may, in a non-limiting example, be constructed by fabricating from an InGaAs/AlAs double-barrier structure, formation of top and bottom ohmic contacts, and etching, which may be wet-etching, to isolate the resulting component from other structures on the die. Quantum confinement PUF may function, as a non-limiting example, through measuring electronic properties, for instance by determining current/voltage response of one or more RTDs, other types of diodes and/or combinations of various types of diodes (in any parallel or series arrangement) and analyzing the resultant curves for peak values, slopes, gradients, valleys, full-width-half-max, number of peaks, or other features identified by the current-voltage response that would serve as a uniquely identifying characteristic. Confined energy levels may be highly sensitive to the specific nanostructure within each RTD, leading to a distinct tunneling spectrum for every device. As a non-limiting example, measurement may be performed by finding currents corresponding to energy levels by sweeping voltage across each RTD through a range, and recording the resulting currents.
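The voltage-sweep measurement just described may be illustrated with a brief sketch; the toy current model, sweep range, and quantization used below are assumptions for exposition only and do not model real RTD physics.

import hashlib

def toy_rtd_current(v, peak_v, peak_i):
    # crude single-peak curve standing in for a resonance; not a physical model
    return peak_i * max(0.0, 1.0 - ((v - peak_v) / 0.05) ** 2) + 1e-6 * v

def rtd_fingerprint(peak_v, peak_i, steps=500):
    sweep = [i / steps for i in range(steps)]                   # 0 V .. 1 V
    currents = [toy_rtd_current(v, peak_v, peak_i) for v in sweep]
    idx = max(range(steps), key=lambda k: currents[k])          # locate the peak
    # quantize the peak position and height and hash them into an identifier
    token = f"{sweep[idx]:.3f}:{currents[idx]:.4e}".encode()
    return hashlib.sha256(token).hexdigest()

print(rtd_fingerprint(peak_v=0.412, peak_i=2.3e-3))

In a physical device, the peak position and height would be set by the nanostructure of the quantum well rather than by function arguments, so two nominally identical diodes would yield different fingerprints.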
Multiple RTDs may be combined to increase output complexity, for instance by coupling together in series or by using a crossbar structure as for other diode-based PUFs. Continuing to refer toFIG.1, as persons skilled in the art will be aware upon reviewing the entirety of this disclosure, variations may be applied to RTDs and/or manufacture of RTDs to increase a degree of variation in response from one RTD to another. For instance, RTDs may be selected and/or manufactured to have a double barrier rather than a single barrier, causing behavior to depend on four barrier interfaces rather than two barrier interfaces. Variations may include incorporation of a ternary material into the quantum well. Variations may include manipulations of manufacturing steps to create uniqueness, such as without limitation inducing variations in molecular beam epitaxy growth, for instance by not rotating a sample stage during a particular step; this may introduce 1-monolayer variations at barriers, which may induce additional I-V characteristic variations. In an embodiment, such variations may also render the RTD-based PUF more tamper-resistant, as invasive probing of the device would distort the nanostructure and change the outputs; alternatively or additionally, a PUF manufactured in this way may be reconfigurable by, for instance, a controlled application of heat causing modifications to the nanostructure. Implementation variations may further include exploitation of changes in PUF response due to local variations in temperature and magnetic field; such changes would be unknown to an attacker, and may enable the production of multiple unique IDs based on such fluctuations, in a manner unpredictable even to the manufacturer. With continued reference toFIG.1, other elements or components may be used instead of or in addition to RTDs to exploit variations in quantum-physical behavior based on nanoscale variations. Such elements or components may include, without limitation, three-dimensional nanostructures, such as quantum dots, which typically have many electron and hole confinement levels. RTDs or similar elements may be modified to contain single, or a few, dots, converting this increase in the number of confined states to an increased number of peaks in their dI/dV curves; each peak, when fitted individually and combined, could form part of a unique key for at least a secret generator204a-b. The number of dots in a device such as an RTD may not be reproducible, or may be allowed to vary. There may be many constructions of quantum PUFs and/or quantum-confinement PUFs based on these principles as will be evident to those skilled in the art, upon reviewing the entirety of this disclosure, including without limitation use of alternative or additional structures or components incorporating two- or three-dimensional features evincing electrical behavior that varies based on quantum-physical properties affected by nanoscale manufacturing variations. Continuing to viewFIG.1, other applications of other types of PUFs, such as uniquely identifying a particular material good based on, for example, a unique pattern developed due to the details of how the part was manufactured or extruded, or how a finish coating was sprayed, etc., either across the part or at one or more points on the part, may also be implemented or exploited.
These details may include optical reflection/scattering at one or more of the material interfaces, the measurement of this optical response, and optionally the computation of a digital bit string uniquely identifying or representing the optical response. With continued reference toFIG.1, PUF124may include, without limitation, PUFs implemented using design of vertical interconnect accesses (VIAs) in multi-layered chips or integrated circuits. A "VIA-PUF" may be created by, without limitation, designing VIAs with a small enough size that there is a roughly equal chance that they will or will not be created; this may cause the VIAs that function in the completed circuit to be randomly placed, leading to circuit behavior that is not predictable ahead of time. The above-mentioned randomness generated by random VIA creation may cause the resulting circuit to behave as a PUF. Such a VIA-PUF may be extremely robust over time and across environmental conditions. Continuing to refer toFIG.1, PUF124may include one or more photonic PUFs. In an embodiment, a photonic PUF may take advantage of the fact that some photonic devices can operate in a non-linear and/or chaotic manner. In a non-limiting example, a photonic PUF is manufactured by creating a microcavity in a material, such as silicon; the microcavity may be formed with a chamfer. The microcavity may be formed, as a non-limiting example, with a diameter on the order of tens of micrometers; for instance, the microcavity may have a 30-micrometer diameter in an exemplary embodiment. Chamfer size and position may be varied between microcavities; arbitrarily positioned holes may be formed in an interior surface of one or more microcavities to induce irregularities; further irregularities may be introduced as an inevitable result of limits on manufacturing consistency. Irregularities may create variable reflective and/or refractive responses to a pulse of light, which may include, as a non-limiting example, a pulse in the femtosecond to attosecond range, such as, for illustrative purposes only, a 175-femtosecond pulse from a mode-locked laser having a 90-MHz repetition rate. Fabrication may include incorporation of the light source. In operation, optical output waveforms may also be complex and highly sensitive to precise physical cavity structure; at the same time responses may remain highly repeatable. Continuing the example, ultrashort optical pulses (e.g. in the femtosecond to attosecond region) may be used to probe microcavities; the pulses may excite a unique combination of spatial optical modes that may interact with fine-scale structure of cavity interiors and with one another through optical nonlinearity of silicon. Each sequence of optical responses may contain spatiotemporal features that are extremely sensitive to cavity structures. It may be possible to extract long binary keys, including keys on the order of gigabytes, from a single micro-cavity PUF. Alternative or additional non-linear photonic devices may be used to implement a photonic PUF. Further viewingFIG.1, other examples of PUF124that may be used may include, without limitation, nano-electromechanical (NEM) PUFs. NEM PUFs may include PUFs that leverage stiction of a silicon nanowire to a binary gate structure.
In an embodiment, an NEM PUF system may be highly robust; as a non-limiting example, NEM PUF may work effectively across a wide range of environmental conditions, including without limitation thermal variation, exposure to microwave radiation, and exposure to high dose radiation at various frequencies. Additional methods for PUF implementation may include, without limitation Kirchoff-law-Johnson-noise (KLJN) PUFs, which may use KLJN key exchange to generate, between two hardware components, a new and manufacturer-unknown secret key which may be stored locally in, for instance, secure hash memory. Still referring toFIG.1, in an embodiment, one or more bits may be output directly from the PUF124and/or TPM120; such outputs may be used to generate symmetric or asymmetric keys, private keys, zero-knowledge proofs, or other proofs of authenticity, as described in further detail below. Continuing to refer toFIG.1, secure computing module116may implement one or more secure memory storage protocols. One or more secure memory storage protocols may be protocols designed to prevent unauthorized access to memory and/or to protect secure computing module116from attacks compromising memory; secure memory storage protocols may prevent, as a non-limiting example, compromise of memory used for computation. In an embodiment, one or more memory elements may be located within a trusted computing boundary (TCB); TCB may be a boundary within which it is physically, information-theoretically, or computationally infeasible for exterior computing elements to probe, manipulate, access, or otherwise interact with elements under control of or incorporated in secure computing module116. For instance, and without limitation, it may be infeasible to physically probe the memory or access the memory from other software elements. In some embodiments, one or more memory elements may be located outside of trusted computing boundary. In some embodiments, a memory interface uses algorithmic techniques to randomize memory access patterns, for instance using obfuscated access, oblivious RAM, or ORAM. Such algorithmic techniques may implement one or more randomization techniques. In an embodiment, when crossing a trusted computing boundary, a memory interface data bus may be encrypted; that is data passed to the memory interface data bus may be encrypted using any hardware or software based encryption techniques discussed in this disclosure. In an embodiment, secure computing module116may incorporate a memory controller located within the trusted computing boundary to encrypt and authenticate by a secret key memory elements such as without limitation memory page tables and/or memory pages accessible by other software elements, such as an operating system. Various techniques, processes, means or elements may be used to implement the above-described secure memory protocols. For instance, secure computing module116may use hardware-enabled access control to protect memory access; hardware access control may, as a non-limiting example, be performed by tagging each memory entry with a “container identifier” corresponding to a page, file, or other grouping of memory, enabling secure computing module116to determine whether tampering has occurred. 
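As a purely illustrative sketch of the memory-encryption and tagging ideas described above, the following uses an authenticated cipher to encrypt a memory page and bind it to a container identifier before it crosses the trusted computing boundary; it relies on the third-party "cryptography" Python package, and the key handling, identifiers, and page layout are assumptions rather than the design of any particular hardware memory controller.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_page(key, page_bytes, container_id, page_number):
    # encrypt the page and bind it to its container identifier and page number,
    # which travel as authenticated associated data
    aead = AESGCM(key)
    nonce = os.urandom(12)
    aad = container_id + page_number.to_bytes(8, "big")
    return nonce, aead.encrypt(nonce, page_bytes, aad)

def load_page(key, nonce, ciphertext, container_id, page_number):
    # decryption raises InvalidTag if the page or its container binding changed
    aead = AESGCM(key)
    aad = container_id + page_number.to_bytes(8, "big")
    return aead.decrypt(nonce, ciphertext, aad)

key = AESGCM.generate_key(bit_length=256)
nonce, ciphertext = protect_page(key, b"\x00" * 4096, b"enclave-A", 7)
assert load_page(key, nonce, ciphertext, b"enclave-A", 7) == b"\x00" * 4096

Because the container identifier and page number are authenticated rather than encrypted, an attempt to replay the ciphertext under a different container or page is detected at decryption, which is the behavior the tagging scheme above is intended to provide.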
Secure computing module116may perform one or more safe-sharing protocols for hardware shared with other resources; for instance, where an exception, termination of a programmed process, or other condition causes a secured process to exit, shared registers may be reset to eliminate protected data prior to access by other processes. Secure computing module116may operate using one or more dedicated memory objects, registers, or storage elements; as a non-limiting example, secure computing module116may operate with dedicated cache lines not available to other processes or circuits, preventing, e.g., stack or buffer overrun attacks to corrupt or steal data. Dedicated memory elements may be wired only to secure computing module116; access to dedicated memory elements may be rendered impossible except by way of secure computing module116. Secure computing module116may use one or more order-preserving memory storage protocols to detect "reset attacks" or fraudulent data entries presented out of order; such order-preserving memory storage protocols may include, without limitation, Merkle trees or other hash trees in which each new entry contains a hash of a recently stored data entry and a hash of earlier Merkle tree and/or hash tree entries, rendering false or out-of-order entries computationally infeasible, or any temporally sequential listing as described below, including without limitation blockchains and the like. Secure computing module116may utilize oblivious random access memory (RAM) wherein memory access patterns are obfuscated to prevent detection of memory access patterns by outside observers attempting to deduce execution details regarding processes performed using secure computing module116. Secure computing module116and/or a device incorporating secure computing module116may incorporate a trusted non-volatile storage device that provides some means of verification of secure storage capability and other properties. Memory protocols as described above may be used to implement methods of attested storage and the chain of trust beginning at the PUF124level up through processor, memory and code. Such mechanisms may be used to secure long-term storage (e.g. SSDs, spinning disks, tape, other), RAM, or other memory storage facilities. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various ways in which memory storage, securing, encryption, measuring, and attesting techniques as disclosed herein may be implemented and/or utilized by or with secure computing module116. Still referring toFIG.1, secure computing module116may include a secure processor. Secure processor may be a processor as described below in reference toFIG.5. Secure processor may operate autonomously from other processors and/or an operating system operating on at least a cryptographic evaluator; for instance, secure processor may store entries in temporary or long-term memory in encrypted form, where decryption is impossible without private keys not available to devices, circuits or software besides secure processor. Encryption may likewise be impossible without private keys available only to secure processor. Secure processor may also digitally sign memory entries using, for instance, a private key available only to secure processor.
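Returning to the order-preserving memory storage protocols described above, the following minimal sketch chains each stored entry to a hash of the previous entry so that out-of-order, removed, or replayed entries are detectable; the record layout and field names are illustrative assumptions rather than a specification of the hash trees or temporally sequential listings described herein.

import hashlib
import json

def append_entry(log, payload):
    # each new entry commits to the hash of the most recent entry
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"index": len(log), "prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_log(log):
    # recompute every hash; any reordering, removal, or replay breaks the chain
    prev_hash = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("index", "prev", "payload")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "write page 12")
append_entry(log, "write page 13")
assert verify_log(log)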
Keys available only to secure processor may include keys directly encoded in hardware of the secure processor; i.e., a process to digitally sign and/or encrypt using keys may be coded using logic circuits, field-programmable arrays, read-only memory, burning into memory using one-time programmable polysilicon fuses, or the like, and thus be immutable absent physical changes to secure processor. Secure processor may be constructed, similarly to TPM120, to frustrate alteration and/or probing to discover and/or alter private keys. Private keys may be demonstrable as uniquely associated with secure processor by use of PUF124as described above; secure processor may include, for instance, a TPM120as described above. Alternatively or additionally, a certificate authority as described above, which may be a manufacturer of secure processor, may verify that one or more public keys are associated uniquely with secure processor according to any protocol suitable for digital certificates. With continued reference toFIG.1, secure computing module116may implement one or more methods of attested computation. Attested computation may include or involve one or more methods to ensure that computation of a program, known as an attested program, is trusted and signed by secure computing module116and/or computing device incorporating secure computing module116; this may be supported by means to assert the state of the system memory, code, and input data. In an embodiment, secure computing module116and/or a computing device incorporating secure computing module116computes a cryptographic hash of a system state when performing a trusted computation. System state may include, without limitation, program code and/or one or more elements of data being computed. A resulting cryptographic hash of system state may be stored in one or more trusted or secured memories as described above. Secure computing module116and/or computing device incorporating secure computing module116may append a cryptographic signature based upon any private key that may be associated with secure computing module116as described herein. Secure computing module116and/or computing device incorporating secure computing module116may operate a security reset of working memory prior to load of data for trusted computation; for instance, the secure computing module116and/or computing device incorporating secure computing module116may append a hash of the memory to cryptographic hash of system state following reset and prior to loading data. Secure computing module116and/or computing device incorporating secure computing module116may append its authentication signature of memory page tables and/or memory tables. Upon completion of the trusted computation, which may include execution of program code of system state, secure computing module116and/or computing device incorporating secure computing module116may append an output value of the trusted computation to cryptographic hash of system state. In an embodiment, an output value of the trusted computation may itself be cryptographically hashed and/or encrypted; encryption may be performed using any form of hardware or software based encryption that may be associated with secure computing module116. 
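The attested-computation flow described above may be sketched, in simplified form, as hashing program code and inputs, folding the output into the measurement, and signing the result with a device key; the sketch below uses the third-party "cryptography" package for Ed25519 signatures, and the ordering of the appended fields is an illustrative assumption rather than a required protocol.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def attested_run(device_key, program_code, program, input_bytes):
    state = hashlib.sha256()
    state.update(program_code)      # measure the program code
    state.update(input_bytes)       # measure the input data
    output = program(input_bytes)   # perform the trusted computation
    state.update(output)            # fold the output value into the measurement
    digest = state.digest()
    return output, digest, device_key.sign(digest)

device_key = Ed25519PrivateKey.generate()
out, digest, signature = attested_run(
    device_key, b"def f(b): return b.upper()", lambda b: b.upper(), b"input data")
# a verifier holding the device public key checks the signed measurement
device_key.public_key().verify(signature, digest)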
Secure computing module116and/or computing device incorporating secure computing module116may include a system to compute one or more hash trees of cryptographic hash of the computation, system state, and/or outputs; secure computing module116and/or computing device incorporating secure computing module116may store the one or more hash trees within the trusted computation boundary. Hash trees may be appended to the trusted computation hash. Any process steps or components described above as performing trusted and/or attested computing may be performed or omitted in any order or combination as will be apparent to those skilled in the art, upon reading the entirety of this disclosure; for instance, order of appending data may be done in any combination. Still viewingFIG.1, in an embodiment, a non-secure processor and/or secure computing module116initiate a trusted protocol stack upon startup. For instance, and without limitation, selection device104and/or secure computing module116may implement a secure boot and/or attested boot protocol. In an embodiment, a basic input/output system (BIOS) that initiates upon startup of selection device104may compute a cryptographic hash of a boot loader of an operating system running on selection device104; cryptographic hash may include boot drivers of one or more processes that initiate when selection device104starts up. Secure computing module116may then digitally sign cryptographic hash; cryptographic hash with or without digital signature, may be stored in memory. Selection device104may subsequently refuse to load any process that is not also signed with digital signature; this may in turn be used to perform attested computing procedures as described above. Continuing to refer toFIG.1, selection device104may implement at least a software monitor to enforce security invariants, and protected memory primitives, which may be referred to herein as enclaves. As used herein, a software monitor is a software component that operates in highest privilege mode of the processor, such as without limitation machine mode in the non-limiting example of the RISC-V processor ISA and may have exclusive access to a portion of memory, e.g. DRAM. The software monitor may check allocation decisions of software operating on selection device104and or a plurality of processors and/or computing devices making up a secure enclave for correctness and commit them into hardware configuration registers. Such software may include without limitation operating system, kernel, hypervisor, and/or guest OS. In this nomenclature, an operating system handles scheduling and demand paging, and a hypervisor may multiplex CPU cores of selection device104or devices. In a representative embodiment, software monitor may intermediate untrusted system software handling of isolated machine resources. Software monitor may verify decisions made by software operating on selection device104and/or devices for any events that may cause change in the protection domain/privilege mode of the selection device104and/or devices, including without limitation interrupts and fault handling, and may configure low level hardware resources when in at least a particular privilege mode. Hardware resources may include, without limitation, memory, such as physical memory pages, cache lines, processor cores that include all microarchitectural state, L1 cache and register files, and other resources. Software monitor may consider isolated protection domains including the monitor itself, enclaves, and untrusted software. 
Software monitor may ensure that resource allocation for one protection domain may not be modified by any other domain. Still referring toFIG.1, software monitor may be implemented in microcode, operate in the highest privilege level (e.g. machine mode in a RISC-V processor), be implemented in hard coded logic, reconfigurable logic with protections on reconfiguration, or any combination of the foregoing. As a non-limiting example, software monitor may be invoked when software is executed in a secure enclave, and handle context switches between secure enclave mode, to and from less privileged mode(s). Software monitor may receive interrupt requests when operating a secure enclave operation, exit enclave operation, including flushing of state and, in an example, parking of enclave execution, and delegate the interrupt back to the operating system. Software monitor may intermediate handling of machine resources analogous to system calls in a typical OS. Software monitor may be conceived of as a state machine having states that may, as a non-limiting example, implement steps as follows: Software monitor may receive an event and authenticate a caller of the event; this may lead to three possibilities: (1) If caller is an OS interrupt and a secure enclave isn't operating, then the OS may receive the event; (2) If caller is an enclave interrupt and the enclave has the handler, then the enclave may receive the event; otherwise, the enclave may asynchronously exit, meaning the enclave cleans sensitive processor state, may park the enclave state in protected memory, and may delegate the event to the OS; (3) If event is a monitor call, and caller is authorized, then the request may be validated. If the request is concurrent, it may be handled; if it is invalid, it is thrown out and the caller may be flagged as potentially malicious; if it is valid, and no concurrent operations are happening, the monitor may proceed to change state cleanly (e.g., clean sensitive processor state and then switch privilege modes). Continuing to refer toFIG.1, to ensure protection domains are enforced, software monitor may enforce resource state transitions, which may occur in a non-limiting example as follows: if a resource requested is owned by the owner (current user) or the software monitor itself, the resource may be blocked. A requesting OS may demand the resource, in which case the sensitive processor state may be cleaned, and the resource made available; finally the OS may grant the resource to a new owner. Software monitor may include a map of resource to owner, and a lock on each resource. These resource metadata may be pre-allocated to the monitor's binary image in the case of statically partitioned resources such as cores and cache partitions. Software monitor may contain a cryptographic measurement (e.g. a hash) of certificates, keys, and of at least a first enclave. In an embodiment, software monitor may include an associated base address/address mask pair register in hardware that protects the location of the software monitor in memory space from corruption, bitmapped protected memory provisions, and the creation of page tables for each enclave within protected memory.
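The event-handling cases enumerated above may be restated schematically as a small dispatcher; the class and function names below are illustrative assumptions introduced only for this sketch and are not drawn from any particular software monitor implementation.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str                        # "os_interrupt", "enclave_interrupt", "monitor_call"
    caller_authorized: bool = True
    valid: bool = True
    enclave_has_handler: bool = False

def dispatch(event, enclave_running):
    if event.kind == "os_interrupt" and not enclave_running:
        return "deliver_to_os"
    if event.kind == "enclave_interrupt" and enclave_running:
        if event.enclave_has_handler:
            return "deliver_to_enclave"
        # asynchronous exit: clean sensitive state, park the enclave, delegate to OS
        return "async_exit_then_deliver_to_os"
    if event.kind == "monitor_call":
        if not event.caller_authorized or not event.valid:
            return "reject_and_flag_caller"
        # clean sensitive processor state, then switch privilege modes
        return "validated_state_change"
    return "deliver_to_os"

print(dispatch(Event("enclave_interrupt"), enclave_running=True))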
A secure boot and/or attested boot process may be used to achieve trustworthiness of the software monitor; for instance, selection device104may execute a chain of attested boot upon reset to prove that the software monitor has not been tampered with and that the at least a first enclave, referred to below as the signing enclave, is correctly constructed, such that code executed within the enclave may be considered trusted. Reset may occur on startup, restart, and/or upon a hard or soft reset of selection device104. Continuing to viewFIG.1, a non-limiting example illustrating an attested boot sequence in a processor with at least one core is presented; this example is provided for expository purposes, and implementation of attested boot and related secure programming using selection device104and/or secure computing module116may be performed according to any processes and/or procedures that may occur to persons skilled in the art upon reviewing the entirety of this disclosure. The example may operate according to an assumption that selection device104possesses a device-specific secret, such as without limitation a cryptographic key pair, that has been signed by a manufacturer of secure computing module116, selection device104and/or other component or module described herein, such that one may evaluate the authenticity of the device by proof of possession of a valid signature; a device-specific secret has been signed by a manufacturer, as used herein, where the manufacturer, or a device operated by the manufacturer, signs a verification datum, such as a public key, generated using the device-specific secret. Digital signature of manufacturer may be any digital signature as described above. As a result, a verification datum signed by manufacturer may be linked to secure proofs generated by device identifier using device-specific secret, such that manufacturer signature identifies secure computing module116. In an embodiment, link of the manufacturer signature to device-specific secret may be used to verify authenticity of the software monitor by authentic signature of the device and cryptographic proof of construction of the software monitor. Still viewingFIG.1, in an embodiment a first core of a processor may be initialized; other cores may wait on interrupt from the first core. In an exemplary sequence, upon initialization of a first core, a cryptographic measurement root code may be booted from resistant hardware, such as, without limitation, on-chip read-only memory (ROM), and/or other hardcoded memory or circuitry. Software monitor may subsequently be loaded into memory from at least a non-volatile programmable memory. In an embodiment, all other memory address space may be cleared, zeroed, and/or set to a uniform value to achieve a known initial state. Continuing the illustrative example, secure computing module116and/or a component thereof may generate the device-specific secret; alternatively, a pre-shared secret may be loaded from protected memory, such as without limitation on-chip ROM, XOM, hardcoded circuitry, or the like. Further continuing the illustrative example, software monitor may be processed via a one-way cryptographic hash function as described above; an output of the cryptographic hash function may be input to a key derivation function (KDF) along with device-specific secret, secure proof derived from device-specific secret, and/or verification datum derived from device-specific secret to generate a software monitor public/private key pair.
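The key-derivation step just described may be sketched as hashing the software monitor image and combining it with the device-specific secret through a key derivation function whose output seeds the monitor's key pair; the sketch below uses HKDF and Ed25519 from the third-party "cryptography" package, and the salt and info labels are illustrative assumptions.

import hashlib
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def derive_monitor_keypair(device_secret, monitor_image):
    monitor_hash = hashlib.sha256(monitor_image).digest()   # measure the monitor
    seed = HKDF(algorithm=hashes.SHA256(), length=32,
                salt=monitor_hash, info=b"software-monitor-key").derive(device_secret)
    private_key = Ed25519PrivateKey.from_private_bytes(seed)
    return private_key, private_key.public_key()

device_secret = os.urandom(32)   # stands in for the PUF- or TPM-derived secret
priv, pub = derive_monitor_keypair(device_secret, b"<software monitor binary image>")

Because the derivation is deterministic, the same device secret and the same monitor image always yield the same key pair, while any change to the monitor image yields a different pair, which is what ties the attested key material to the measured monitor.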
Cryptographic measurement root code may configure selection device104to sign software monitor public key and/or hash of the software monitor using device private key, and/or to cause device identifier to create a secure proof signing software monitor public key and/or hash of software monitor, establishing an attestation certificate of the software monitor. As noted above, measurement root may include dedicated circuitry that configures a computing device and/or secure computing module116to check the authenticity of the software monitor; for instance, the measurement root may generate at least a first attestation key pair and sign the software monitor's public key with the processor's key system as described above. Still referring toFIG.1, examples of secure computing module116may include, without limitation, a TPM120as described above. Secure computing module116may include a TPM120combined with a boot-measuring protocol using hash trees, Merkle trees, or the like to measure boot entries to create an "attested boot." Secure computing module116may include a trusted execution technology (TXT) module combining TPM120with establishment of a secure container at run-time; secure container may be isolated from a software stack and OS of at least a temporal attester104and/or use TPM120to measure and attest to secure container prior to launch. Secure computing module116may include execute-only memory (XOM). Secure computing module116may include an Aegis processor. Secure computing module116may include a Bastion processor. Secure computing module116may implement a trusted enclave, also known as a trusted execution environment (TEE). In an embodiment, a trusted enclave may be a portion of a computing device that is isolated from the main processor of the computing device. Isolation may be achieved using elements of secure computing module108as described above, including isolation of memory. Isolation of memory may be achieved through any process or architecture as described above for secure memory, including encryption using a cryptographic system with a decryption and/or encryption key to which a secure processor or TPM has access, but to which a CPU or other main processor, as well as input/output devices or connections, does not, and/or use of dedicated cache lines or the like to physically separate memory accessible to secure computing module116from CPU and/or input/output devices or connections. Inputs and outputs to and from trusted enclave may be restricted and controlled tightly by a secure processor and/or TPM as described above. Trusted enclave may perform trusted and/or attested computing protocols as described above, including without limitation attested boot protocols. Examples of trusted enclaves include without limitation those enabled by SOFTWARE GUARD EXTENSIONS (SGX) systems as promulgated by Intel Corporation of Santa Clara, CA, RISC-V architecture, including without limitation sanctum processors, Ascend secure infrastructure, Ghostrider secure infrastructure, ARM TrustZone, Trusted Little Kernel (TLK) as promulgated by Nvidia Corporation of Santa Clara, CA, and Secure Encrypted Virtualization (SEV) as promulgated by Advanced Micro Devices, Inc. of Santa Clara, CA, and/or any other suitable architecture. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various additional or alternative trusted computing processes that may be used to implement secure computing module116, TEE, or trusted enclaves as disclosed herein.
System100may incorporate or communicate with a certificate authority, which may include any certificate authority and/or version thereof as described in this disclosure. Referring now toFIG.2, system100may be used to perform one or more processing steps necessary to create, maintain, and/or authenticate a digitally signed assertion200. In one embodiment, at least a digitally signed assertion200is a collection of textual data signed using a secure proof as described in further detail below; secure proof may include, without limitation, a digital signature as described above. Collection of textual data may contain any textual data, including without limitation American Standard Code for Information Interchange (ASCII), Unicode, or similar computer-encoded textual data, any alphanumeric data, punctuation, diacritical mark, or any character or other marking used in any writing system to convey information, in any form, including any plaintext or cyphertext data; in an embodiment, collection of textual data may be encrypted, or may be a hash of other data, such as a root or node of a Merkle tree or hash tree, or a hash of any other information desired to be recorded in some fashion using a digitally signed assertion200. In an embodiment, collection of textual data states that the owner of a certain transferable item represented in the at least a digitally signed assertion200register is transferring that item to the owner of an address. At least a digitally signed assertion200may be signed by a digital signature created using the private key associated with the owner's public key, as described above. For instance, at least a digitally signed assertion200may describe a transfer of virtual currency, such as crypto-currency as described below. The virtual currency may be a digital currency. Item of value may be a transfer of trust, for instance represented by a statement vouching for the identity or trustworthiness of the first entity. Item of value may be an interest in a fungible negotiable financial instrument representing ownership in a public or private corporation, a creditor relationship with a governmental body or a corporation, rights to ownership represented by an option, derivative financial instrument, commodity, debt-backed security such as a bond or debenture or other security as described in further detail below. At least a digitally signed assertion200may describe the transfer of a physical good; for instance, at least a digitally signed assertion200may describe the sale of a product. In some embodiments, a transfer nominally of one item may be used to represent a transfer of another item; for instance, a transfer of virtual currency may be interpreted as representing a transfer of an access right; conversely, where the item nominally transferred is something other than virtual currency, the transfer itself may still be treated as a transfer of virtual currency, having value that depends on many potential factors including the value of the item nominally transferred and the monetary value attendant to having the output of the transfer moved into a particular user's control. 
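Referring back to the signing of at least a digitally signed assertion200described above, a minimal sketch of producing and authenticating such an assertion as a signed collection of textual data follows; it uses Ed25519 from the third-party "cryptography" package, and the record content is an illustrative assumption.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

owner_key = Ed25519PrivateKey.generate()
assertion_text = b"transfer item 42 to address 8f3a..."   # collection of textual data
signature = owner_key.sign(assertion_text)                # secure proof (digital signature)

# any party holding the owner's public key may authenticate the assertion;
# verification raises an exception if the text or signature was altered
owner_key.public_key().verify(signature, assertion_text)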
The item of value may be associated with the at least a digitally signed assertion200by means of an exterior protocol, such as the COLORED COINS created according to protocols developed by The Colored Coins Foundation, the MASTERCOIN protocol developed by the Mastercoin Foundation, or the ETHEREUM platform offered by the Stiftung Ethereum Foundation of Baar, Switzerland, the Thunder protocol developed by Thunder Consensus, or any other protocol. Still referring toFIG.2, in one embodiment, an address is a textual datum identifying the recipient of virtual currency or another item of value in at least a digitally signed assertion200. In some embodiments, address is linked to a public key, the corresponding private key of which is owned by the recipient of the at least a digitally signed assertion200. For instance, address may be the public key. Address may be a representation, such as a hash, of the public key. Address may be linked to the public key in memory of a computing device, for instance via a “wallet shortener” protocol. Where address is linked to a public key, a transferee in the at least a digitally signed assertion200may record a subsequent at least a digitally signed assertion200transferring some or all of the value transferred in the first at least a digitally signed assertion200to a new address in the same manner. At least a digitally signed assertion200may contain textual information that is not a transfer of some item of value in addition to, or as an alternative to, such a transfer. For instance, as described in further detail below, at least a digitally signed assertion200may indicate a confidence level associated with a cryptographic evaluator as described in further detail below. With continued reference toFIG.2, at least a digitally signed assertion200may be included in a temporally sequential listing204. Temporally sequential listing204may include any set of data used to record a series of at least a digitally signed assertion200in an inalterable format that permits authentication of such at least a digitally signed assertion200. In some embodiments, temporally sequential listing204records a series of at least a digitally signed assertion200in a way that preserves the order in which the at least a digitally signed assertion200took place. Temporally sequential listing may be accessible at any of various security settings; for instance, and without limitation, temporally sequential listing may be readable and modifiable publicly, may be publicly readable but writable only by entities and/or devices having access privileges established by password protection, confidence level, or any device authentication procedure or facilities described herein, or may be readable and/or writable only by entities and/or devices having such access privileges. Access privileges may exist in more than one level, including, without limitation, a first access level or community of permitted entities and/or devices having ability to read, and a second access level or community of permitted entities and/or devices having ability to write; first and second community may be overlapping or non-overlapping. 
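The linkage of an address to a public key described above may be sketched as hashing the recipient's public key into a short identifier; the choice of SHA-256, raw key encoding, and truncation below are illustrative assumptions rather than any particular address format.

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

recipient_key = Ed25519PrivateKey.generate()
public_bytes = recipient_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw)
# the address is a short, hash-based representation of the recipient's public key
address = hashlib.sha256(public_bytes).hexdigest()[:40]
print(address)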
Still referring toFIG.2, temporally sequential listing204may preserve the order in which the at least a digitally signed assertion200took place by listing them in chronological order; alternatively or additionally, temporally sequential listing204may organize digitally signed assertions200into sub-listings208such as “blocks” in a blockchain, which may be themselves collected in a temporally sequential order; digitally signed assertions200within a sub-listing208may or may not be temporally sequential. In an embodiment, the temporally sequential listing may be a directed acyclic graph (DAG), in which multiple branches may be generated on or by different devices implementing temporally sequential listing204, and branches may be merged into one another, while a hash chain or similar structure ensures that branches cannot go “back in time” whether merged or not; secure timestamps and/or attested time may be further included to impose a temporal order on a DAG or other temporally sequential listing204. The ledger may preserve the order in which at least a digitally signed assertion200took place by listing them in sub-listings208and placing the sub-listings208in chronological order. The temporally sequential listing204may be a distributed, consensus-based ledger, such as those operated according to the protocols promulgated by Ripple Labs, Inc., of San Francisco, CA, or the Stellar Development Foundation, of San Francisco, CA, or of Thunder Consensus. In some embodiments, the ledger is a secured ledger; in one embodiment, a secured ledger is a ledger having safeguards against alteration by unauthorized parties. The ledger may be maintained by a proprietor, such as a system administrator on a server, that controls access to the ledger; for instance, the user account controls may allow contributors to the ledger to add at least a digitally signed assertion200to the ledger, but may not allow any users to alter at least a digitally signed assertion200that have been added to the ledger. In some embodiments, ledger is cryptographically secured; in one embodiment, a ledger is cryptographically secured where each link in the chain contains encrypted or hashed information that makes it practically infeasible to alter the ledger without betraying that alteration has taken place, for instance by requiring that an administrator or other party sign new additions to the chain with a digital signature. Temporally sequential listing204may be incorporated in, stored in, or incorporate, any suitable data structure, including without limitation any database, datastore, file structure, distributed hash table, or the like. In some embodiments, the timestamp of an entry is cryptographically secured and validated via trusted time, either directly on the chain or indirectly by utilizing a separate chain. In one embodiment the validity of timestamp is provided using a time stamping authority as described in the RFC 3161 standard for trusted timestamps, or in the ANSI ASC x9.95 standard. In another embodiment, the trusted time ordering is provided by a group of entities collectively acting as the time stamping authority with a requirement that a threshold number of the group of authorities sign the timestamp. In some embodiments, and with continued reference toFIG.2, temporally sequential listing204, once formed, cannot be altered by any party, no matter what access rights that party possesses. 
For instance, temporally sequential listing204may include a hash chain, in which data is added during a successive hashing process to ensure non-repudiation. Temporally sequential listing204may include a block chain. In one embodiment, a block chain is a temporally sequential listing204that records one or more new at least a digitally signed assertion200in a data item known as a sub-listing208or "block." An example of a block chain is the BITCOIN block chain used to record BITCOIN transactions and values. Sub-listings208may be created in a way that places the sub-listings208in chronological order, and links each sub-listing208to a previous sub-listing208in the chronological order, so that any computing device may traverse the sub-listings208in reverse chronological order to verify any at least a digitally signed assertion200listed in the block chain. Each new sub-listing208may be required to contain a cryptographic hash describing the previous sub-listing208. In some embodiments, the block chain contains a single first sub-listing208sometimes known as a "genesis block." Still referring toFIG.2, the creation of a new sub-listing208may be computationally expensive; for instance, the creation of a new sub-listing208may be designed by a "proof of work" protocol accepted by all participants in forming the temporally sequential listing204to take a powerful set of computing devices a certain period of time to produce. Where one sub-listing208takes less time for a given set of computing devices to produce, the sub-listing208protocol may adjust the algorithm to produce the next sub-listing208so that it will require more steps; where one sub-listing208takes more time for a given set of computing devices to produce, the sub-listing208protocol may adjust the algorithm to produce the next sub-listing208so that it will require fewer steps. As an example, protocol may require a new sub-listing208to contain a cryptographic hash describing its contents; the cryptographic hash may be required to satisfy a mathematical condition, achieved by having the sub-listing208contain a number, called a nonce, whose value is determined after the fact by the discovery of the hash that satisfies the mathematical condition. Continuing the example, the protocol may be able to adjust the mathematical condition so that the discovery of the hash describing a sub-listing208and satisfying the mathematical condition requires more or fewer steps, depending on the outcome of the previous hashing attempt. The mathematical condition, as an example, might be that the hash contains a certain number of leading zeros, with the hashing algorithm requiring more steps to find a hash containing a greater number of leading zeros, and fewer steps to find a hash containing a lesser number of leading zeros. In some embodiments, production of a new sub-listing208according to the protocol is known as "mining." The creation of a new sub-listing208may alternatively be designed by a "proof of stake" protocol as will be apparent to those skilled in the art upon reviewing the entirety of this disclosure. Continuing to refer toFIG.2, in some embodiments, protocol also creates an incentive to mine new sub-listings208. The incentive may be financial; for instance, successfully mining a new sub-listing208may result in the person or entity that mines the sub-listing208receiving a predetermined amount of currency. The currency may be fiat currency. Currency may be crypto-currency as defined below.
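The nonce search just described may be sketched as follows; the difficulty (number of leading zeros) and the block contents are illustrative assumptions chosen so the example completes quickly.

import hashlib

def mine(block_contents, leading_zeros=4):
    # search for a nonce whose hash satisfies the mathematical condition
    target = "0" * leading_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(block_contents + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"previous-hash|assertions|timestamp")
print(nonce, digest)

Raising the number of required leading zeros multiplies the expected number of hashing attempts, which is how the difficulty adjustment described above controls the time a powerful set of computing devices needs to produce a sub-listing.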
In other embodiments, incentive may be redeemed for particular products or services; the incentive may be a gift certificate with a particular business, for instance. In some embodiments, incentive is sufficiently attractive to cause participants to compete for the incentive by trying to race each other to the creation of sub-listings208Each sub-listing208created in temporally sequential listing204may contain a record or at least a digitally signed assertion200describing one or more addresses that receive an incentive, such as virtual currency, as the result of successfully mining the sub-listing208. With continued reference toFIG.2, where two entities simultaneously create new sub-listings208, temporally sequential listing204may develop a fork; protocol may determine which of the two alternate branches in the fork is the valid new portion of the temporally sequential listing204by evaluating, after a certain amount of time has passed, which branch is longer. “Length” may be measured according to the number of sub-listings208in the branch. Length may be measured according to the total computational cost of producing the branch. Protocol may treat only at least a digitally signed assertion200contained the valid branch as valid at least a digitally signed assertion200. When a branch is found invalid according to this protocol, at least a digitally signed assertion200registered in that branch may be recreated in a new sub-listing208in the valid branch; the protocol may reject “double spending” at least a digitally signed assertion200that transfer the same virtual currency that another at least a digitally signed assertion200in the valid branch has already transferred. As a result, in some embodiments the creation of fraudulent at least a digitally signed assertion200requires the creation of a longer temporally sequential listing204branch by the entity attempting the fraudulent at least a digitally signed assertion200than the branch being produced by the rest of the participants; as long as the entity creating the fraudulent at least a digitally signed assertion200is likely the only one with the incentive to create the branch containing the fraudulent at least a digitally signed assertion200, the computational cost of the creation of that branch may be practically infeasible, guaranteeing the validity of all at least a digitally signed assertion200in the temporally sequential listing204. Still referring toFIG.2, additional data linked to at least a digitally signed assertion200may be incorporated in sub-listings208in the temporally sequential listing204; for instance, data may be incorporated in one or more fields recognized by block chain protocols that permit a person or computer forming a at least a digitally signed assertion200to insert additional data in the temporally sequential listing204. In some embodiments, additional data is incorporated in an unspendable at least a digitally signed assertion200field. For instance, the data may be incorporated in an OP_RETURN within the BITCOIN block chain. In other embodiments, additional data is incorporated in one signature of a multi-signature at least a digitally signed assertion200. In an embodiment, a multi-signature at least a digitally signed assertion200is at least a digitally signed assertion200to two or more addresses. In some embodiments, the two or more addresses are hashed together to form a single address, which is signed in the digital signature of the at least a digitally signed assertion200. 
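As an illustrative sketch of the fork-resolution rule described above (not the claimed protocol), the following Python fragment compares two branches either by number of sub-listings or by an assumed per-block difficulty value standing in for total computational cost.

```python
def branch_length(branch: list) -> int:
    """Length measured as the number of sub-listings in the branch."""
    return len(branch)

def branch_work(branch: list) -> int:
    """Length measured as total computational cost, approximated here by summed difficulty."""
    return sum(block.get("difficulty", 0) for block in branch)

def resolve_fork(branch_a: list, branch_b: list, by_work: bool = False) -> list:
    """Return the branch treated as valid; assertions on the losing branch
    would be re-entered in new sub-listings on the winning branch."""
    metric = branch_work if by_work else branch_length
    return branch_a if metric(branch_a) >= metric(branch_b) else branch_b

fork_a = [{"difficulty": 4}, {"difficulty": 4}, {"difficulty": 4}]
fork_b = [{"difficulty": 4}, {"difficulty": 10}]
print(len(resolve_fork(fork_a, fork_b)))                 # longer branch wins: 3
print(len(resolve_fork(fork_a, fork_b, by_work=True)))   # higher-work branch wins: 2
```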
In other embodiments, the two or more addresses are concatenated. In some embodiments, two or more addresses may be combined by a more complicated process, such as the creation of a Merkle tree or the like. In some embodiments, one or more addresses incorporated in the multi-signature at least a digitally signed assertion200are typical crypto-currency addresses, such as addresses linked to public keys as described above, while one or more additional addresses in the multi-signature at least a digitally signed assertion200contain additional data related to the at least a digitally signed assertion200; for instance, the additional data may indicate the purpose of the at least a digitally signed assertion200, aside from an exchange of virtual currency, such as the item for which the virtual currency was exchanged. In some embodiments, additional information may include network statistics for a given node of network, such as a cryptographic evaluator, e.g. the latencies to nearest neighbors in a network graph, the identities or identifying information of neighboring nodes in the network graph, the trust level and/or mechanisms of trust (e.g. certificates of physical encryption keys, certificates of software encryption keys, (in non-limiting example certificates of software encryption may indicate the firmware version, manufacturer, hardware version and the like), certificates from a trusted third party, certificates from a decentralized anonymous authentication procedure, and other information quantifying the trusted status of the cryptographic evaluator) of neighboring nodes in the network graph, IP addresses, GPS coordinates, and other information informing location of the node and/or neighboring nodes, geographically and/or within the network graph. In some embodiments, additional information may include history and/or statistics of neighboring nodes with which the node has interacted. In some embodiments, this additional information may be encoded directly, via a hash, hash tree or other encoding. With continued reference toFIG.2, in some embodiments, virtual currency is traded as a crypto-currency. In one embodiment, a crypto-currency is a digital, currency such as Bitcoins, Peercoins, Namecoins, and Litecoins. Crypto-currency may be a clone of another crypto-currency. The crypto-currency may be an “alt-coin.” Crypto-currency may be decentralized, with no particular entity controlling it; the integrity of the crypto-currency may be maintained by adherence by its participants to established protocols for exchange and for production of new currency, which may be enforced by software implementing the crypto-currency. Crypto-currency may be centralized, with its protocols enforced or hosted by a particular entity. For instance, crypto-currency may be maintained in a centralized ledger, as in the case of the XRP currency of Ripple Labs, Inc., of San Francisco, CA In lieu of a centrally controlling authority, such as a national bank, to manage currency values, the number of units of a particular crypto-currency may be limited; the rate at which units of crypto-currency enter the market may be managed by a mutually agreed-upon process, such as creating new units of currency when mathematical puzzles are solved, the degree of difficulty of the puzzles being adjustable to control the rate at which new units enter the market. 
Mathematical puzzles may be the same as the algorithms used to make productions of sub-listings208in a block chain computationally challenging; the incentive for producing sub-listings208may include the grant of new crypto-currency to the miners. Quantities of crypto-currency may be exchanged using at least a digitally signed assertion200as described above. Still referring toFIG.2, at least a digitally signed assertion200may be included data structures or memory elements besides a temporally sequential file, including without limitation any temporary or persistent memory as used in or by any computing device as described below in reference toFIG.5. For example, and without limitation, at least a digitally signed assertion200may include one or more encrypted or otherwise secured or partitioned memory entries as entered for instance using a secure computing module116or according to a secure computing protocol as described in further detail below. Referring again toFIG.1, in some embodiments, secure computing module116and/or cryptographic evaluator may integrate a precision clock reference for determination of locations and latencies of nodes in the network graph. In non-limiting example, the precision clock reference may be a cesium- or rubidium-based atomic clock, active hydrogen maser, GPS disciplined oscillator, precision crystal oscillator, SAW oscillator, quartz oscillator or related that provides microsecond or better timing accuracy. In some embodiments, precision time may be used to establish physical distance by inference from latency statistics of nodes in the network, whether using probabilistic, Bayesian or other statistical methods, machine learning classifiers or other. In some embodiments, changes in inferred physical distance or latency between nodes in the graph may be used to flag potentially compromised secure computing module116s, man in the middle or other attacks. Referring now toFIG.3, an exemplary embodiment of a method300of selecting a distributed framework is illustrated. At step305, selection device104identifies at least a first cryptographic evaluator of a plurality of cryptographic evaluators112. Identifying may include, as a non-limiting example, comparing at least a datum received as an identifier from at least a first cryptographic evaluator to one or more stored values; one or more stored values may be stored in a temporally sequential listing as described above. One or more stored values may be stored in a database or other data structure. Identifying may include comparison of a digitally signed assertion and/or secure proof, as described in further detail below, in a temporally sequential listing or other data structure to a digitally signed assertion and/or secure proof received from at least a first cryptographic evaluator. Still referring toFIG.3, identifying the at least a first cryptographic evaluator may include evaluating a secure proof generated by the at least a first cryptographic evaluator and identifying the at least a first cryptographic evaluator as a function of the secure proof. Secure proof may include any secure proof as described above including without limitation a secure proof demonstrating possession of a secret stored in or produced by secure computing module116and/or PUF124. 
Where at least a secret is a plurality of secrets, such as a plurality of challenge-response pairs, a secure proof may include an output that reveals the entirety of one of the plurality of secrets, but not all of the plurality of secrets; for instance, secure proof may be a response contained in one challenge-response pair. In an embodiment, proof may not be secure; in other words, proof may include a one-time revelation of at least a secret, for instance as used in a single challenge-response exchange. With continued reference toFIG.3, secure proof may include a digital signature. In an embodiment, digital signature may be any digital signature as described above; digital signature may be created by signing a mathematical representation of first dataset. In an embodiment, at least a first cryptographic evaluator may generate a key to be used in producing digital signature using secure computing module116. A single key may be used in one or more digital signatures, such as signatures used to receive and/or transfer possession of crypto-currency assets; the key may be discarded for future use after a set period of time. In an embodiment, varied inputs including variations in local physical parameters, such as fluctuations in local electromagnetic fields, radiation, temperature, and the like may be combined with key-generation circuits or methods, such that an almost limitless variety of private keys may be so generated. In an embodiment, at least a first cryptographic evaluator and/or secure computing module116may convert immediate output from PUT124into key in the form of a binary number. This may be performed, without limitation, using a fuzzy extractor, such as those used to convert slightly variable signals from biometric samples or the like predictably into keys by having certain variation tolerances in the binary encoding process. Private key extraction may utilize additional corrective measures, including as a nonlimiting example machine learning, neural networks, convolutional neural networks and the like, or other approaches to provide error correction over the operating temperature range of the device, to ensure consistency in key extraction. Private key generation may alternatively or additionally incorporate true random number generator(s) (TRNGs), pseudorandom number generators (PRNGs) and related devices. Extraction may include extraction of a symmetric key; for instance, at least a first cryptographic evaluator and/or secure computing module116may extract one or more random numbers based on a PUF124output to create a symmetric key as described above. Alternatively or additionally, extraction may include extraction of a private key of a public key cryptographic system. Still referring toFIG.3, key extraction may include use of a number output by a PUF124or other circuit to generate a public and private key pair. For instance, such a number output may be used as a seed in an elliptic curve cryptographic system. In a non-limiting example, output may include a random number generated within a desired interval, which may be achieved, for instance, by setting the number of output bits to be provided from a PUF124; steps along a chosen elliptic curve may then be performed using random number to generate a public key. 
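The following Python fragment is a loose, non-limiting sketch of the idea that a noisy, device-specific output can be stabilized and converted into a repeatable symmetric key; a simple per-bit majority vote stands in for the fuzzy extractor or other error-correction machinery described above, and the sample readings are hypothetical.

```python
import hashlib
from collections import Counter

def stabilize(readings: list) -> str:
    """Majority-vote each bit position across repeated noisy readings of the same response."""
    length = len(readings[0])
    bits = []
    for i in range(length):
        counts = Counter(reading[i] for reading in readings)
        bits.append(counts.most_common(1)[0][0])
    return "".join(bits)

def derive_symmetric_key(stable_bits: str) -> bytes:
    """Hash the stabilized response into a fixed-length symmetric key."""
    return hashlib.sha256(stable_bits.encode("utf-8")).digest()

# Three noisy readings of the same (hypothetical) PUF-style response, differing in a few bits.
readings = [
    "1011001110100101",
    "1011001010100101",
    "1011001110100111",
]
key = derive_symmetric_key(stabilize(readings))
print(key.hex())
```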
Initial point on elliptic curve and elliptic curve may be selected using a additional random numbers, which may be generated using any suitable method; random numbers associated with curves having known vulnerabilities may be discarded, according to mathematical descriptors or other characteristics of such vulnerabilities as stored in memory of or accessible to at least a first cryptographic evaluator and/or secure computing module116. Persons skilled in the art, upon reading the entirety of this disclosure, will be aware of various ways in which a random number may be used to generate a private and public key pair consistently with this disclosure. Still viewingFIG.3, key extraction may utilize a numerical output from a PUF124or other element of secure computing module116to generate an RSA private key; this may be accomplished, for instance, by using numerical outputs to generate RSA primes. RSA primes may be generated, as a general matter, by obtaining a random or pseudorandom odd number, checking whether that number is prime, and if it is not, repeatedly incrementing by 2, or some other amount leading to additional odd numbers, and rechecking until a prime is discovered. PUF124and/or elements of secure computing module116may generate one or more random numbers, for instance by using one or more PUFs as described above; any suitable algorithm may be used for generating a prime from a random number to produce pairs of primes usable as RSA factors. Random numbers below a threshold size may be discarded, and other filtering processes may be employed to discard potentially insecure prime factors. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of many suitable methods for creating RSA primes, and using such primes to generate RSA keys, using random numbers output by PUFs or other elements. Keys may be used to generate exponents for a cryptographic system such as Diffie-Helman or ElGamal that are based on the discrete logarithm problem. Continuing to viewFIG.3, digital signature may be generated using a digital signature using a direct anonymous authentication protocol (DAA). In an embodiment, DAA is an anonymous digital signature scheme, which instead of reliance on a certificate authority to link a particular private key to a particular party, uses reference to a group public key or to multiple public keys to verify an anonymous signature. Secure computing module116may act as a “first signer” of a digital signature, signing with a private key produced from a secret generator module as described above, which may be a group key. In an embodiment Secure computing module116signs an element of data using the private key. A second signer, which may include a manufacturer device or another device endorsing key and/or secret used for first signing may previously or subsequently sign the element of data and/or a verification datum associated with the secure proof and/or digital signature used for first signing; alternatively or additionally, second signer may use information or signature elements provided by Secure computing module116to perform a digital signature. This process may, for instance, enable generation of novel secret keys using additional circuitry, to demonstrate, for instance, timeliness of execution and frustrate delay-based attacks. 
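As a hedged sketch of the RSA-prime procedure described above (obtain a random or seed-derived odd number, test for primality, and step by 2 until a prime is found), the following self-contained Python fragment uses a standard Miller-Rabin test; the seed is drawn from the operating system here purely for illustration, whereas the described scheme would derive it from a PUF or other secure-module output.

```python
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_from_seed(seed: int, bits: int = 512) -> int:
    """Force the candidate into range, make it odd, then step by 2 until a prime is found."""
    candidate = seed | (1 << (bits - 1)) | 1
    while not is_probable_prime(candidate):
        candidate += 2
    return candidate

seed = secrets.randbits(512)   # in the described scheme this would come from a PUF output
p = prime_from_seed(seed)
print(p.bit_length())
```

Small candidates and candidates with known weaknesses would additionally be filtered out, as noted above; that filtering is omitted from this sketch.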
DAA may perform digital signature using a zero-knowledge proof; for instance, any non-interactive zero-knowledge proof and/or zero-knowledge proof that may be made non-interactive may be used to generate digital signature, where signature may be, for instance, the proof algorithm output, while verification program, trusted setups where needed, or the like may be publicly available for evaluation of digital signature, i.e. of the proof. Similar processes may be performed, such as without limitation Intel EPID. Where a manufacturer or other device signs group public key and/or verification datum, such signature may be provided, distributed to one or more verifying nodes, or the like. Still referring toFIG.3, secure proof may include be generated using a physically unclonable function. For instance, and without limitation, an output of a PUF124may be used to generate a private key for a digital signature as described above. Alternatively or additionally, a PUF124output may constitute a secret to be used as a basis for a zero-knowledge proof, which may be any zero-knowledge proof as described herein. Still referring toFIG.3, secure computing module116and/or at least a first cryptographic evaluator may generate one or more elements of additional information that user or device may use to evaluate secure proof. For instance, secure computing module116and/or at least a first cryptographic evaluator may generate a public key; public key may be provided automatically to any querying device. Alternatively or additionally, public key may be provided to a manufacturer of secure computing module116, permitting manufacturer to act as a certificate authority for secure computing module116. Similarly, secure computing module116and/or at least a first cryptographic evaluator may generate data necessary to perform verification of a zero-knowledge proof by any verifier as described above. With continued reference toFIG.3, evaluating the secure proof may include receiving a verification datum corresponding to secure proof and evaluating the secure proof as a function of the verification datum. Verification datum, as used herein, is any datum that may be used to aid in evaluation of secure proof; for instance, where secure proof includes a digital signature generated using a private key of a public key cryptographic system, verification datum may include a corresponding public key. Similarly, where secure proof includes a zero-knowledge proof, verification datum may include verification data useable to verify zero-knowledge proof. In an embodiment, and still viewingFIG.3, identifying the at least a first cryptographic evaluator may include identifying a first cryptographic evaluator using a first identification protocol and identifying a second cryptographic evaluator using a second identification protocol, wherein the first identification protocol is distinct from the second identification protocol. As a non-limiting example, a first cryptographic evaluator of at least a first cryptographic evaluator may be identified using a TTP protocol, while a second may be identified using a DAA protocol. As a further example, a first cryptographic evaluator may be identified using a first version of a secure computing module116incorporated in the first cryptographic evaluator, while a second cryptographic evaluator may be identified using a second version of a secure computing module116; the first version may, for instance, be a GhostRider implementation while the second is an SGX implementation, or the like. 
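The following sketch illustrates, under stated assumptions, the "verification datum" pattern for the digital-signature case: the prover signs an assertion with a private key and the verifier evaluates the proof against the corresponding public key. It assumes the third-party Python "cryptography" package is available and uses Ed25519 only as an example; it is not the claimed proof system, which may equally be a zero-knowledge proof or DAA scheme.

```python
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The prover (e.g. a secure computing module) holds the private key and signs an assertion.
private_key = Ed25519PrivateKey.generate()
assertion = b"device 42 attests to firmware version 1.2.3"
signature = private_key.sign(assertion)

# The verification datum shared with evaluators is the corresponding public key.
verification_datum = private_key.public_key()

def evaluate_secure_proof(public_key, message: bytes, proof: bytes) -> bool:
    """Evaluate a digital-signature style secure proof against its verification datum."""
    try:
        public_key.verify(proof, message)
        return True
    except InvalidSignature:
        return False

print(evaluate_secure_proof(verification_datum, assertion, signature))             # True
print(evaluate_secure_proof(verification_datum, b"tampered assertion", signature)) # False
```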
In an embodiment, identification of cryptographic evaluators using heterogenous methods decreases the likelihood of an exploit successfully compromising all evaluators, as such an exploit would be required to take advantage of a potentially wide range of different vulnerabilities. Furthermore, in an embodiment selection device104may perform a time-of-evaluation selection of identification protocols, for instance by selecting from a stored menu of protocols using a random number generator or pseudorandom number generator; this may further decrease the probability of a successful exploit. At step310, and with continued reference toFIG.3, selection device determines a confidence level of the at least a first cryptographic evaluator. At least a confidence level may include a single confidence level assigned to a single cryptographic evaluator, a plurality of confidence levels assigned to a plurality of cryptographic evaluators, an aggregate confidence level of a plurality of cryptographic evaluators, or any other single or plural confidence level as described herein. Assigning a confidence level may include evaluating at least a digitally signed assertion signed by a cryptographic evaluator of the at least a first cryptographic evaluator, and assigning a confidence level to the cryptographic evaluator as a function of the evaluation of the at least a digitally signed assertion. At least a digitally signed assertion may be identified as signed by at least a first cryptographic evaluator using any identification process or protocol as described above. In an embodiment, at least a digitally signed assertion may be incorporated in a temporally sequential listing of digitally signed assertions. For instance, where temporally sequential listing is a blockchain or similar data structure, each assertion may be included in the blockchain. At least a second digitally signed assertion may include a plurality of digitally signed assertions. For instance, at least a first cryptographic evaluator may record a series of digitally signed assertions in temporally sequential listing; each transaction of the series of transactions may be authenticated by any process suitable for authenticating temporally sequential listing, including any process described herein for authentication of temporally sequential listing. As a further non-limiting example, at least a first cryptographic evaluator may enter an initial digitally signed assertion attesting to one or more elements of identification and/or authentication, including without limitation attestation of manufacturing date of at least a first cryptographic evaluator and/or secure computing module116, identities, serial numbers, versions, or make of hardware components of at least a first cryptographic evaluator and/or secure computing module116, or the like. Transactions performed by at least a cryptographic evaluator may be scored according to authenticity; for instance, trusted status may be conferred on at least a cryptographic evaluator only if a certain number of authenticated transactions have been performed by at least a cryptographic evaluator, a certain amount of value has been conveyed in authenticated transactions by at least a node, a certain proportion (which may be 100%) of transactions performed by at least a cryptographic evaluator have been successfully authenticated, or any other suitable benchmarking and/or scoring process or combination thereof. 
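As a non-limiting sketch of the benchmarking idea above (scoring an evaluator from its record of authenticated transactions), the following Python fragment assigns a confidence level as the proportion of authenticated transactions and confers trusted status only when both a count threshold and a ratio threshold are met; the threshold values are illustrative assumptions.

```python
def evaluate_history(transactions: list, min_count: int = 10, min_ratio: float = 0.95) -> dict:
    """Assign a confidence level and trusted status from an evaluator's assertion history."""
    if not transactions:
        return {"confidence": 0.0, "trusted": False}
    authenticated = sum(1 for t in transactions if t.get("authenticated"))
    ratio = authenticated / len(transactions)
    trusted = authenticated >= min_count and ratio >= min_ratio
    return {"confidence": ratio, "trusted": trusted}

history = [{"authenticated": True}] * 12 + [{"authenticated": False}]
print(evaluate_history(history))   # high ratio, enough authenticated transactions
```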
At least a digitally signed assertion may include assertions that were recorded in rejected instances of a temporally sequential listing204, such as rejected forks; in an embodiment, confidence level may be reduced as a function of a number of rejected forks including assertions signed by at least a cryptographic evaluator, for instance. Still referring toFIG.3, assigning the at least a confidence level may include receiving a consensus evaluation of the at least a confidence level from a network of cryptographic evaluators. for instance, all cryptographic evaluators currently connected to network may determine a confidence level concerning a particular cryptographic evaluator. This determination may be performed, for instance, by authenticating one or more current or past instances of a temporally sequential listing204and/or one or more sub-listings208thereof. Determination may include identification of one or more rejected instances of temporally sequential listing204. Each cryptographic evaluator of plurality of cryptographic evaluators may provide a confidence level for the cryptographic evaluator to be evaluated. Selection device104and/or another processor communicatively coupled to network may calculate an aggregate confidence level based on confidence levels submitted by plurality of cryptographic evaluators; aggregation may be performed according to any method for aggregation of confidence levels described above. In an embodiment, aggregation may be weighted according to a previously determined confidence level of each cryptographic evaluator of plurality of cryptographic evaluators performing consensus determination of confidence level of cryptographic evaluator to be evaluated. This may include, e.g., ignoring confidence level submissions from evaluators having confidence levels below a certain threshold; alternatively or additionally, selection device104may request confidence level determinations by a plurality of evaluators previously determined to have a confidence level above a certain threshold level. Each cryptographic evaluator and/or other processor participating in consensus determination of confidence level may perform any action described herein for determining a confidence level, or any combination of such actions. With continued reference toFIG.3, assigning the at least a confidence level may include evaluating a digitally signed assertion assigning a recorded confidence level to a cryptographic evaluator of the at least a first cryptographic evaluator, and assigning the confidence level as a function of the recorded confidence level. Digitally signed assertion may be any digitally signed assertion as described herein. Digitally signed assertion may be included in any temporally sequential listing as described herein; temporally sequential listing may include a temporally sequential listing relating identifiers of cryptographic evaluators to confidence levels, where identifiers may be any data usable as identifiers as described herein. Assignment of confidence level may be performed as a function of identifier; that is, identifier may be linked to an identity of a cryptographic evaluator, which may be used for assignment of confidence level as described in this disclosure. 
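The following Python fragment is an illustrative sketch of the consensus aggregation just described: peer-submitted confidence levels are weighted by each submitter's own previously determined confidence, and submissions from evaluators below a threshold are ignored. The threshold and weighting scheme are assumptions; averaging, voting, or any other aggregation described herein could be used instead.

```python
def consensus_confidence(submissions: list, weight_threshold: float = 0.5) -> float:
    """Aggregate peer-submitted confidence levels for one evaluator.

    Each submission is (submitter_confidence, submitted_level); submitters whose own
    confidence falls below the threshold are ignored, and the rest are weighted by it.
    """
    usable = [(w, level) for w, level in submissions if w >= weight_threshold]
    if not usable:
        return 0.0
    total_weight = sum(w for w, _ in usable)
    return sum(w * level for w, level in usable) / total_weight

submissions = [(0.9, 0.8), (0.7, 0.9), (0.3, 0.1)]   # the last submitter is ignored
print(round(consensus_confidence(submissions), 3))
```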
Selection device104may receive an instance of temporally sequential listing; receiving may include receiving an entire copy of the instance, receiving a sub-listing, receiving a link to temporally sequential listing, or a portion thereof, stored remotely, receiving digitally signed assertion along with an indication of temporally sequential listing containing digitally signed assertion, or the like. As a non-limiting example, one or more processors, a consensus process, selection device104, and/or a network of cryptographic evaluators having a confidence level in excess of a threshold, may have previously evaluated a confidence level in a certain cryptographic evaluator; in an embodiment, such a confidence level may itself be recorded in an assertion listed in temporally sequential listing204. A plurality of such assertions, corresponding to a plurality of cryptographic evaluators, may be listed; as such, selection device104may determine confidence level in one or more cryptographic evaluators solely by retrieving confidence levels so recorded. Alternatively or additionally, selection device104may combine such confidence levels with confidence level determinations made by other means. Combination may be performed, e.g., by retrieving such confidence levels from temporally sequential listing204for at least a first cryptographic evaluator, and calculating a confidence level for at least a second cryptographic evaluator by any other process described above. As a further example, selection device104may retrieve a confidence level recorded in temporally sequential listing204for a given cryptographic evaluator, determine a confidence level for the same cryptographic evaluator, and then aggregate the two confidence levels according to any process as described above for aggregation of confidence levels. Selection device104may determine confidence level using an algorithm assessing a number of connections from one device to another, such as without limitation a number of references to first cryptographic evaluator by other cryptographic evaluators in hypertext markup language (HTML) files or the like. Still referring toFIG.3, selection device104may further weight or modify confidence level according to one or more additional factors. For instance, confidence level may be weighted according to how recently cryptographic evaluator signed a digitally signed assertion in an authenticated instance of temporally sequential listing204, where a more recently authenticated assertion may result in a higher confidence level or higher weight assigned to the confidence level, and a less recently authenticated assertion may result in a lower confidence level or a lower weight assigned to that confidence level. As another example a cryptographic evaluator that has recently “sold off” a large amount of value and/or has an assertion in a sub-listing208currently awaiting authentication may have its confidence level decreased. As a further example, an evaluator with little or no history, or an anonymous evaluator, may be assigned some minimal or “neutral” confidence level indicating treatment as a “consensus” evaluator rather than a “trusted” evaluator. An evaluator associated with a previous fraudulent transaction may be assigned a confidence level of zero or may be excluded from evaluation processes. With continued reference toFIG.3, assigning the at least a confidence level may include performing a trusted time evaluation of at least an action performed by the at least a first cryptographic evaluator. 
As a non-limiting example, secure proof may be generated using a secure timestamp. Generating the secure timestamp may include digitally signing the secure timestamp using any digital signature protocol as described above. In one embodiment authenticity of received data signals is established by utilizing a chain of attestation via one or more attestation schemes (in nonlimiting example, via decentralized anonymous attestation (DAA)) to verify that the secure computing module116is an authentic secure computing module116that has the property of attested time. Attested time may be implemented, without limitation, as described in Provisional Application No. 62/758,367, filed on Nov. 9, 2018, and entitled “METHOD AND SYSTEMS FOR A DISTRIBUTED CERTIFICATE AUTHORITY,” the entirety of which is incorporated herein by reference. With continued reference toFIG.3, secure timestamp may be recorded the current time in a hash chain. In an embodiment, a hash chain includes a series of hashes, each produced from a message containing a current time stamp (i.e., current at the moment the hash is created) and the previously created hash, which may be combined with one or more additional data; additional data may include a random number, which may be generated for instance using a secure computing module116. Additional data may include one or more additional data, including sensor data or a hash of data, that are received or generated by temporal attester104. Additional data may be hashed into a Merkle tree or other hash tree, such that a root of the hash tree may be incorporated in an entry in hash chain. It may be computationally infeasible to reverse hash any one entry, particularly in the amount of time during which its currency is important; it may be astronomically difficult to reverse hash the entire chain, rendering illegitimate or fraudulent timestamps referring to the hash chain all but impossible. A purported entry may be evaluated by hashing its corresponding message. In an embodiment, the trusted timestamping procedure utilized is substantially similar to the RFC 3161 standard. In this scenario, the received data signals are locally processed at the listener device by a one way function, e.g. a hash function, and this hashed output data is sent to a timestamping authority (TSA). The use of secure timestamps as described herein may enable systems and methods as described herein to instantiate attested time. Attested time is the property that a device incorporating a local reference clock may hash data, e.g. sensor data, along with the local timestamp of the device. Attested time may additionally incorporate attested identity, attested device architecture and other pieces of information identifying properties of the attesting device. In one embodiment, secure timestamp is generated by a trusted third party (TTP) that appends a timestamp to the hashed output data, applies the TSA private key to sign the hashed output data concatenated to the timestamp, and returns this signed, a.k.a. trusted timestamped data back to the listener device. Alternatively or additionally, one or more additional participants, such as other cryptographic evaluators may evaluate confidence levels in at least a first cryptographic evaluator or other party generating secure timestamp and/or perform threshold cryptography with a plurality of such parties, each of which may have performed an embodiment of method to produce a secure timestamp. 
In an embodiment, cryptographic evaluators or other parties authenticating first digitally signed assertion200may perform authentication at least in part by evaluating timeliness of entry and/or generation of first digitally signed assertion200as assessed against secure timestamp. In an embodiment, secure proof is generated using an attested computing protocol; this may be performed, as a non-limiting example, using any protocol for attested computing as described above. Still referring toFIG.3, selection device104may determine a confidence level in an identity of the at least a first cryptographic evaluator; assigning the at least a confidence level may include assigning the at least a confidence level as a function of the at least a confidence level in the identity. Confidence level in identity may be computed, for instance, using one or more statistical measures of reliability of the identification method used; for instance, a user may enter an instruction on selection device104providing statistics indicating success rates of various identification methods. Statistics may be collected based, as a non-limiting example, on discoveries of vulnerabilities in particular identification protocols and/or particular instances of secure computation module. User may alternatively make a subjective assessment, based on expert knowledge, for instance, of a confidence level to assign based on such findings, and enter that confidence level. Statistics and/or user-entered confidence level in identification method may be used as multipliers or otherwise combined with confidence-level calculations as described in further detail below, or may otherwise be used to assign a confidence level as a function of the confidence level in the identity. Selection device104may also determine confidence level in identity as a function of, for instance, one or more algorithms collecting statistics concerning degree of accuracy in past iterations of method400of a particular process for identifying at least a cryptographic evaluator. At step315, and still referring toFIG.3, selection device104selects a distributed framework from plurality of cryptographic evaluators as a function of the at least a confidence level. A distributed framework, as used herein, is a network containing one or more computing devices amongst which computational and/or data storage tasks are distributed, including without limitation computational tasks and/or data storage tasks as disclosed in further detail herein. Distributed framework may enable a device calling upon distributed framework, including without limitation selection device104, to treat one or more network-connected devices assembled in the distributed framework as a single device or pool that performs computational and/or storage tasks. Distributed framework may use any suitable protocol for such task distribution, including without limitation any protocol and/or protocols as described herein, the Message Passing Interface (MPI) protocol, the HADOOP protocol promulgated by the Apache Software Foundation of Wakefield, MA, and/or the SPARK protocol promulgated by the Apache Software Foundation. Selecting distributed framework may include selecting a distributed framework including at least a first cryptographic evaluator. 
Distributed framework may include solely the at least a first cryptographic evaluator; for instance, selection device104may select one or more cryptographic evaluators having confidence levels recorded in temporally sequential listing, and select the one or more cryptographic evaluators as the distributed framework. Alternatively or additionally, one or more cryptographic evaluators and/or other devices may be selected for distributed framework by at least a first cryptographic evaluator and/or using first cryptographic evaluator as a reference point. Still referring toFIG.3, selections of devices for distributed framework may be determined according to proximity according one or more measures of distance or time between each cryptographic evaluator and selection device104, between at least a first cryptographic evaluator and each selected cryptographic evaluator, and/or between at least a first cryptographic evaluator and selection device104. For instance, and without limitation, where the plurality of cryptographic evaluators is connected to the selection device via a network, selecting the distributed framework further comprises selecting at least a proximate cryptographic evaluator of the plurality of cryptographic evaluators in a graph representing the network; a proximate at least a cryptographic evaluator on a graph, may include, for instance, a at least a cryptographic evaluator within a certain number of steps through the graph from the once device to another. Steps may also be weighted according to, e.g., estimates of physical distance or length of wire between cryptographic evaluators112connected by steps, as measured using network latency analysis and/or other processes for instance as described below. As another non-limiting example, selecting the distributed framework may include selecting at least a geographically proximate cryptographic evaluator of the plurality of cryptographic evaluators. Geographical location of selection device104, at least a first cryptographic evaluator and/or at least a device selected as part of distributed framework may be performed by analysis of IP addresses, which may be compared to stored information mapping such addresses to particular geographical locations or the like; geographical location of any devices as described above may alternatively or additionally be determined using navigational facilities, such as the global positioning system (GPS) or other protocols used to determine the location of a device. Distance between devices may be computed using this information and compared to a threshold value; a device may be selected only if distance from selection device104and/or at least a first cryptographic evaluator is below the threshold value, which may include, for instance, a radius of a certain number of miles or kilometers around the determined location of the selection device104, at least a first cryptographic evaluator, and/or another device. With continued reference toFIG.3, selecting the distributed framework may include selecting at least a temporally proximate cryptographic evaluator; this may be at least a cryptographic evaluator that under network latency analysis, time for response to a “ping” signal, or the like presents a likelihood of a more rapid response. 
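As a hedged illustration of the geographic-proximity selection just described (not the claimed method), the following Python fragment computes great-circle distances from the selection device's location and keeps only evaluators within a radius; the coordinates are hypothetical placeholders for values obtained from IP-address mapping, GPS, or similar facilities.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a: tuple, b: tuple) -> float:
    """Great-circle distance in kilometres between two (latitude, longitude) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def select_proximate(selection_location: tuple, evaluators: dict, radius_km: float) -> list:
    """Keep only evaluators whose reported location falls within the radius."""
    return [name for name, location in evaluators.items()
            if haversine_km(selection_location, location) <= radius_km]

evaluators = {
    "evaluator-1": (37.77, -122.42),   # hypothetical coordinates, e.g. from IP geolocation
    "evaluator-2": (40.71, -74.01),
    "evaluator-3": (37.34, -121.89),
}
print(select_proximate((37.78, -122.40), evaluators, radius_km=100))
```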
Alternatively or additionally, past response times and/or past times in which generation of appraisals as described in further detail below was performed may be recorded in memory108and/or in temporally sequential listing204; selection of at least a cryptographic evaluator may be performed based on past performance time. Selection of distribute framework may include selection of at least a device to minimize total communication latency, where total communication latency is total expected time for each cryptographic evaluator, or other device, to respond with an appraisal as described in further detail below; such selection may involve determining, for instance, a selection of plurality of cryptographic evaluators112presenting an optimal or near-optimal network traversal time, which may be computed using node-count distances, geographical distances, network communication latency times, and/or expected performance times by particular cryptographic evaluators112. Such optimization may involve a near-optimal resolution of a “traveling salesman” problem, including without limitation a “greedy algorithm” in which each selection step involves choosing a locally optimal cryptographic evaluator112; for instance, selection device104may choose a first “nearest” cryptographic evaluator112as measured by any of the above metrics, including any measure of actual or path distance and/or any measure of communication or computation latency. Continuing the example, selection device104may subsequently select a second cryptographic evaluator according to a locally optimal next selection under the above-described metric or metrics, selecting from locally optimal steps that either at least a first cryptographic evaluator, selection device104, either, or both may perform. This may be repeated until a desired number of cryptographic evaluators112is selected; “desired” number may be a raw threshold number, an aggregate confidence level as described in further detail below, or the solution to another optimization problem such as optimization of confidence versus speed as described in further detail below. Alternatively or additionally, optimal selection may make use of data concerning previously performed transactions; use of such data may include selection of an acceptably rapid previous transaction, or use of a plurality of previous selections to produce an algorithmic or mathematical solution to optimal selection using, e.g. a polynomial regression process, a neural-net machine learning process, or the like. Persons skilled in the art will be aware of various machine learning, deep learning, or other adaptive techniques that may be used to approach such an optimization problem, upon reviewing the entirety of this disclosure. Still referring toFIG.3, selection may include selection of only highly trusted cryptographic evaluators, for instance as determined by determination of confidence levels as described below, such that the fewest cryptographic evaluators are required for a given security requirement. These methods may be used to optimize network performance of authentication processes. In another example, additional data as described above that are incorporated into blocks or otherwise made available to nodes of the network may be utilized to optimally select which cryptographic evaluators are selected. 
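The following Python fragment sketches the "greedy algorithm" flavor of selection described above, repeatedly choosing the locally optimal (here, lowest-latency) evaluator until a desired count is reached; measured latencies and the stopping rule are illustrative assumptions, and any of the metrics or thresholds described above could be substituted.

```python
def greedy_select(candidates: dict, desired_count: int) -> list:
    """Greedy 'nearest next' selection: repeatedly pick the unselected evaluator
    with the lowest measured latency until the desired count is reached."""
    remaining = dict(candidates)          # name -> latency in milliseconds
    selected = []
    while remaining and len(selected) < desired_count:
        name = min(remaining, key=remaining.get)
        selected.append(name)
        del remaining[name]
    return selected

latencies_ms = {"evaluator-1": 12.5, "evaluator-2": 48.0, "evaluator-3": 7.3, "evaluator-4": 30.1}
print(greedy_select(latencies_ms, desired_count=2))   # ['evaluator-3', 'evaluator-1']
```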
In another embodiment, and continuing to refer toFIG.3, selecting distributed framework may include establishing an aggregate confidence-level threshold determining confidence levels of one or more cryptographic evaluators of the plurality of cryptographic evaluators, and/or of one or more other devices that may be incorporated in distributed framework, aggregating the confidence levels of the one or more cryptographic evaluators to generate an aggregate confidence level, determining that the aggregate confidence level satisfies the aggregate confidence-level threshold, and selecting the one or more cryptographic evaluators. Evaluation of confidence level of each of the plurality of cryptographic evaluators may be performed as described in further detail herein. Establishment of an aggregate confidence level in a plurality of cryptographic evaluators112or other devices having a plurality of associated confidence levels may involve, e.g., adding together confidence levels; alternatively, aggregate confidence level may be computed by viewing each confidence level as a probability, calculating an aggregate probability by averaging or other statistical combination processes, and selecting cryptographic evaluators112or other devices so as to result in an aggregate probability representing a desired confidence level. Alternatively or additionally, a machine-learning algorithm as described above may analyze past transactions to determine an optimal mathematical operation for calculating an aggregate confidence level. As noted below, a desired confidence level to be used as a threshold may be computed in turn by reference to a user input indicating a desired confidence level, a minimal confidence level set by selection device104and/or network, for instance to ensure some degree of overall network integrity, a calculation based on a value of a transaction recorded in at least a digitally signed assertion116, or the like. Still referring toFIG.3, selecting the distributed framework may include generating a cost function of confidence level and communication latency and minimizing the cost function. In an embodiment, cost function may be selected to optimize one or more user and/or network goals. Goals to be optimized may include, without limitation, a desired degree of latency (defined herein as a speed with which at least a computational or storage task to be performed by distributed framework occurs), security (which may be defined, e.g., as a degree of confidence in the accuracy of the task, a degree of confidence in the data integrity of the task, a degree of confidence in protection from data breeches and/or theft of information, and/or a degree of confidence in faithful performance of the computation by distributed framework), anonymity (defined as a degree of difficulty in obtaining information concerning a user of querying device and/or a person entering a transaction on temporally sequential listing204), and throughput (defined as an aggregate or average latency across users, cryptographic evaluators, and or other devices). There may be tradeoffs between the above-mentioned four goals. For instance, if user wishes to perform a task rapidly, reducing the number of nodes in at least a highly trusted at least a cryptographic evaluator may improve the speed with which authentication can take place, as may selection of proximate nodes as described above. 
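As a purely illustrative sketch of combining an aggregate confidence-level threshold with a cost function of confidence and latency (one of many possible statistical combinations mentioned above), the following Python fragment treats each confidence level as an independent probability, requires the aggregate to satisfy a threshold, and exhaustively searches a small pool for the subset minimizing an assumed linear cost; the weights, threshold, and brute-force search are assumptions made for clarity, not the claimed optimization.

```python
from itertools import combinations

def aggregate_confidence(levels: list) -> float:
    """Probability that at least one selected evaluator is honest, treating
    each confidence level as an independent probability."""
    failure = 1.0
    for level in levels:
        failure *= (1.0 - level)
    return 1.0 - failure

def cost(evaluators, latency_weight: float = 0.01) -> float:
    """Cost falls as aggregate confidence rises and grows with total latency."""
    conf = aggregate_confidence([e["confidence"] for e in evaluators])
    total_latency = sum(e["latency_ms"] for e in evaluators)
    return (1.0 - conf) + latency_weight * total_latency

pool = [
    {"name": "a", "confidence": 0.90, "latency_ms": 12.0},
    {"name": "b", "confidence": 0.70, "latency_ms": 5.0},
    {"name": "c", "confidence": 0.60, "latency_ms": 40.0},
]

# Search subsets meeting the aggregate confidence-level threshold, then minimize cost.
best = min(
    (subset for r in range(1, len(pool) + 1) for subset in combinations(pool, r)
     if aggregate_confidence([e["confidence"] for e in subset]) >= 0.95),
    key=cost,
)
print([e["name"] for e in best])   # ['a', 'b'] under these illustrative numbers
```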
Anonymity, however, may favor selection of more widely scattered cryptographic evaluators or other devices to make it more difficult to deduce where selection device104is located geographically or within network; additional measures to ensure anonymity, such as use of an anonymizing protocol such as the Tor protocol promulgated by The Tor Project, Inc., which functions by directing all internet traffic through a network containing a plurality of relays to conceal a user's location and usage from network surveillance and/or traffic analysis attempts, using "onion routing" processes, or the like may further increase latency and slow down authentication. Similarly, where greater security is a goal, selection of highly trusted devices may be maximized, and/or devices may be selected across a wider range of network locations and/or geographical locations to improve the likely independence of nodes, also slowing the process. Selection of greater numbers of nodes, with lesser network latency between them, may also enable greater performance or capacity in computational or storage tasks. Thus, a person or device who wants to perform a task very secretly may desire a very high degree of security and anonymity, and may accept a greater degree of latency in exchange. A user or device seeking to perform a task with a high degree of security, but without a need for rapid performance or storage capacity, may use a small number of highly trusted nodes. As another non-limiting example, a task may require fast, high-security processing, relying on a high degree of trust and low anonymity. As a further example, processes involving medical data may require high anonymity and high security, which may be emphasized above speed. In an embodiment, the ability of method300or variations thereof to modify these parameters for optimal results in different scenarios may be highly advantageous over existing methods. With continued reference toFIG.3, cost function may be dynamically set by a selected degree of optimization for one or more attributes. Determining degree of optimization may be performed via a user interface, which may be a graphical user interface (GUI), for instance by providing a user with one or more sliders representing desired degrees of security, transaction speeds, and/or levels of anonymity; sliders may be linked to absolute ranges of the attributes or may alternatively be used proportionally to represent relative importance to user of each attribute. Positions of one or more sliders may be reset according to stored mathematical relationships between different items; mathematical relationships may be determined by combining or producing machine-learning processes. A related or separate set of mathematical relationships may be used to determine how selection of at least a highly trusted at least a cryptographic evaluator affects each attribute. Protocol implemented in embodiments herein may support varying security and anonymity demands by the parties to the transactions. For instance, two parties wishing to exchange $5M over the network will demand commensurate security and require some reduction in anonymity to comply with federal laws, in exchange for slightly longer validation times. 
Conversely, a customer purchasing a coffee at Starbucks will demand relatively little security and may be fully anonymous; a potential malicious actor utilizing a great number of small transactions to hide a large total transaction from regulators may be thwarted by identifying anonymous certificates that are re-used above some threshold and flagged by the network. This may allow network to self-adapt to meet varying demands. With continued reference toFIG.3, mathematical relationships between attributes and each other and/or between attributes and selection of distributed framework may be derived by collection of statistics concerning past transactions. In some embodiments, statistical relationships are determined through one or more machine learning processes; for instance, data describing the speed, authenticity, and anonymity of a plurality of past transactions may be subjected to regression analysis, such as linear or polynomial regression, to determine one or more equations relating one parameter of such transactions to one or more other such parameters. Similarly, a neural net may be provided with such a plurality of past transactions. Machine-learning processes may be supervised and/or unsupervised; for instance, attributes to compare may be preselected to ensure that machine-learning processes result in relationships between desired attributes and transaction parameters. Mathematical relationships may demonstrate, e.g., that a certain number of nodes in at least a highly trusted node results in a 95% degree of confidence, that a second, higher number of nodes results in a 98% degree of confidence, and the like. As a further example, mathematical relationships may associate a level of anonymity, as measured in average proportion information content concerning user and/or selection device104obtainable from a transaction, information entropy of transaction, or the like, to average network or geographical distance between nodes of at least a highly trusted node, to selection of protocols to anonymize, and the like. Relationships between, the above parameters and latency may also be represented. Direct relationships between attributes to be optimized may be determined by machine learning processes; alternatively or additionally, such relationships may be determined using relationships of each attribute to parameters of selected device. In an embodiment, and still referring toFIG.3, selection may include assigning an authorization token granting an access right to at least a first cryptographic evaluator. An “authorization token” as used in this disclosure is a token granting an access right, signed by a device generating authorization token, such as without limitation selection device104. Authoriation token may include a temporal attribute. To facilitate anonymity, in an exemplary embodiment of authorization token in which it is desired to maintain anonymity of the remote device while using at least a authorization token, the at least a authorization token may contain at least one of the following attributes: a secure timestamp indicating the time that the token was created, a monotonic counter value or other datum unique to the authorization token for this particular cryptographic evaluator, and/or a session key conferring access to the network at the time of token creation. Additionally or separately, at least an authorization token may include an expiration period, e.g. 
a fixed time limit relative to the verifier's local time the token was created or issued, and may include at least a trust level based upon the properties of the cryptographic evaluator or other device attested in the authorization process, as described herein. It may be desirous to separately or additionally provide at least a session key enabling cryptographic evaluator to encrypt and/or decrypt messages to at least an additional cryptographic evaluator, or at least a group of cryptographic evaluators, based on properties of commonality therebetween. In non-limiting example, session key may be a symmetric key conveyed via secure channel from the at least a verifier, and/or an asymmetric key, multisignature, threshold signature or key system resulting from multi-signature or threshold signature as described above, or other key system or datum associated with at least a verifier during at least a time epoch. In an embodiment, a temporal attribute associated with an authorization token may be determined and/or generated based on confidence level; for instance, a first cryptographic evaluator that has been assigned a first confidence level may be issued a first authorization token by selection device104having a first expiration period, and second cryptographic evaluator that has been assigned a second confidence level that is less than, or indicative of a lower degree of confidence than, first confidence level may be issued a second authorization token having a second expiration period of lesser duration than the first expiration period. In an embodiment, selection device104and/or any other device generating authorization tokens may re-evaluate a length of an expiration period, upon expiration of an authorization token associated with a cryptographic evaluator; for instance, and without limitation, selection device104and/or other device generating authorization tokens may perform any step described above for evaluation of confidence levels, including without limitation generating a new or updated confidence level for the cryptographic evaluator and/or making any determination regarding the cryptographic evaluator described above as usable for determination and/or assigning of a confidence level. Where a newly determined confidence level is higher or indicative of greater confidence, and/or determination results in a conclusion that would, if used in determinations of confidence level as described above, cause and/or tend toward generation of a higher confidence level, a subsequently and/or concurrently generated authorization token may have a new expiration period of longer duration, and/or may not expire at all; where a newly determined confidence level is lower or indicative of lesser confidence, and/or determination results in a conclusion that would, if used in determinations of confidence level as described above, cause and/or tend toward generation of a lower confidence level, a subsequently and/or concurrently generated authorization token may have a new expiration period of shorter duration, and/or may not be generated at all if confidence level and/or result of determination falls below a threshold as described above. Authorization tokens, temporal attributes, and/or attested time may be implemented according to any embodiments described in Provisional Application No. 62/758,367, filed on Nov. 9, 2018, and entitled “METHOD AND SYSTEMS FOR A DISTRIBUTED CERTIFICATE AUTHORITY,” the entirety of which is incorporated herein by reference. 
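The following Python fragment is a non-limiting sketch of an authorization token whose expiration period scales with the assigned confidence level, signed by the issuing device; the HMAC signing, field names, base lifetime, and linear scaling are assumptions made only to illustrate the relationship between confidence level and token duration, and any digital signature protocol described above could be used instead.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"selection-device-secret"   # hypothetical key held by the issuing device

def issue_token(evaluator_id: str, confidence: float, base_lifetime_s: float = 3600.0) -> dict:
    """Issue a signed authorization token; a lower-confidence evaluator gets a shorter lifetime."""
    now = time.time()
    claims = {
        "evaluator": evaluator_id,
        "issued_at": now,
        "expires_at": now + base_lifetime_s * max(confidence, 0.0),
        "trust_level": confidence,
    }
    payload = json.dumps(claims, sort_keys=True).encode("utf-8")
    claims["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def token_valid(token: dict) -> bool:
    """Check the signature and the expiration period."""
    claims = {k: v for k, v in token.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"]) and time.time() < token["expires_at"]

high = issue_token("evaluator-1", confidence=0.95)
low = issue_token("evaluator-2", confidence=0.40)
print(token_valid(high), high["expires_at"] - low["expires_at"] > 0)   # True True
```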
With continued reference toFIG.3, at least a first cryptographic evaluator may assist in selection of one or more additional devices, which may be cryptographic evaluators of plurality of cryptographic evaluators, or may be other devices connected to network. For instance, and without limitation, selecting the distributed framework may include receiving an identification of at least a second cryptographic evaluator of the plurality of cryptographic evaluators from the at least a first cryptographic evaluator, and selecting the at least a second cryptographic evaluator as a function of the identification of the at least a second cryptographic evaluator. The identification of the at least a second cryptographic evaluator may include a digitally signed assertion generated by the at least a first cryptographic evaluator; digitally signed assertion may be created using any protocol for creation of a digitally signed assertion, including a digital signature signed with a private key possessed and/or generated by at least a first cryptographic evaluator, a secure proof, as defined above, generated according to any protocol or combination of protocols as described above by first cryptographic evaluator, or the like. Identification of at least a second cryptographic evaluator and/or other device may include verification information that may be combined with a secure proof issued by second cryptographic evaluator to verify or authenticate second cryptographic evaluator, including without limitation an address as described above, a public key as described above, a verification associated with a zero-knowledge proof, or the like. Selection device104may select one or more of at least a second cryptographic evaluator (or other device), including less than all cryptographic evaluators of at least a second cryptographic evaluator (or other device) according to any criteria as described above for selection of at least a first cryptographic evaluator and/or any device included in distributed framework, including without limitation by determining confidence levels in individual devices and/or aggregate confidence levels, comparison of confidence levels to threshold values, minimization of cost functions and/or optimization of network distance or latency, or any other procedure described above. At step320, and still viewingFIG.3, selection device104assigns a task to the distributed framework; task may include a computational task, a storage task, or any combination thereof. This may be performed in any suitable manner for division of tasks, including distributed storage using, for instance, distributed hash tables, temporally sequential listings, the Java-based Hadoop Distributed File System (HDFS) as promulgated by the Apache Software Foundation, a resilient distributed dataset, or the like. Assignment of task may be performed by partitioning or dividing data and/or computational tasks by a “master” device amongst one or more “slave” devices; “master” device may be selection device104, or a device having a high confidence level as described above, including without limitation first cryptographic evaluator. For instance, and without limitation, a task requiring processing of a large quantity of data, for instance sorting or searching within the data, may be divided among “slave” devices by partitioning the data into “chunks,” each of which is sent to one or more distinct devices; devices may then perform local portions of the overall computing task with regard to their respective partitions, followed by a recombination of the computing outputs to produce a final result. 
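As a non-limiting illustration of the "master"/"slave" task division described above, the following sketch partitions data into chunks, performs a local computation on each chunk, and recombines the partial outputs; the chunking policy, the sorting workload, and the merge step are assumptions chosen for illustration only.

import heapq

def partition(data, num_workers):
    """Split data into roughly equal chunks, one per 'slave' device."""
    chunk_size = (len(data) + num_workers - 1) // num_workers
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def worker_task(chunk):
    """Local portion of the overall task performed on one device's partition."""
    return sorted(chunk)

def master_recombine(partial_results):
    """Master device merges the locally sorted chunks into a final result."""
    return list(heapq.merge(*partial_results))

data = [42, 7, 19, 3, 88, 56, 1, 64, 30, 11]
chunks = partition(data, num_workers=3)
partials = [worker_task(chunk) for chunk in chunks]  # in practice, each chunk is sent to a distinct device
print(master_recombine(partials))  # [1, 3, 7, 11, 19, 30, 42, 56, 64, 88]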
Recombination of outputs may be performed by “master” device. Allocation of computational or data storage tasks may be performed to minimize network latency costs, which may be done using any calculations or processes to minimize latency, minimize network distance, and/or minimize geographical distance, as described above; in other words, “selection” may be performed a first time to select distributed framework, and (optionally) a second time for maximally efficient distribution of tasks. It is to be noted that any one or more of the aspects and embodiments described herein may be conveniently implemented using one or more machines (e.g., one or more computing devices that are utilized as a user computing device for an electronic document, one or more server devices, such as a document server, etc.) programmed according to the teachings of the present specification, as will be apparent to those of ordinary skill in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those of ordinary skill in the software art. Aspects and implementations discussed above employing software and/or software modules may also include appropriate hardware for assisting in the implementation of the machine executable instructions of the software and/or software module. Such software may be a computer program product that employs a machine-readable storage medium. A machine-readable storage medium may be any medium that is capable of storing and/or encoding a sequence of instructions for execution by a machine (e.g., a computing device) and that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a machine-readable storage medium include, but are not limited to, a magnetic disk, an optical disc (e.g., CD, CD-R, DVD, DVD-R, etc.), a magneto-optical disk, a read-only memory “ROM” device, a random access memory “RAM” device, a magnetic card, an optical card, a solid-state memory device, an EPROM, an EEPROM, and any combinations thereof. A machine-readable medium, as used herein, is intended to include a single medium as well as a collection of physically separate media, such as, for example, a collection of compact discs or one or more hard disk drives in combination with a computer memory. As used herein, a machine-readable storage medium does not include transitory forms of signal transmission. Such software may also include information (e.g., data) carried as a data signal on a data carrier, such as a carrier wave. For example, machine-executable information may be included as a data-carrying signal embodied in a data carrier in which the signal encodes a sequence of instruction, or portion thereof, for execution by a machine (e.g., a computing device) and any related information (e.g., data structures and data) that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a computing device include, but are not limited to, an electronic book reading device, a computer workstation, a terminal computer, a server computer, a handheld device (e.g., a tablet computer, a smartphone, etc.), a web appliance, a network router, a network switch, a network bridge, any machine capable of executing a sequence of instructions that specify an action to be taken by that machine, and any combinations thereof. In one example, a computing device may include and/or be included in a kiosk. 
FIG.4shows a diagrammatic representation of one embodiment of a computing device in the exemplary form of a computer system400within which a set of instructions for causing a control system to perform any one or more of the aspects and/or methodologies of the present disclosure may be executed. It is also contemplated that multiple computing devices may be utilized to implement a specially configured set of instructions for causing one or more of the devices to perform any one or more of the aspects and/or methodologies of the present disclosure. Computer system400includes a processor404and a memory408that communicate with each other, and with other components, via a bus412. Bus412may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. Memory408may include various components (e.g., machine-readable media) including, but not limited to, a random access memory component, a read only component, and any combinations thereof. In one example, a basic input/output system416(BIOS), including basic routines that help to transfer information between elements within computer system400, such as during start-up, may be stored in memory408. Memory408may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software)420embodying any one or more of the aspects and/or methodologies of the present disclosure. In another example, memory408may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof. Computer system400may also include a storage device424. Examples of a storage device (e.g., storage device424) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof. Storage device424may be connected to bus412by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof. In one example, storage device424(or one or more components thereof) may be removably interfaced with computer system400(e.g., via an external port connector (not shown)). Particularly, storage device424and an associated machine-readable medium428may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system400. In one example, software420may reside, completely or partially, within machine-readable medium428. In another example, software420may reside, completely or partially, within processor404. Computer system400may also include an input device432. In one example, a user of computer system400may enter commands and/or other information into computer system400via input device432. Examples of an input device432include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof. 
Input device432may be interfaced to bus412via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus412, and any combinations thereof. Input device432may include a touch screen interface that may be a part of or separate from display436, discussed further below. Input device432may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above. A user may also input commands and/or other information to computer system400via storage device424(e.g., a removable disk drive, a flash drive, etc.) and/or network interface device440. A network interface device, such as network interface device440, may be utilized for connecting computer system400to one or more of a variety of networks, such as network444, and one or more remote devices448connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network, such as network444, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software420, etc.) may be communicated to and/or from computer system400via network interface device440. Computer system400may further include a video display adapter452for communicating a displayable image to a display device, such as display device436. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof. Display adapter452and display device436may be utilized in combination with processor404to provide graphical representations of aspects of the present disclosure. In addition to a display device, computer system400may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus412via a peripheral interface456. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof. The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments, what has been described herein is merely illustrative of the application of the principles of the present invention. 
Additionally, although particular methods herein may be illustrated and/or described as being performed in a specific order, the ordering is highly variable within ordinary skill to achieve methods, systems, and software according to the present disclosure. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention. Exemplary embodiments have been disclosed above and illustrated in the accompanying drawings. It will be understood by those skilled in the art that various changes, omissions and additions may be made to that which is specifically disclosed herein without departing from the spirit and scope of the present invention.
170,101
11861401
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The terms or words used in the disclosure and the claims should not be construed as limited to their ordinary or lexical meanings. They should be construed as the meaning and concept in line with the technical idea of the disclosure based on the principle that the inventor can define the concept of terms or words in order to describe his/her own embodiments in the best possible way. Further, since the embodiment described herein and the configurations illustrated in the drawings are merely one embodiment in which the disclosure is realized and do not represent all the technical ideas of the disclosure, it should be understood that there may be various equivalents, variations, and applicable examples that can replace them at the time of filing this application. Although terms such as first, second, A, B, etc. used in the description and the claims may be used to describe various components, the components should not be limited by these terms. These terms are used only for the purpose of distinguishing one component from another. For example, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component, without departing from the scope of the disclosure. The term ‘and/or’ includes a combination of a plurality of related listed items or any item of the plurality of related listed items. The terms used in the description and the claims are merely used to describe particular embodiments and are not intended to limit the disclosure. Singular expressions include plural expressions unless the context explicitly indicates otherwise. In the application, terms such as “comprise,” “have,” “include”, “contain,” etc. should be understood as not precluding the possibility of existence or addition of features, numbers, steps, operations, components, parts, or combinations thereof described herein. Terms such as a “circuit” or “circuitry”, refers to a circuit in hardware but may also refer to a circuit in software. Unless otherwise defined, the phrases “A, B, or C,” “at least one of A, B, or C,” or “at least one of A, B, and C” may refer to only A, only B, only C, both A and B, both A and C, both B and C, all of A, B, and C, or any combination thereof. Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by those of ordinary skill in the art to which the disclosure pertains. Terms such as those defined in commonly used dictionaries should be construed as having a meaning consistent with the meaning in the context of the relevant art, and are not to be construed in an ideal or excessively formal sense unless explicitly defined in the disclosure. In addition, each configuration, procedure, process, method, or the like included in each embodiment of the disclosure may be shared to the extent that they are not technically contradictory to each other. Hereinafter, a neural processing device in accordance with some embodiments of the disclosure will be described with reference toFIGS.1to27. FIG.1is a block diagram for illustrating a neural processing system in accordance with some embodiments of the disclosure. Referring toFIG.1, a neural processing system NPS in accordance with some embodiments may include a first neural processing device1, a second neural processing device2, and an external interface3. 
The first neural processing device1may be a device that performs calculations using an artificial neural network. The first neural processing device1may be, for example, a device specialized in performing tasks of deep learning calculations. However, the embodiment is not limited thereto. The second neural processing device2may be a device having the same or similar configuration as the first neural processing device1. The first neural processing device1and the second neural processing device2may be connected to each other via the external interface3and share data and control signals. AlthoughFIG.1shows two neural processing devices, the neural processing system NPS in accordance with some embodiments is not limited thereto. That is, in a neural processing system NPS in accordance with some embodiments, three or more neural processing devices may be connected to each other via the external interface3. Also, conversely, a neural processing system NPS in accordance with some embodiments may include only one neural processing device. FIG.2is a block diagram for illustrating the neural processing device ofFIG.1. Referring toFIG.2, a first neural processing device1may include a neural core SoC10, a CPU20, an off-chip memory30, a first non-volatile memory interface40, a first volatile memory interface50, a second non-volatile memory interface60, and a second volatile memory interface70. The neural core SoC10may be a system on a chip device. The neural core SoC10can be an artificial intelligence calculation device and may be an accelerator. The neural core SoC10may be, for example, any one of a graphics processing unit (GPU), a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). However, the embodiment is not limited thereto. The neural core SoC10may exchange data with other external calculation devices via the external interface3. Further, the neural core SoC10may be connected to the non-volatile memory31and the volatile memory32via the first non-volatile memory interface40and the first volatile memory interface50, respectively. The CPU20may be a control device that controls the system of the first neural processing device1and executes program calculations. The CPU20is a general-purpose calculation device and may have low efficiency in performing simple parallel calculations that are frequently used in deep learning. Accordingly, there can be high efficiency by performing calculations in deep learning inference and training tasks by the neural core SoC10. The CPU20may exchange data with other external calculation units via the external interface3. Further, the CPU20may be connected to the non-volatile memory31and the volatile memory32via the second non-volatile memory interface60and the second volatile memory interface70, respectively. The off-chip memory30may be a memory disposed outside the chip of the neural core SoC10. The off-chip memory30may include a non-volatile memory31and a volatile memory32. The non-volatile memory31may be a memory that continuously retains stored information even if electric power is not supplied. 
The non-volatile memory31may include, for example, at least one of Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Electrically Alterable ROM (EAROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM) (e.g., NAND Flash memory, NOR Flash memory), Ultra-Violet Erasable Programmable Read-Only Memory (UVEPROM), Ferroelectric Random-Access Memory (FeRAM), Magnetoresistive Random-Access Memory (MRAM), Phase-change Random-Access Memory (PRAM), silicon-oxide-nitride-oxide-silicon (SONOS), Resistive Random-Access Memory (RRAM), Nanotube Random-Access Memory (NRAM), magnetic computer storage devices (e.g., hard disks, diskette drives, magnetic tapes), optical disc drives, or 3D XPoint memory. However, the embodiment is not limited thereto. The volatile memory32may be a memory that continuously requires electric power to retain stored information, unlike the non-volatile memory31. The volatile memory32may include, for example, at least one of Dynamic Random-Access Memory (DRAM), Static Random-Access Memory (SRAM), Synchronous Dynamic Random-Access Memory (SDRAM), or Double Data Rate SDRAM (DDR SDRAM). However, the embodiment is not limited thereto. Each of the first non-volatile memory interface40and the second non-volatile memory interface60may include, for example, at least one of Parallel Advanced Technology Attachment (PATA), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), or PCI Express (PCIe). However, the embodiment is not limited thereto. Each of the first volatile memory interface50and the second volatile memory interface70may be, for example, at least one of SDR (Single Data Rate), DDR (Double Data Rate), QDR (Quad Data Rate), or XDR (eXtreme Data Rate, Octal Data Rate). However, the embodiment is not limited thereto. FIG.3is a block diagram for illustrating the neural core SoC ofFIG.2. Referring toFIGS.2and3, the neural core SoC10may include at least one neural processor1000, a shared memory2000, direct memory access (DMA)3000, a non-volatile memory controller4000, a volatile memory controller5000, and a global interconnection6000. The neural processor1000may be a calculation device that directly performs calculation tasks. If there exist neural processors1000in plurality, calculation tasks may be assigned to respective neural processors1000. The respective neural processors1000may be connected to each other via the global interconnection6000. The shared memory2000may be a memory shared by multiple neural processors1000. The shared memory2000may store data of each neural processor1000. In addition, the shared memory2000may receive data from the off-chip memory30, store the data temporarily, and transfer the data to each neural processor1000. The shared memory2000may also receive data from the neural processor1000, store the data temporarily, and transfer the data to the off-chip memory30ofFIG.2. The shared memory2000may be required to be a relatively high-speed memory. Accordingly, the shared memory2000may include, for example, an SRAM. However, the embodiment is not limited thereto. That is, the shared memory2000may include a DRAM as well. The shared memory2000may be a memory corresponding to the SoC level, i.e., level 3 (L3). Accordingly, the shared memory2000may also be defined as an L3 shared memory. The DMA3000may directly control the movement of data without the need for the neural processor1000to control the input/output of data. 
Accordingly, the DMA3000may control the data movement between memories, thereby minimizing the number of interrupts of the neural processor1000. The DMA3000may control the data movement between the shared memory2000and the off-chip memory30. Via the authority of the DMA3000, the non-volatile memory controller4000and the volatile memory controller5000may perform the movement of data. The non-volatile memory controller4000may control the task of reading from or writing onto the non-volatile memory31. The non-volatile memory controller4000may control the non-volatile memory31via the first non-volatile memory interface40. In this case, the non-volatile memory controller4000may be referred to as a non-volatile memory controller circuit, but for the sake of convenience, the terms are unified as a non-volatile memory controller. In addition, the non-volatile memory controller4000may be implemented as a circuit or circuitry. The volatile memory controller5000may control the task of reading from or writing onto the volatile memory32. Further, the volatile memory controller5000may perform a refresh task of the volatile memory32. The volatile memory controller5000may control the volatile memory32via the first volatile memory interface50. Likewise, the volatile memory controller5000may be referred to as a volatile memory controller circuit, but for the sake of convenience, the terms are unified as a volatile memory controller. In addition, the volatile memory controller5000may be implemented as a circuit or circuitry. The global interconnection6000may connect the at least one neural processor1000, the shared memory2000, the DMA3000, the non-volatile memory controller4000, and the volatile memory controller5000to one another. In addition, the external interface3may also be connected to the global interconnection6000. The global interconnection6000may be a path through which data travels between the at least one neural processor1000, the shared memory2000, the DMA3000, the non-volatile memory controller4000, the volatile memory controller5000, and the external interface3. The global interconnection6000may transmit not only data but also control signals and may transmit a signal for synchronization. That is, in the neural processing device in accordance with some embodiments, each neural processor1000may directly transmit and receive a synchronization signal, instead of a separate control processor managing the synchronization signal. Accordingly, it is possible to preclude the latency of the synchronization signal generated by the control processor. In other words, if there exist neural processors1000in plurality, there may be dependencies of individual tasks in which the task of one neural processor1000needs to be finished before the next neural processor1000can start a new task. The end and start of these individual tasks can be checked and/or coordinated via a synchronization signal, and in conventional techniques, a control processor performed the reception of such a synchronization signal and an instruction to start a new task. However, as the number of neural processors1000increases and task dependencies are designed more complicatedly, the number of requests and instructions for this synchronization task can increase exponentially. Therefore, the latency resulting from each request and instruction can greatly reduce the efficiency of tasks. 
Accordingly, in the neural processing device in accordance with some embodiments, each neural processor1000, instead of the control processor, may directly transmit a synchronization signal to another neural processor1000according to the dependency of a task. In this case, several neural processors1000can perform the synchronization tasks in parallel as compared with the method managed by the control processor, thereby minimizing the latency due to synchronization. In addition, the control processor needs to perform the task scheduling of the neural processors1000according to a task dependency, and the overhead of such scheduling may increase significantly as the number of neural processors1000increases. Accordingly, in the neural processing device, in accordance with some embodiments, the scheduling task is also performed by the individual neural processors1000, and thus, the performance of the neural processing device can be improved without resulting in an additional scheduling burden. FIG.4is a structural diagram for illustrating the global interconnection ofFIG.3. Referring toFIG.4, the global interconnection6000may include a data channel6100, a control channel6200, and an L3 sync channel6300. The data channel6100may be a dedicated channel for transmitting data. Through the data channel6100, the at least one neural processor1000, the shared memory2000, the DMA3000, the non-volatile memory controller4000, the volatile memory controller5000, and the external interface3may exchange data with one another. The control channel6200may be a dedicated channel for transmitting control signals. Through the control channel6200, the at least one neural processor1000, the shared memory2000, the DMA3000, the non-volatile memory controller4000, the volatile memory controller5000, and the external interface3may exchange control signals with one another. The L3 sync channel6300may be a dedicated channel for transmitting synchronization signals. Through the L3 sync channel6300, the at least one neural processor1000, the shared memory2000, the DMA3000, the non-volatile memory controller4000, the volatile memory controller5000, and the external interface3may exchange synchronization signals with one another. The L3 sync channel6300may be set as a dedicated channel inside the global interconnection6000, and thus, may not overlap with other channels and transmit synchronization signals quickly. Accordingly, the neural processing device in accordance with some embodiments does not require new wiring work and may smoothly perform the synchronization task by using the global interconnection6000. FIG.5is a block diagram for illustrating the neural processor ofFIG.3. Referring toFIGS.3to5, a neural processor1000may include at least one neural core100, an L2 shared memory400, a local interconnection200, and an L2 sync path300. The at least one neural core100may share and perform the tasks of the neural processor1000. The number of neural cores100may be, for example, eight. However, various embodiments are not limited thereto.FIG.5illustrates that a plurality of neural cores are included in the neural processor1000, but various embodiments are not limited thereto. That is, the neural processor1000may be configured with only one neural core. The L2 shared memory400may be a memory shared by the neural cores100in the neural processor1000. The L2 shared memory400may store data of each neural core100. 
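As a non-limiting illustration of the direct, processor-to-processor synchronization signaling described above (in place of a central control processor), the following sketch uses a simple event as a stand-in for a synchronization signal carried on a dedicated sync channel or sync path; the two-stage dependency and the workloads are assumptions for illustration only.

import threading

done_stage_1 = threading.Event()  # stands in for a synchronization signal on a sync channel

def neural_processor_1(results):
    results["stage1"] = sum(range(1000))  # some upstream calculation task
    done_stage_1.set()                    # signal the dependent processor directly

def neural_processor_2(results):
    done_stage_1.wait()                   # start only once the dependency is satisfied
    results["stage2"] = results["stage1"] * 2

results = {}
t1 = threading.Thread(target=neural_processor_1, args=(results,))
t2 = threading.Thread(target=neural_processor_2, args=(results,))
t2.start(); t1.start()
t1.join(); t2.join()
print(results)  # {'stage1': 499500, 'stage2': 999000}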
In addition, the L2 shared memory400may receive data from the shared memory2000ofFIG.3, store them temporarily, and transfer them to each neural core100. On the contrary, the L2 shared memory400may also receive data from the neural core100, store them temporarily, and transfer them to the shared memory2000ofFIG.3. The L2 shared memory400may be a memory corresponding to the neural processor level, i.e., level 2 (L2). The L3 shared memory, i.e., the shared memory2000may be shared by the neural processors1000, and the L2 shared memory400may be shared by the neural cores100. The local interconnection200may connect the at least one neural core100and the L2 shared memory400to each other. The local interconnection200may be a path through which data travels between the at least one neural core100and the L2 shared memory400. The local interconnection200may be connected and transmit data to the global interconnection6000ofFIG.3. The L2 sync path300may connect the at least one neural core100and the L2 shared memory400to each other. The L2 sync path300may be a path through which synchronization signals of the at least one neural core100and the L2 shared memory400travel. The L2 sync path300may be formed physically separately from the local interconnection200. In the case of the local interconnection200, sufficient channels may not be formed therein, unlike the global interconnection6000. In such a case, the L2 sync path300may be formed separately so that the synchronization signal can be transmitted quickly and without any delay. The L2 sync path300may be used for synchronization performed at a level one step lower than that of the L3 sync channel6300of the global interconnection6000. FIG.6is a diagram for illustrating a hierarchical structure of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIG.6, the neural core SoC10may include at least one neural processor1000. Each neural processor1000may transmit data to each other via the global interconnection6000. The neural processors1000may each include at least one neural core100. The neural core100may be a processing unit optimized for deep learning calculation tasks. The neural core100may be a processing unit corresponding to one operation of a deep learning calculation task. In other words, a deep learning calculation task can be represented by a sequential or parallel combination of multiple operations. The neural cores100may each be a processing unit capable of processing one operation, and may be a minimum calculation unit that can be considered for scheduling from the viewpoint of a compiler. The neural processing device in accordance with the embodiment may configure the scales of the minimum calculation unit, considered from the viewpoint of compiler scheduling and the hardware processing unit to be the same, so that fast and efficient scheduling and calculation tasks can be performed. That is, if the processing units into which hardware can be divided are too large compared to calculation tasks, inefficiency of the calculation tasks may occur in driving the processing units. Conversely, it is not appropriate to schedule a processing unit that is a unit smaller than an operation, which is the minimum scheduling unit of the compiler, every time since a scheduling inefficiency may occur and hardware design costs may increase. 
Therefore, by adjusting the scales of the scheduling unit of the compiler and the hardware processing unit to be similar in the embodiment, it is possible to simultaneously satisfy the rapid scheduling and efficient execution of calculation tasks without wasting hardware resources. FIG.7is a block diagram for illustrating the neural core ofFIG.5in detail. Referring toFIG.7, the neural core100may include a load/store unit (LSU)110, an L0 memory120, a weight buffer130, an activation LSU140, an activation buffer150, and a processing unit160. The LSU110may receive at least one of data, a control signal, or a synchronization signal from the outside via the local interconnection200and the L2 sync path300. The LSU110may transmit at least one of the data, the control signal, or the synchronization signal received to the L0 memory120. Similarly, the LSU110may transfer at least one of the data, the control signal, or the synchronization signal to the outside via the local interconnection200and the L2 sync path300. In this case, the LSU110may be referred to as an LSU circuit, but for the sake of convenience, the terms are unified as an LSU. In addition, the LSU110may be implemented as a circuit or circuitry. FIG.8is a block diagram for illustrating the LSU ofFIG.7. Referring toFIG.8, the LSU110may include a local memory load unit (LMLU)111a, a local memory store unit (LMSU)111b, a neural core load unit (NCLU)112a, a neural core store unit (NCSU)112b, a load buffer LB, a store buffer SB, a load (LD) engine113a, a store (ST) engine113b, and a translation lookaside buffer (TLB)114. The local memory load unit111a, the local memory store unit111b, the neural core load unit112a, the neural core store unit112b, the load engine113a, and the store engine113bmay be referred to respectively as a local memory load circuit, a local memory store circuit, a neural core load circuit, a neural core store circuit, a load engine circuit, and a store engine circuit. However, for the sake of convenience, the terms are respectively unified as a local memory load unit, a local memory store unit, a neural core load unit, a neural core store unit, a load engine, and a store engine. In addition, the local memory load unit111a, the local memory store unit111b, the neural core load unit112a, the neural core store unit112b, the load engine113a, and the store engine113bmay each be implemented as a circuit or circuitry. The local memory load unit111amay fetch a load instruction for the L0 memory120and issue the load instruction. When the local memory load unit111aprovides the issued load instruction to the load buffer LB, the load buffer LB may sequentially transmit memory access requests to the load engine113aaccording to the inputted order. Further, the local memory store unit111bmay fetch a store instruction for the L0 memory120and issue the store instruction. When the local memory store unit111bprovides the issued store instruction to the store buffer SB, the store buffer SB may sequentially transmit memory access requests to the store engine113baccording to the inputted order. The neural core load unit112amay fetch a load instruction for the neural core100and issue the load instruction. When the neural core load unit112aprovides the issued load instruction to the load buffer LB, the load buffer LB may sequentially transmit memory access requests to the load engine113aaccording to the inputted order. In addition, the neural core store unit112bmay fetch a store instruction for the neural core100and issue the store instruction. 
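As a non-limiting illustration of the load buffer LB described above, which forwards memory access requests to the load engine in the order the load instructions were issued, the following sketch models the buffer as a FIFO queue; the request format and the stand-in memory are assumptions.

from collections import deque

class LoadBuffer:
    def __init__(self):
        self.queue = deque()

    def push(self, address, size):
        """The load unit provides an issued load instruction to the buffer."""
        self.queue.append((address, size))

    def drain(self, load_engine):
        """Transmit memory access requests to the load engine in the inputted order."""
        results = []
        while self.queue:
            address, size = self.queue.popleft()
            results.append(load_engine(address, size))
        return results

memory = list(range(100))  # stand-in for the L0 memory
load_engine = lambda addr, size: memory[addr:addr + size]

lb = LoadBuffer()
lb.push(0, 4)   # requests are serviced strictly in the order they were pushed
lb.push(10, 2)
print(lb.drain(load_engine))  # [[0, 1, 2, 3], [10, 11]]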
When the neural core store unit112bprovides the issued store instruction to the store buffer SB, the store buffer SB may sequentially transmit memory access requests to the store engine113baccording to the inputted order. The load engine113amay receive the memory access request and retrieve data via the local interconnection200. At this time, the load engine113amay quickly find the data by using a translation table of a physical address and a virtual address that has been used recently in the translation lookaside buffer114. If the virtual address of the load engine113ais not in the translation lookaside buffer114, the address translation information may be found in another memory. The store engine113bmay receive the memory access request and retrieve data via the local interconnection200. At this time, the store engine113bmay quickly find the data by using a translation table of a physical address and a virtual address that has been used recently in the translation lookaside buffer114. If the virtual address of the store engine113bis not in the translation lookaside buffer114, the address translation information may be found in another memory. The load engine113aand the store engine113bmay send synchronization signals to the L2 sync path300. At this time, the synchronization signal may indicate that the task has been completed. Referring toFIG.7again, the L0 memory120is a memory located inside the neural core100, and may receive all input data required for the tasks by the neural core100from the outside and store them temporarily. In addition, the L0 memory120may temporarily store the output data calculated by the neural core100for transmission to the outside. The L0 memory120may serve as a cache memory of the neural core100. The L0 memory120may transmit an input activation Act_In to the activation buffer150and receive an output activation Act_Out via the activation LSU140. The L0 memory120may directly transmit and receive data to and from the processing unit160, in addition to the activation LSU140. In other words, the L0 memory120may exchange data with each of a processing element (PE) array163and a vector unit164. The L0 memory120may be a memory corresponding to the level of the neural core. In this case, the L0 memory120may be a private memory of the neural core that is not shared. The L0 memory120may transmit data such as activations or weights via a data path. The L0 memory120may exchange synchronization signals via an L1 sync path, which is a separate dedicated path. The L0 memory120may exchange synchronization signals with, for example, the LSU110, the weight buffer130, the activation LSU140, and the processing unit160via the L1 sync path. The weight buffer130may receive a weight from the L0 memory120. The weight buffer130may transfer the weight to the processing unit160. The weight buffer130may temporarily store the weight before transferring it. The input activation Act_In and the output activation Act_Out may refer to input values and output values of the layers of a neural network. In this case, if there are a plurality of layers in the neural network, the output value of the previous layer becomes the input value of the next layer, and thus, the output activation Act_Out of the previous layer may be utilized as the input activation Act_In of the next layer. The weight may refer to a parameter that is multiplied by the input activation Act_In inputted in each layer. 
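Returning to the translation lookaside buffer114described above, the following sketch shows the hit/miss behaviour in which a recently used virtual-to-physical translation is served from the TLB and a miss falls back to a translation table held in another memory; the capacity and the simple LRU eviction policy are assumptions, not the disclosed implementation.

from collections import OrderedDict

class Tlb:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()  # virtual page -> physical page, LRU-ordered

    def translate(self, virtual_page, page_table):
        if virtual_page in self.entries:          # TLB hit: fast path
            self.entries.move_to_end(virtual_page)
            return self.entries[virtual_page]
        physical_page = page_table[virtual_page]  # miss: look up in another memory
        self.entries[virtual_page] = physical_page
        if len(self.entries) > self.capacity:     # evict the least recently used entry
            self.entries.popitem(last=False)
        return physical_page

page_table = {vp: vp + 0x100 for vp in range(32)}  # hypothetical full translation table
tlb = Tlb()
print(tlb.translate(3, page_table))  # miss, filled from the translation table
print(tlb.translate(3, page_table))  # hit, served from the TLB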
The weight is adjusted and confirmed in the deep learning training phase, and may be used to derive the output activation Act_Out via a fixed value in the inference phase. The activation LSU140may transfer the input activation Act_In from the L0 memory120to the activation buffer150, and the output activation Act_Out from the activation buffer150to the on-chip buffer. In other words, the activation LSU140may perform both a load task and a store task of the activation. The activation buffer150may provide the input activation Act_In to the processing unit160and receive the output activation Act_Out from the processing unit160. The activation buffer150may temporarily store the input activation Act_In and the output activation Act_Out. The activation buffer150may quickly provide the activation to the processing unit160, in particular, the PE array163, which has a large quantity of calculations, and may quickly receive the activation, thereby increasing the calculation speed of the neural core100. The processing unit160may be a module that performs calculations. The processing unit160may perform not only one-dimensional calculations but also two-dimensional matrix calculations, i.e., convolution operations. The processing unit160may receive an input activation Act_In, multiply it by a weight, and then add it to generate an output activation Act_Out. FIG.9is a block diagram for illustrating the processing unit ofFIG.7in detail. Referring toFIG.7andFIG.9, the processing unit160may include a PE array163, a vector unit164, a column register161, and a row register162. The PE array163may receive the input activation Act_In and the weight and perform multiplication on them. In this case, each of the input activation Act_In and the weight may be in the form of matrices and calculated via convolution. Through this, the PE array163may generate an output activation Act_Out. However, the embodiment is not limited thereto. The PE array163may generate any types of outputs other than the output activation Act_Out as well. The PE array163may include at least one processing element (PE)163_1. The processing elements163_1may be aligned with each other so that each of the processing elements163_1may perform multiplication on one input activation Act_In and one weight. The PE array163may sum values for each multiplication to generate a subtotal. This subtotal may be utilized as an output activation Act_Out. The PE array163performs two-dimensional matrix multiplication, and thus, may be referred to as a 2D matrix compute unit. The vector unit164may mainly perform one-dimensional calculations. The vector unit164, together with the PE array163, may perform deep learning calculations. Through this, the processing unit160may be specialized for necessary calculations. In other words, each of the at least one neural core100has calculation modules that perform a large amount of two-dimensional matrix multiplications and one-dimensional calculations, and thus, can efficiently perform deep learning tasks. The column register161may receive a first input I1. The column register161may receive the first input I1, and distribute them to each column of the processing elements163_1. The row register162may receive a second input I2. The row register162may receive the second input I2, and distribute them to each row of the processing elements163_1. The first input I1may be an input activation Act_In or a weight. The second input I2may be a value other than the first input I1between the input activation Act_In or the weight. 
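As a non-limiting illustration of the multiply-and-sum behaviour attributed to the PE array163above, the following sketch forms each output activation as a sum of per-element products of input activations and weights; the matrix sizes are assumptions, and a hardware PE array would perform these multiplications in parallel rather than in nested loops.

import numpy as np

def pe_array_matmul(input_activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    rows, inner = input_activations.shape
    inner_w, cols = weights.shape
    assert inner == inner_w
    output_activations = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            # Each (r, c) output is a sum of per-PE multiplications (a subtotal).
            output_activations[r, c] = sum(
                input_activations[r, k] * weights[k, c] for k in range(inner)
            )
    return output_activations

act_in = np.array([[1.0, 2.0], [3.0, 4.0]])
weight = np.array([[0.5, 0.0], [0.0, 0.5]])
act_out = pe_array_matmul(act_in, weight)
print(np.allclose(act_out, act_in @ weight))  # True: matches a direct matrix multiplication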
Alternatively, the first input I1and the second input I2may be values other than the input activation Act_In and the weight. FIG.10is a block diagram for illustrating the L0 memory ofFIG.7in detail. Referring toFIG.10, the L0 memory120may include a scheduler121and one or more local memory banks122. When data is stored in the L0 memory120, the scheduler121may receive data from the load engine113a. In this case, the local memory bank122may be allocated for the data in a round-robin manner. Accordingly, data may be stored in any one of the local memory banks122. In contrast to this, when data is loaded from the L0 memory120, the scheduler121may receive the data from the local memory bank122and transmit the data to the store engine113b. The store engine113bmay store the data in the outside through the local interconnection200. In this case, the scheduler121may be referred to as a scheduler circuit, but for the sake of convenience, the term is unified as a scheduler. In addition, the scheduler121may be implemented as a circuit or circuitry. FIG.11is a block diagram for illustrating the local memory bank ofFIG.10in detail. Referring toFIG.11, the local memory bank122may include a local memory bank controller122_1and a local memory bank cell array122_2. The local memory bank controller122_1may manage read and write operations via the addresses of data stored in the local memory bank122. In other words, the local memory bank controller122_1may manage the input/output of data as a whole. The local memory bank cell array122_2may be of a structure in which cells in which data is directly stored are arranged in rows and columns. The local memory bank cell array122_2may be controlled by the local memory bank controller122_1. FIG.12is a block diagram for illustrating in detail the structure of the neural processing device in accordance with some embodiments of the disclosure. Referring toFIG.12, a neural core101may have a CGRA structure, unlike a neural core100. The neural core101may include an instruction memory111_1, a CGRA L0 memory111_2, a PE array111_3, and a load/store unit (LSU)111_4. The PE array111_3may include a plurality of processing elements interconnected by a mesh style network. The mesh style network may be two-dimensional, three-dimensional, or higher-dimensional. In the CGRA, the plurality of processing elements may be reconfigurable or programmable. The interconnection between the plurality of processing elements may be reconfigurable or programmable. In some embodiments, the interconnection between the plurality of processing elements may be statically reconfigurable or programmable when the interconnection is fixed after the plurality of processing elements are configured or programmed. In some embodiments, the interconnection between the plurality of processing elements may be dynamically reconfigurable or programmable when the interconnection is reconfigurable or programmable even after the plurality of processing elements are configured or programmed. The instruction memory111_1may receive and store instructions. The instruction memory111_1may sequentially store instructions internally, and provide the stored instructions to the PE array111_3. In this case, the instructions may instruct the operation of a first type of a plurality of processing elements111_3aincluded in each PE array111_3. The CGRA L0 memory111_2may be located inside the neural core101, receive all input data required for tasks of the neural core101, and temporarily store the data. 
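Returning to the round-robin bank allocation performed by the scheduler121ofFIG.10described above, the following sketch places successive incoming data items in successive local memory banks, wrapping around; the bank count and data items are assumptions for illustration only.

class L0MemoryScheduler:
    def __init__(self, num_banks=4):
        self.banks = [[] for _ in range(num_banks)]
        self.next_bank = 0

    def store(self, data):
        """Allocate the incoming data to a local memory bank in round-robin order."""
        self.banks[self.next_bank].append(data)
        self.next_bank = (self.next_bank + 1) % len(self.banks)

scheduler = L0MemoryScheduler(num_banks=4)
for item in ["act0", "act1", "w0", "w1", "act2"]:
    scheduler.store(item)
print(scheduler.banks)  # [['act0', 'act2'], ['act1'], ['w0'], ['w1']]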
In addition, the CGRA L0 memory111_2may temporarily store output data calculated by the neural core101to transmit the data to the outside. The CGRA L0 memory111_2may serve as a cache memory of the neural core101. The CGRA L0 memory111_2may transmit and receive data to and from the PE array111_3. The CGRA L0 memory111_2may correspond to L0 (a level 0) lower than L1. In this case, the L0 memory may be a private unshared memory of the neural core101unlike the L2 shared memory400. The CGRA L0 memory111_2may transmit a program and data, such as activation or weight, to the PE array111_3. The PE array111_3may be a module that performs calculations. The PE array111_3may perform not only one-dimensional calculations but also two-dimensional or higher matrix/tensor calculations. The PE array111_3may include the first type of the plurality of processing elements111_3aand a second type of a plurality of processing elements111_3btherein. The first type of the plurality of processing elements111_3aand the second type of the plurality of processing elements111_3bmay be arranged in rows and columns. The first type of the plurality of processing elements111_3aand the second type of the plurality of processing elements111_3bmay be arranged in m columns. In addition, the first type of the plurality of processing elements111_3amay be arranged in n rows, and the second type of the plurality of processing elements111_3bmay be arranged in 1 rows. Accordingly, the first type of the plurality of processing elements111_3aand the second type of the plurality of processing element111_3bmay be arranged in (n+1) rows and m columns. The LSU111_4may receive at least one of data, a control signal, or a synchronization signal from the outside through the local interconnection200. The LSU111_4may transmit at least one of the received data, the received control signal, or the received synchronization signal to the CGRA L0 memory111_2. Similarly, the LSU111_4may transmit at least one of data, a control signal, or a synchronization signal to the outside through the local interconnection200. The LSU111_4may be referred to as an LSU circuit, but for the sake of convenience, the terms are unified as an LSU. In addition, the LSU111_4may be implemented as a circuit or circuitry. The neural core101may have a CGRA (Coarse Grained Reconfigurable Architecture) structure. Accordingly, in the neural core101, each of the first type of the plurality of processing elements111_3aand the second type of the plurality of processing elements111_3bof the PE array111_3may be connected to at least one of the CGRA L0 memory111_2, the instruction memory111_1, or the LSU111_4, respectively. In other words, the first type of the plurality of processing elements111_3aand the second type of the plurality of processing elements111_3bdo not have to be connected to all of the CGRA L0 memory111_2, the instruction memory111_1, and the LSU111_4, but may be connected to some thereof. Further, the first type of the plurality of processing elements111_3aand the second type of the plurality of processing elements111_3bmay be different types of processing elements from each other. Accordingly, out of the CGRA L0 memory111_2, the instruction memory111_1, and the LSU111_4, the elements connected to the first type of the plurality of processing elements111_3aand the elements connected to the second type of the plurality of processing elements111_3bmay be different from each other. 
The neural core101of the disclosure having a CGRA structure enables high-level parallel calculations, and since direct data exchange between the first type of the plurality of processing elements111_3aand the second type of the plurality of processing elements111_3bis possible, the power consumption may be low. In addition, by including two or more types of processing elements, optimization according to various calculation tasks may also be possible. For example, if the first type of the plurality of processing elements111_3aare processing elements that perform two-dimensional calculations, the second type of the plurality of processing elements111_3bmay be processing elements that perform one-dimensional calculations. However, the embodiment is not limited thereto. FIG.13is a block diagram for illustrating memory reconfiguration of a neural processing system in accordance with some embodiments of the disclosure. Referring toFIG.13, the neural core SoC10may include first to eighth processing units160ato160hand an on-chip memory OCM. AlthoughFIG.13illustrates eight processing units as an example, this is merely illustrative, and the number of processing units may vary as desired. The on-chip memory OCM may include first to eighth L0 memories120ato120hand a shared memory2000. The first to eighth L0 memories120ato120hmay be used as private memories for the first to eighth processing units160ato160h, respectively. In other words, the first to eighth processing units160ato160hand the first to eighth L0 memories120ato120hmay correspond to each other 1:1. The shared memory2000may include first to eighth memory units2100ato2100h. The first to eighth memory units2100ato2100hmay correspond to the first to eighth processing units160ato160hand the first to eighth L0 memories120ato120h, respectively. That is, the number of memory units may be eight, which is the same as the number of processing units and L0 memories. The shared memory2000may operate in one of two kinds of on-chip memory types. In other words, the shared memory2000may operate in one of a L0 memory type or a global memory type. In other words, the shared memory2000may implement two types of logical memories with one piece of hardware. If the shared memory2000is implemented in the L0 memory type, the shared memory2000may operate as a private memory for each of the first to eighth processing units160ato160h, just like the first to eighth L0 memories120ato120h. The L0 memory can operate at a relatively higher clock speed compared with the global memory, and the shared memory2000may also use a relatively higher clock speed when operating in the L0 memory type. If the shared memory2000is implemented in the global memory type, the shared memory2000may operate as a common memory used by the first processing unit160aand the second processing unit160btogether. In this case, the shared memory2000may be shared not only by the first to eighth processing units160ato160hbut also by the first to eighth L0 memories120ato120h. The global memory may generally use a lower clock compared with the L0 memory, but is not limited thereto. When the shared memory2000operates in the global memory type, the first to eighth processing units160ato160hmay share the shared memory2000. In this case, the shared memory2000may be connected to the volatile memory32ofFIG.2via the global interconnection6000and may also operate as a buffer for the volatile memory32. 
At least part of the shared memory2000may operate in the L0 memory type, and the rest may operate in the global memory type. In other words, the entire shared memory2000may operate in the L0 memory type, or the entire shared memory2000may operate in the global memory type. Alternatively, part of the shared memory2000may operate in the L0 memory type, and the rest may operate in the global memory type. FIG.14is a block diagram showing an example of memory reconstruction of a neural processing system in accordance with some embodiments of the disclosure. With reference toFIGS.13and14, first, third, fifth, and seventh dedicated areas AE1, AE3, AE5, and AE7for each of the first, third, fifth, and seventh processing units160a,160c,160e, and160gmay include only the first, third, fifth, and seventh L0 memories120a,120c,120e, and120g, respectively. Further, second, fourth, sixth, and eighth dedicated areas AE2, AE4, AE6, and AE8for each of the second, fourth, sixth, and eighth processing units160b,160d,160f, and160hmay include second, fourth, sixth, and eighth L0 memories120b,120d,120f, and120h, respectively. In addition, the second, fourth, sixth, and eighth dedicated areas AE2, AE4, AE6, and AE8may include the second, fourth, sixth, and eighth memory units2100b,2100d,2100f, and2100h. The first, third, fifth, and seventh memory units2100a,2100c,2100e, and2100gof the shared memory2000may be used as a common area AC. The common area AC may be a memory shared by the first to eighth processing units160ato160h. The second dedicated area AE2may include a second L0 memory120band a second memory unit2100b. The second dedicated area AE2may be an area in which the second L0 memory120band the second memory unit2100bthat are separated hardware-wise operate in the same manner and operate logically as one L0 memory. The fourth, sixth, and eighth dedicated areas AE4, AE6, and AE8may also operate in the same manner as the second dedicated area AE2. The shared memory2000in accordance with the embodiment may convert an area corresponding to each processing unit into a logical L0 memory and a logical global memory of an optimized ratio and may use them. The shared memory2000may perform the adjustment of this ratio at runtime. That is, each processing unit may perform the same task in some cases, but may perform different tasks in other cases as well. In this case, the amount of the L0 memory and the amount of the global memory required for the tasks carried out by each processing unit are inevitably different each time. Accordingly, if the composition ratio of the L0 memory and the shared memory is fixedly set as in the conventional on-chip memory, there may occur inefficiency due to the calculation tasks assigned to each processing unit. Therefore, the shared memory2000of the neural processing device in accordance with the embodiment may set an optimal ratio of the L0 memory and the global memory according to calculation tasks during the runtime, and may enhance the efficiency and speed of calculation. FIG.15is an enlarged block diagram of a portion A ofFIG.13. With reference toFIGS.13and15, the shared memory2000may include a first L0 memory controller122_1a, a second L0 memory controller122_1b, a fifth L0 memory controller122_1e, a sixth L0 memory controller122_1f, the first to eighth memory units2100ato2100h, and a global controller2200. Other L0 memory controllers not shown may also be included in the embodiment, but the description thereof will be omitted for convenience. 
The first L0 memory controller122_1a, the second L0 memory controller122_1b, the fifth L0 memory controller122_1e, the sixth L0 memory controller122_1f, and the global controller2200may be referred to respectively as a first L0 memory controller circuit, a second L0 memory controller circuit, a fifth L0 memory controller circuit, a sixth L0 memory controller circuit, and a global controller circuit. However, for the sake of convenience, the terms are respectively unified as a first L0 memory controller, a second L0 memory controller, a fifth L0 memory controller, a sixth L0 memory controller, and a global controller. In addition, the first L0 memory controller122_1a, the second L0 memory controller122_1b, the fifth L0 memory controller122_1e, the sixth L0 memory controller122_1f, and the global controller2200may each be implemented as a circuit or circuitry. The first L0 memory controller122_1amay control the first L0 memory120a. In addition, the first L0 memory controller122_1amay control the first memory unit2100a. Specifically, when the first memory unit2100ais implemented in a logical L0 memory type, the control by the first L0 memory controller122_1amay be performed on the first memory unit2100a. The second L0 memory controller122_1bmay control the second L0 memory120b. Further, the second L0 memory controller122_1bmay control the second memory unit2100b. In other words, when the second memory unit2100bis implemented in the logical L0 memory type, the control by the second L0 memory controller122_1bmay be performed on the second memory unit2100b. The fifth L0 memory controller122_1emay control the fifth L0 memory120e. Further, the fifth L0 memory controller122_1emay control the fifth memory unit2100e. In other words, when the fifth memory unit2100eis implemented in the logical L0 memory type, the control by the fifth L0 memory controller122_1emay be performed on the fifth memory unit2100e. The sixth L0 memory controller122_1fmay control the sixth L0 memory120f. Further, the sixth L0 memory controller122_1fmay control the sixth memory unit2100f. In other words, when the sixth memory unit2100fis implemented in the logical L0 memory type, the control by the sixth L0 memory controller122_1fmay be performed on the sixth memory unit2100f. The global controller2200may control all of the first to eighth memory units2100ato2100h. Specifically, the global controller2200may control the first memory unit2100ato the eighth memory unit2100hwhen the first to eighth memory units2100ato2100heach operate logically in the global memory type (i.e., when they do not operate logically in the L0 memory type). In other words, the first to eighth memory units2100ato2100hmay be controlled by the first to eighth L0 memory controllers122_1ato122_1h, respectively, or may be controlled by the global controller2200, depending on which type of memory they are logically implemented as. If the L0 memory controllers including the first, second, fifth, and sixth L0 memory controllers122_1a,122_1b,122_1e, and122_1fcontrol the first to eighth memory units2100ato2100h, respectively, the first to eighth L0 memory controllers122_1ato122_1hcontrol the first to eighth memory units2100ato2100hin the same manner as the first to eighth L0 memories120ato120h, and thus, can control them as the private memory of the first to eighth processing units160ato160h. Accordingly, the first to eighth memory units2100ato2100hmay operate at clock frequencies corresponding to the clock frequencies of the first to eighth processing units160ato160h.
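The selection between the per-unit L0 memory controllers and the global controller can be summarized with a small dispatch function. This is an illustrative software analogy only; the class names and the handles/controller_for helpers are hypothetical and do not correspond to a disclosed interface.

```python
from dataclasses import dataclass

@dataclass
class MemoryUnit:
    index: int
    memory_type: str   # "L0" or "GLOBAL"

class L0MemoryController:
    """Hypothetical stand-in for controllers 122_1a to 122_1h."""
    def __init__(self, index):
        self.index = index
    def handles(self, unit: MemoryUnit) -> bool:
        # An L0 memory controller only drives the memory unit of its own
        # processing unit, and only while that unit is in the logical L0 type.
        return unit.memory_type == "L0" and unit.index == self.index

class GlobalController:
    """Hypothetical stand-in for the global controller 2200."""
    def handles(self, unit: MemoryUnit) -> bool:
        return unit.memory_type == "GLOBAL"

def controller_for(unit, l0_controllers, global_controller):
    for ctrl in l0_controllers:
        if ctrl.handles(unit):
            return ctrl
    return global_controller

l0_ctrls = [L0MemoryController(i) for i in range(8)]
gc = GlobalController()
print(type(controller_for(MemoryUnit(0, "L0"), l0_ctrls, gc)).__name__)      # L0MemoryController
print(type(controller_for(MemoryUnit(0, "GLOBAL"), l0_ctrls, gc)).__name__)  # GlobalController
```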
The L0 memory controllers including the first L0 memory controller122_1a, the second L0 memory controller122_1b, the fifth L0 memory controller122_1e, and the sixth L0 memory controller122_1fmay each include the LSU110ofFIG.7. If the global controller2200controls at least one of the first to eighth memory units2100ato2100h, then the global controller2200may control the first to eighth memory units2100ato2100has the global memory of the first to eighth processing units160ato160h, respectively. Accordingly, at least one of the first to eighth memory units2100ato2100hmay operate at a clock frequency independent of the clock frequencies of the first to eighth processing units160ato160h, respectively. In some embodiments, if the global controller2200controls the i-th memory unit among the first to eighth memory units2100ato2100h, the global controller2200may control the i-th memory unit as the global memory of the i-th processing unit, and the i-th memory unit may operate at a clock frequency independent of the clock frequency of the i-th processing unit. However, the embodiment is not limited thereto. The global controller2200may connect the first to eighth memory units2100ato2100hto the global interconnection6000as shown and described in accordance withFIG.3. The first to eighth memory units2100ato2100hmay exchange data with an off-chip memory as shown and described in accordance withFIG.2via the control of the global controller2200or may respectively exchange data with the first to eighth L0 memories120ato120h. Each of the first to eighth memory units2100ato2100hmay include at least one memory bank. The first memory unit2100amay include at least one first memory bank2110a. The first memory banks2110amay be one or more areas obtained by dividing the first memory unit2100ainto certain sizes. The first memory banks2110amay all be memory devices of the same size. However, the embodiment is not limited thereto.FIG.15shows that four memory banks are included in one memory unit. Similarly, the second, fifth, and sixth memory units2100b,2100e, and2100fmay include at least one second, fifth, and sixth memory banks2110b,2110e, and2110f, respectively. In the following, the description will be made based on the first memory banks2110aand the fifth memory banks2110e, which may be the same as other memory banks including the second and sixth memory banks2110band2110f. The first memory banks2110amay each operate logically in the L0 memory type or operate logically in the global memory type. In this case, the first memory banks2110amay operate independently of the other memory banks in the first memory unit2100a. However, the embodiment is not limited thereto. If each memory bank operates independently, the first memory unit2100amay include a first area operating in the same manner as the first L0 memory120aand a second area operating in a different manner from the first L0 memory120a. In this case, the first area and the second area do not necessarily coexist, but any one area may take up the entire first memory unit2100a. Likewise, the second memory unit2100bmay include a third area operating in the same manner as the second L0 memory120band a fourth area operating in a different manner from the second L0 memory120b. In this case, the third area and the fourth area do not necessarily coexist, and any one area may take up the entire second memory unit2100b. In this case, the ratio of the first area to the second area may be different from the ratio of the third area to the fourth area.
However, the embodiment is not limited thereto. Therefore, the ratio of the first area to the second area may be the same as the ratio of the third area to the fourth area. In other words, the memory composition ratio in each memory unit may vary as desired. In general, in the case of the conventional system-on-chip, the on-chip memory except for high-speed L0 memory was often composed of high-density, low-power SRAM. This is because SRAM has high efficiency in terms of chip area and power consumption relative to required capacity. However, with the conventional on-chip memory, the processing speed inevitably slowed down significantly whenever a task required more data, more quickly, than the predetermined capacity of the L0 memory could supply, and, even when the need for the global memory was not great, there was no way to utilize the remaining global memory, resulting in inefficiency. On the other hand, the shared memory2000in accordance with some embodiments may be controlled selectively by any one of the two controllers depending on the case. In the case depicted, the shared memory2000may be controlled not only as a whole by a determined one of the two controllers but also independently for each memory unit or each memory bank. Through this, the shared memory2000in accordance with the embodiment can obtain an optimal memory composition ratio according to calculation tasks during the runtime and can perform faster and more efficient calculation tasks. In the case of a processing unit specialized in artificial intelligence, the required sizes of L0 memory and global memory may vary for each particular application. Moreover, even for the same application, the required sizes of L0 memory and global memory may vary for each layer when a deep learning network is used. In the shared memory2000, in accordance with the embodiment, the composition ratio of the memory can be changed during runtime even when calculation steps change according to each layer, making fast and efficient deep learning tasks possible. FIG.16is a diagram for illustrating the first memory bank ofFIG.15in detail. AlthoughFIG.16illustrates the first memory bank2110a, other memory banks may also have the same structure as the first memory bank2110a. Referring toFIG.16, the first memory bank2110amay include a cell array Ca, a bank controller Bc, a first path unit P1, and a second path unit P2. In this case, the bank controller Bc, the first path unit P1, and the second path unit P2may be referred to respectively as a bank controller circuit, a first path unit circuit, and a second path unit circuit. However, for the sake of convenience, the terms are respectively unified as a bank controller, a first path unit, and a second path unit. In addition, the bank controller Bc, the first path unit P1, and the second path unit P2may each be implemented as a circuit or circuitry. The cell array Ca may include a plurality of memory devices (cells) therein. In the cell array Ca, the plurality of memory devices may be arranged in a lattice structure. The cell array Ca may be, for example, a SRAM (static random-access memory) cell array. The bank controller Bc may control the cell array Ca. The bank controller Bc may determine whether the cell array Ca operates in the L0 memory type or in the global memory type, and may control the cell array Ca according to the determined memory type.
Specifically, the bank controller Bc may determine whether to transmit and receive data in the direction of the first path unit P1or to transmit and receive data in the direction of the second path unit P2during runtime. The bank controller Bc may determine a data transmission and reception direction according to a path control signal Spc. The path control signal Spc may be generated by a pre-designed device driver or compiler. The path control signal Spc may be generated according to the characteristics of calculation tasks. Alternatively, the path control signal Spc may be generated by an input received from a user. In other words, the user may directly apply an input to the path control signal Spc in order to select optimal memory composition ratio. The bank controller Bc may determine a path along which the data stored in the cell array Ca are transmitted and received via the path control signal Spc. The exchange interface of data may be changed as the bank controller Bc determines the path along which the data are transmitted and received. In other words, a first interface may be used when the bank controller Bc exchanges data with the first path unit P1, and a second interface may be used when the bank controller Bc exchanges data with the second path unit P2. In this case, the first interface and the second interface may be different from each other. Address systems in which data are stored may vary as well. In other words, if a particular interface is selected, then read and write operations may be performed in an address system corresponding thereto. The bank controller Bc may operate at a particular clock frequency. For example, if the cell array Ca is an SRAM cell array, the bank controller Bc may operate at the operating clock frequency of a general SRAM. The first path unit P1may be connected to the bank controller Bc. The first path unit P1may directly exchange the data of the cell array Ca with the first processing unit160a. In this case, “directly” may mean being exchanged with each other without going through the global interconnection6000. In other words, the first processing unit160amay exchange data directly with the first L0 memory120a, and the first processing unit160amay exchange data via the first path unit P1when the shared memory2000is implemented logically in the L0 memory type. The first path unit P1may include L0 memory controllers including the first L0 memory controller122_1aand the second L0 memory controller122_1b, as shown inFIG.15. The first path unit P1may form a multi-cycle sync-path. In other words, the operating clock frequency of the first path unit P1may be the same as the operating clock frequency of the first processing unit160a. The first L0 memory120amay quickly exchange data at the same clock frequency as the operating clock frequency of the first processing unit160ain order to quickly exchange data at the same speed as the operation of the first processing unit160a. Likewise, the first path unit P1may also operate at the same clock frequency as the operating clock frequency of the first processing unit160a. In this case, the operating clock frequency of the first path unit P1may be multiples of the operating clock frequency of the bank controller Bc. In this case, a clock domain crossing (CDC) operation for synchronizing the clocks between the bank controller Bc and the first path unit P1is not required separately. Thus, a delay of data transmission may not occur. Accordingly, faster and more efficient data exchange can be possible. 
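A compact way to picture this behavior is a model that routes reads to one of two paths according to a path control signal, applies a different address system on each path, and checks whether a clock-domain crossing would be needed for a given pair of clocks. Everything below (the class name, the base-offset address translation, the cdc_required helper, and the example frequencies) is an illustrative assumption layered on the behavior described above, not the disclosed hardware.

```python
def cdc_required(path_clock_hz, bank_clock_hz, tol=1e-9):
    # No separate clock-domain-crossing stage is needed when the path clock is an
    # integer multiple of the bank-controller clock (the multi-cycle sync-path case).
    ratio = path_clock_hz / bank_clock_hz
    return round(ratio) < 1 or abs(ratio - round(ratio)) > tol

class BankController:
    """Illustrative model of the bank controller Bc with two selectable paths."""
    P1_BASE, P2_BASE = 0x0000, 0x8000   # stand-ins for the two address systems

    def __init__(self, cells):
        self.cells = cells               # stands in for the cell array Ca
        self.path = "P1"                 # default: direct path toward the processing unit

    def apply_path_control(self, spc):
        # The path control signal may come from a device driver, a compiler, or a user.
        self.path = "P1" if spc == "L0" else "P2"

    def read(self, address):
        base = self.P1_BASE if self.path == "P1" else self.P2_BASE
        return self.cells[base + address]

cells = {i: i for i in range(16)}
cells.update({0x8000 + i: 100 + i for i in range(16)})
bc = BankController(cells)
bc.apply_path_control("L0");     print(bc.read(4))   # first path, first address system
bc.apply_path_control("GLOBAL"); print(bc.read(4))   # second path, second address system
print(cdc_required(1.5e9, 750e6))   # False: synchronous multi-cycle path
print(cdc_required(1.0e9, 750e6))   # True: asynchronous path needs CDC
```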
In the embodiment shown inFIG.16, an operating clock frequency of the first path unit P1may be 1.5 GHz, as an example. This may be twice the frequency of 750 MHz of the bank controller Bc. However, the embodiment is not limited thereto, and any operating clock frequency of the first path unit P1may be possible as long as the first path unit P1operates at integer multiples of the clock frequency of the bank controller Bc. The second path unit P2may be connected to the bank controller Bc. The second path unit P2may exchange the data of the cell array Ca with the first processing unit160anot directly but via the global interconnection6000. In other words, the first processing unit160amay exchange data with the cell array Ca via the global interconnection6000and the second path unit P2. In this case, the cell array Ca may exchange data not only with the first processing unit160abut also with other processing units. In other words, the second path unit P2may be a data exchange path between the cell array Ca and all the processing units when the first memory bank2110ais implemented logically in the global memory type. The second path unit P2may include the global controller2200ofFIG.15. The second path unit P2may form an asynchronous path or Async-Path. The operating clock frequency of the second path unit P2may be the same as the operating clock frequency of the global interconnection6000. Likewise, the second path unit P2may also operate at the same clock frequency as the operating clock frequency of the global interconnection6000. In the case of the embodiment as shown inFIG.16, the operating clock frequency of the second path unit P2may not be synchronized with the operating clock frequency of the bank controller Bc. In this case, the clock domain crossing (CDC) operation for synchronizing the clocks between the bank controller Bc and the second path unit P2may be required. If the operating clock frequency of the bank controller Bc and the operating clock frequency of the second path unit P2are not synchronized with each other, the degree of freedom in the design of the clock domain may be relatively high. Therefore, the difficulty of hardware design is decreased, thereby making it possible to more easily derive the desired hardware operation. The bank controller Bc may use different address systems in the case of exchanging data via the first path unit P1and in the case of exchanging data via the second path unit P2. In other words, the bank controller Bc may use a first address system if exchanging data via the first path unit P1and a second address system if exchanging data via the second path unit P2. In this case, the first address system and the second address system may be different from each other. A bank controller Bc is not necessarily required for each memory bank. In other words, a bank controller Bc may not be used to schedule, but instead serves to transfer signals, and thus, is not a required component for each memory bank having two ports. Therefore, one bank controller Bc can be operably coupled to control multiple memory banks. The multiple memory banks may operate independently even if they are controlled by the bank controller Bc. However, the embodiment is not limited thereto. As a matter of course, the bank controller Bc may exist for each memory bank. In this case, the bank controller Bc may control each memory bank individually. Referring toFIG.15andFIG.16, if the first memory unit2100aexchanges data via the first path unit P1, the first address system may be used. 
If the first memory unit2100aexchanges data via the second path unit P2, the second address system may be used. Similarly, if the second memory unit2100bexchanges data via the first path unit P1, a third address system may be used. If the second memory unit2100bexchanges data via the second path unit P2, the second address system may be used. In this case, the first address system and the third address system may be the same as each other. However, the embodiment is not limited thereto. The first address system and the third address system may each be used exclusively for the first processing unit160aand the second processing unit160b, respectively. The second address system may be commonly applied to the first processing unit160aand the second processing unit160b. InFIG.16, the operating clock frequency of the second path unit P2may operate at 1 GHz, as an example. This may be a frequency that is not synchronized with the operating clock frequency of 750 MHz of the bank controller Bc. In other words, the operating clock frequency of the second path unit P2may be freely set without being dependent on the operating clock frequency of the bank controller Bc at all. A generic global memory has used slow SRAM (e.g., 750 MHz) and a global interconnection (e.g., 1 GHz) faster than that, inevitably resulting in delays due to the CDC operation. On the other hand, the shared memory2000in accordance with some embodiments has room to use the first path unit P1in addition to the second path unit P2, thereby making it possible to avoid delays resulting from the CDC operation. Furthermore, in the generic global memory, a plurality of processing units use one global interconnection6000, and thus, when an amount of data transfer occurs at the same time, the decrease in the overall processing speed is likely to occur. On the other hand, the shared memory2000in accordance with some embodiments has room to use the first path unit P1in addition to the second path unit P2, thereby making it possible to achieve the effect of properly distributing the data throughput that could be concentrated on the global controller2200as well. FIG.17is a block diagram for illustrating a software hierarchy of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIG.17, the software hierarchy of the neural processing device in accordance with some embodiments may include a deep learning (DL) framework10000, a compiler stack20000, and a back-end module30000. The DL framework10000may mean a framework for a deep learning model network used by a user. For example, a neural network that has finished training may be generated using a program such as TensorFlow or PyTorch. The compiler stack20000may include an adaptation layer21000, a compute library22000, a front-end compiler23000, a back-end compiler24000, and a runtime driver25000. The adaptation layer21000may be a layer in contact with the DL framework10000. The adaptation layer21000may quantize a neural network model of a user generated by the DL framework10000and modify graphs. In addition, the adaptation layer21000may convert a type of model into a required type. The front-end compiler23000may convert various neural network models and graphs transferred from the adaptation layer21000into a constant intermediate representation (IR). The converted IR may be a preset representation that is easy to handle later by the back-end compiler24000. 
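The flow from the user's model to a hardware binary can be pictured as a simple function pipeline. The sketch below is a schematic composition only; the argument names and the toy stand-ins are assumptions, and it does not reflect the actual interfaces of the adaptation layer, the compilers, or the runtime driver.

```python
def compile_model(model, quantize, lower_to_ir, optimize_graph, codegen):
    """Illustrative compiler-stack pipeline (hypothetical arguments): the
    adaptation layer quantizes the user's model and adapts its graph, the
    front-end compiler lowers it to an intermediate representation (IR), and
    the back-end compiler optimizes the IR and emits a hardware binary."""
    adapted = quantize(model)          # adaptation layer
    ir = lower_to_ir(adapted)          # front-end compiler
    ir = optimize_graph(ir)            # graph-level optimization on the IR
    return codegen(ir)                 # back-end compiler -> binary for the runtime driver

# Toy stand-ins so the pipeline can be executed end to end.
binary = compile_model(
    model={"layers": ["conv", "relu"]},
    quantize=lambda m: {**m, "dtype": "int8"},
    lower_to_ir=lambda m: [("op", layer) for layer in m["layers"]],
    optimize_graph=lambda ir: ir,
    codegen=lambda ir: bytes(str(ir), "utf-8"),
)
print(len(binary), "bytes of (toy) binary")
```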
The optimization that can be done in advance in the graph level may be performed on such an IR of the front-end compiler23000. In addition, the front-end compiler23000may finally generate the IR through the task of converting it into a layout optimized for hardware. The back-end compiler24000optimizes the IR converted by the front-end compiler23000and converts it into a binary file, enabling it to be used by the runtime driver. The back-end compiler24000may generate an optimized code by dividing a job at a scale that fits the details of hardware. The compute library22000may store template operations designed in a form suitable for hardware among various operations. The compute library22000provides the back-end compiler24000with multiple template operations required by hardware, allowing the optimized code to be generated. The runtime driver25000may continuously perform monitoring during driving, thereby making it possible to drive the neural network device in accordance with some embodiments. Specifically, it may be responsible for the execution of an interface of the neural network device. The back-end module30000may include an ASIC (application-specific integrated circuit)31000, an FPGA (field-programmable gate array)32000, and a C-model33000. The ASIC31000may refer to a hardware chip determined according to a predetermined design method. The FPGA32000may be a programmable hardware chip. The C-model33000may refer to a model implemented by simulating hardware on software. The back-end module30000may perform various tasks and derive results by using the binary code generated through the compiler stack20000. FIG.18is a diagram for illustrating the back-end compiler ofFIG.17in detail. Referring toFIG.18, the back-end compiler24000may include a job scheduler JS. The job scheduler JS may determine processing order and timing of jobs to be processed by an operating system of the neural processing device1. Although the job scheduler JS may be implemented as software, the embodiment is not limited thereto. That is, the job scheduler JS may also be implemented as a hardware module. Further, the job scheduler JS may also be located in a layer other than the back-end compiler24000. In this case, the job scheduler JS may be named a job scheduler circuit, but for the sake of convenience, the term is unified as a job scheduler. Moreover, the job scheduler JS may be implemented as a circuit or circuitry. FIG.19is a conceptual diagram for illustrating pass-through and job scheduling of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIGS.1and19, the neural processing device1in accordance with some embodiments of the disclosure may include an address space ID (ASID) manager AM, a plurality of entities E2, E3, E4, the job scheduler JS, and a command queue CQ. The ASID manager AM, the plurality of entities E2, E3, E4, and the command queue CQ may be respectively named as an ASID manager circuit, a plurality of entity circuits, and a command queue circuit, but for the sake of convenience, the terms are unified as the ASID manager, the plurality of entities, and the command queue. Further, the ASID manager AM, the plurality of entities E2, E3, E4, and the command queue CQ may each be implemented as a circuit or circuitry. In this case, at least one context CTX0to CTX4may be executed by the neural processing device1. The at least one context CTX0to CTX4may refer to a kind of programs executed by the neural processing device1. 
AlthoughFIG.19shows, for example, five contexts CTX0to CTX4, the embodiment is not limited thereto. In other words, the number of contexts can vary as desired. The at least one context CTX0to CTX4may include, for example, a first context CTX0, a second context CTX1, a third context CTX2, a fourth context CTX3, and a fifth context CTX4. ASIDs may be managed via the ASID manager AM. ASIDs may exist in correspondence to a number of contexts that the neural processing device1can manage. That is, if the neural processing device1can manage only three contexts due to the limitations of the device, up to three ASIDs may exist. In contrast, since the number of contexts that can be managed by the software of the neural processing device1, i.e., the operating system, is nearly infinite, a method of reducing a number that can be managed simultaneously by assigning ASIDs may be necessary to schedule these contexts. The number of ASIDs may be determined according to hardware conditions such as a number of registers held by the neural processing device1. Hereinafter, the embodiment will be described on the assumption that the number of ASIDs is two. The ASID manager AM may receive ASID allocation requests from the at least one context CTX0to CTX4, respectively. In this case, the ASID allocation requests may be received in sequence according to the time points at which jobs of the respective contexts are created. The ASID manager AM may allocate at least one ASID to each context. In this case, the ASID manager AM may be a software module implemented by the neural processing device1, but the embodiment is not limited thereto. Since the number of contexts may be greater than the number of ASIDs, there may be contexts to which ASIDs are allocated and contexts to which ASIDs are not allocated. At this time, in the case of ASIDs that have already been allocated by other contexts, the ASIDs may be bound and not be allocated to new contexts. If binding of an ASID is ended, i.e., unbound, that ASID may be allocated to a new context. InFIG.19, by way of example, the first context CTX0may be allocated a first ASID ASID0, and the second context CTX1may be allocated a second ASID ASID1. The third context CTX2, the fourth context CTX3, and the fifth context CTX4may not be allocated an ASID. The ASID manager AM may allocate the ASIDs in the order in which requests are received from each context. However, the embodiment is not limited thereto. Depending on whether an ASID is allocated, each context may proceed to a pass-through route Rpt or a non-pass-through route Rnpt. That is, contexts that have been allocated an ASID may proceed to the pass-through route Rpt, and contexts that have not been allocated an ASID may proceed to the non-pass-through route Rnpt. InFIG.19, the first context CTX0and the second context CTX1may proceed to the pass-through route Rpt, and the third context CTX2, the fourth context CTX3, and the fifth context CTX4may proceed to the non-pass-through route Rnpt. The pass-through route Rpt may proceed directly to the command queue CQ without going through the entities and the job scheduler JS. The command queue CQ may be a queue for the neural processing device1to execute jobs. The command queue CQ may receive jobs in sequence and store the jobs as standby jobs. The standby jobs stored in the command queue CQ may be executed in sequence by the neural processing device1. 
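The allocation decision that routes a context to the pass-through or non-pass-through route can be modeled with a small pool of ASIDs. The class below is a hypothetical software analogy whose name and methods are not taken from the disclosure; it only captures the rule that a context receives an ASID while one is unbound and otherwise must wait.

```python
from collections import deque

class AsidManager:
    """Hypothetical model of the ASID manager AM with a fixed pool of ASIDs."""
    def __init__(self, num_asids):
        self.free = deque(range(num_asids))   # unbound ASIDs
        self.bound = {}                       # context -> bound ASID

    def request(self, context):
        # Returns an ASID (pass-through route) or None (non-pass-through route).
        if self.free:
            asid = self.free.popleft()
            self.bound[context] = asid
            return asid
        return None

    def release(self, context):
        # Unbinding an ASID makes it available to a new context.
        self.free.append(self.bound.pop(context))

am = AsidManager(num_asids=2)
for ctx in ["CTX0", "CTX1", "CTX2"]:
    asid = am.request(ctx)
    route = "pass-through" if asid is not None else "non-pass-through"
    print(ctx, "->", route)   # CTX0 and CTX1 pass through; CTX2 must wait
```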
FIG.20is a conceptual diagram for illustrating job execution according to ASIDs of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIGS.19and20, a latency that may occur when proceeding to the pass-through route Rpt can be found. The second context CTX1and the job scheduler JS may be arranged in the software domain SW. That is, the software domain SW may be an operating part implemented in software. The hardware domain HW may be an operating part implemented by actual physical hardware. The second context CTX1may have a pass-through authority in the software domain SW and transmit jobs directly to the command queue CQ. Accordingly, the job scheduler JS may not need to schedule jobs. Accordingly, a run job RJ can be executed immediately in the hardware domain HW (if there is no previous job PJ). If there exists a previous job PJ, a latency by an amount of a waiting time Tw may occur until the previous job PJ ends. However, this waiting time Tw may not be an added overhead because it is not a latency due to job scheduling but an inevitable part of job execution. Referring again toFIG.19, in the case of proceeding to the non-pass-through route Rnpt, the third context CTX2, the fourth context CTX3, and the fifth context CTX4may transmit and store jobs to the third entity E2, the fourth entity E3, and the fifth entity E4, respectively. Entities may exist for each context. However, the embodiment is not limited thereto. The entities may be buffer memories that sequentially store the jobs of each context. The job scheduler JS may schedule jobs of the third entity E2, the fourth entity E3, and the fifth entity E4. The job scheduler JS may schedule the jobs of the third entity E2, the fourth entity E3, and the fifth entity E4, and make sync requests requesting ASID allocation from the ASID manager AM pending in sequence. In this case, the sync requests for requesting ASID allocation from the ASID manager AM may be what is retrying allocation since a pass-through authority was not granted at the initial allocation request. A sync request may be transmitted for each context, and one or more jobs may be associated with one sync request. Each sync request may be a request for one context. The job scheduler JS may transmit each job to the command queue CQ once ASIDs are allocated according to the sync requests. The command queue CQ may store both jobs transmitted from each context by the pass-through authority and jobs transmitted with ASIDs allocated by the job scheduler JS, and execute jobs in sequence. FIG.21is a conceptual diagram for illustrating job execution according to ASIDs of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIGS.19and21, the third context CTX2may not be allocated an ASID by the ASID manager AM and thus may not have a pass-through authority. Therefore, the job of the third context CTX2may proceed to the non-pass-through route Rnpt by the job scheduler JS. First, the third context CTX2may perform a push job PsJ. The push job PsJ may mean that the third context CTX2provides and stores the job to the third entity E2. The job scheduler JS may not be executed immediately at the end time point of the push job PsJ. In other words, there may arise a latency until the job scheduler JS is woken up by the push job PsJ. Accordingly, a first scheduling overhead OH1may occur. 
That is, the first scheduling overhead OH1may mean a latency for the job scheduler JS to take over the progress in the third context CTX2. The job scheduler JS may perform an emit job EJ after the first scheduling overhead OH1has passed. The emit job EJ may be a task of allocating an ASID to the third context CTX2and providing the job to the command queue CQ. The hardware domain HW, i.e., the neural processing device1may not be executed immediately at the end time point of the emit job EJ. In other words, there may arise a latency until the neural processing device1is woken up by the emit job EJ. Accordingly, a second scheduling overhead OH2may occur. That is, the second scheduling overhead OH2may mean a latency for the neural processing device1to take over the progress in the job scheduler JS. If the previous job PJ is in progress, a latency may occur due to the inevitable part of job execution, such as the waiting time Tw inFIG.20, and accordingly, at least part of the second scheduling overhead OH2may not be revealed. However, if the previous job PJ has already been completed, the second scheduling overhead OH2may be revealed, and thus inefficiency may arise. Therefore, the neural processing device1in accordance with some embodiments of the disclosure can grant pass-through authorities as many as the number of ASIDs held, and thus eliminate at least the first scheduling overhead OH1or the second scheduling overhead OH2. Through this, it is possible to reduce the latency of job scheduling and maximize the performance and speed of the entire device. FIG.22is a conceptual diagram for illustrating ASID allocation of an ASID manager of a neural processing device in accordance with some embodiments of the disclosure, andFIG.23is a conceptual diagram for illustrating ASID unbinding of a neural processing device in accordance with some embodiments of the disclosure.FIG.24is a conceptual diagram for illustrating allocation of an LRU ASID of a neural processing device in accordance with some embodiments of the disclosure, andFIG.25is a conceptual diagram for illustrating allocation of an LRU ASID of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIGS.19and22, the ASID manager AM may manage three ASID IDs by way of example. That is, the ASID manager can manage the first ASID ASID0, the second ASID ASID1, and the third ASID ASID2. The ASID manager AM may allocate the first ASID ASID0to the second context CTX1and the second ASID ASID1to the third context CTX2. Further, the ASID manager AM may allocate the third ASID ASID2to the first context CTX0. The fourth context CTX3and the fifth context CTX4to which ASIDs have not been allocated may transmit a first sync request S1and a second sync request S2to the ASID manager AM, respectively. At this time, the first sync request S1and the second sync request S2may be transmitted in sequence by the job scheduler JS. That is, the first sync request S1may be received by the ASID manager AM before the second sync request S2. At this time, the jobs performed by the fourth context CTX3may be one or more. That is, a first job J1and a second job J2may be requested by the fourth context CTX3to be processed. Referring toFIGS.19and23, the first ASID ASID0bound (or, allocated) to the third context CTX2may be unbound again once all jobs of the third context CTX2are processed. Accordingly, the first ASID ASID0may be in a state of being not connected to any context, i.e., an unbound ASID. 
The ASID manager AM may allocate the unbound ASID according to sync requests on standby. At this time, an order of allocation may be processed according to the input order by the FIFS (first in first served) method. That is, inFIG.23, the fourth context CTX3of the first sync request S1may be allocated an ASID first, and then the fifth context CTX4of the second sync request S2may be allocated an ASID. Referring toFIGS.19,24, and25, the ASID manager AM may select unbound ASIDs and choose the least recently used (LRU) ASID among the unbound ASIDs. The LRU ASID may be the oldest previously used ASID. However, the embodiment is not limited thereto. The embodiment performs the allocation of ASIDs by the LRU method, and thus can prevent bias in the allocation of particular ASIDs and perform uniform use. Through this, the hardware associated with the ASIDs can be used uniformly. In addition, when the ASID allocation is performed by the LRU method, it is possible to increase a probability that the same ASID is allocated to the same context again. If the same ASID is allocated to the same context again, the task can be performed more efficiently. InFIGS.24and25, by way of example, the first ASID ASID0may be allocated to the fourth context CTX3as the LRU ASID. The first job J1and the second job J2of the fourth context CTX3may be transmitted to the command queue CQ by the job scheduler JS. The first job J1and the second job J2of the fourth context CTX3may be submitted to the command queue CQ by the job scheduler JS. Accordingly, the fourth context CTX3may disappear from the current pending list, and only the second sync request S2of the fifth context CTX4may be pending. FIG.26is a conceptual diagram for illustrating deep learning calculations performed by a neural processing device in accordance with some embodiments of the disclosure. Referring toFIG.26, an artificial neural network model40000is one example of a machine learning model and is a statistical learning algorithm implemented based on the structure of a biological neural network or is a structure for executing the algorithm, in machine learning technology and cognitive science. The artificial neural network model40000may represent a machine learning model having an ability to solve problems by learning to reduce the error between an accurate output corresponding to a particular input and an inferred output by repeatedly adjusting the weight of the synapse by nodes. Nodes are artificial neurons that have formed a network by combining synapses, as in a biological neural network. For example, the artificial neural network model40000may include any probabilistic model, neural network model, etc., used in artificial intelligence learning methods such as machine learning and deep learning. A neural processing device in accordance with some embodiments may implement the form of such an artificial neural network model40000and perform calculations. For example, the artificial neural network model40000may receive an input image and may output information on at least a part of an object included in the input image. The artificial neural network model40000may be implemented by a multilayer perceptron (MLP) including multilayer nodes and connections between them. An artificial neural network model40000in accordance with the embodiment may be implemented using one of various artificial neural network model structures including the MLP. 
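As a concrete, miniature example of the kind of calculation such a model involves, the sketch below trains a one-hidden-layer perceptron with plain NumPy on a toy task. It is only a schematic illustration of the forward pass, error computation, and weight adjustment; the architecture, sizes, and learning rate are arbitrary assumptions and have no relation to the model 40000 itself.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                   # input variables
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # target (correct) output

W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
lr = 0.5

for step in range(500):
    # Forward pass through input, hidden, and output layers.
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    err = out - y                                # error between inferred and correct output
    # Backward pass: adjust synaptic weights (and biases) to reduce the error.
    grad_out = err * out * (1.0 - out)
    grad_W2 = h.T @ grad_out / len(X)
    grad_b2 = grad_out.mean(axis=0, keepdims=True)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)
    grad_W1 = X.T @ grad_h / len(X)
    grad_b1 = grad_h.mean(axis=0, keepdims=True)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("mean squared error after training:", float((err ** 2).mean()))
```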
As shown inFIG.26, the artificial neural network model40000includes an input layer41000that receives input signals or data40100from the outside, an output layer44000that outputs output signals or data40200corresponding to the input data, and n (where n is a positive integer) hidden layers42000to43000that are located between the input layer41000and the output layer44000and that receive a signal from the input layer41000, extract characteristics, and forward them to the output layer44000. Here, the output layer44000receives signals from the hidden layers42000to43000and outputs them to the outside. The learning methods of the artificial neural network model40000include a supervised learning method, in which the model is trained to be optimized for solving a problem by the input of supervisory signals (correct answers), and an unsupervised learning method that does not require supervisory signals. The neural processing device may directly generate training data, through simulations, for training the artificial neural network model40000. In this way, by matching a plurality of input variables and a plurality of output variables corresponding thereto with the input layer41000and the output layer44000of the artificial neural network model40000, respectively, and adjusting the synaptic values between the nodes included in the input layer41000, the hidden layers42000to43000, and the output layer44000, training may be made to enable a correct output corresponding to a particular input to be extracted. Through such a training phase, it is possible to identify the characteristics hidden in the input variables of the artificial neural network model40000, and to adjust synaptic values (or weights) between the nodes of the artificial neural network model40000so that an error between an output variable calculated based on an input variable and a target output is reduced. FIG.27is a conceptual diagram for illustrating training and inference operations of a neural network of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIG.27, in the training phase, a large number of pieces of training data TD may be passed forward to the artificial neural network model NN and then passed backward again. Through this, the weights and biases of each node of the artificial neural network model NN are tuned, and training may be performed so that more and more accurate results can be derived. Through the training phase, the artificial neural network model NN may be converted into a trained neural network model NN_T. In the inference phase, new data ND may be inputted into the trained neural network model NN_T again. The trained neural network model NN_T may derive result data RD through the weights and biases that have already been used in the training, with the new data ND as input. For such result data RD, what training data TD were used in training and how many pieces of training data TD were used in the training phase may be important. Hereinafter, a method for job scheduling of a neural processing device in accordance with some embodiments of the disclosure will be described with reference toFIGS.19,28, and29. Any description overlapping with the embodiments described above will be omitted or simplified. FIG.28is a flowchart for illustrating a method for job scheduling of a neural processing device in accordance with some embodiments of the disclosure. Referring toFIG.28, the ASID manager receives ASID allocation requests from contexts at S100.
Specifically, referring toFIG.19, the ASID manager AM may receive ASID allocation requests from at least one context CTX0to CTX4, respectively. In this case, the ASID allocation requests may be received in sequence according to the time points at which jobs of the respective contexts are created. Referring again toFIG.28, the ASID manager may determine whether there is any unbound ASID at S200. If there is an unbound ASID, the ASID is allocated to one of the at least one context at S300. Specifically, referring toFIG.19, the ASID manager AM may allocate at least one ASID to each context. At this time, in the case of ASIDs that have already been allocated to other contexts, the ASIDs may be bound and not be allocated to new contexts. If binding of an ASID is ended, i.e., unbound, that ASID may be allocated to a new context. Depending on whether an ASID is allocated, each context may proceed to a pass-through route Rpt or a non-pass-through route Rnpt. Contexts that have been allocated an ASID may proceed to the pass-through route Rpt. Referring again toFIG.28, the contexts provide jobs directly to the command queue at S400. Specifically, referring toFIG.19, the pass-through route Rpt may proceed directly to the command queue CQ without going through the entities and the job scheduler JS. The command queue CQ may be a queue for the neural processing device1to execute jobs. The command queue CQ may receive in sequence and store jobs. The jobs stored in the command queue CQ may be executed in sequence by the neural processing device1. Referring again toFIG.28, if there is no unbound ASID in S200, jobs are provided to entities at S500. Specifically, referring toFIG.19, in the case of proceeding to the non-pass-through route Rnpt, the respective contexts may provide jobs to entities corresponding to themselves. Entities may exist for each context. However, the embodiment is not limited thereto. The entities may be buffer memories that sequentially store the jobs of each context. Referring again toFIG.28, the job scheduler transmits sync requests at S600. Specifically, referring toFIG.19, the job scheduler JS may schedule jobs of the third entity E2, the fourth entity E3, and the fifth entity E4. The job scheduler JS may schedule the jobs of the third entity E2, the fourth entity E3, and the fifth entity E4, and make sync requests requesting ASID allocation to the ASID manager AM pending in sequence. A sync request may be transmitted for each context, and one or more jobs may be associated with one sync request. Each sync request may be a request for one context. Referring again toFIG.28, the job scheduler provides jobs to the command queue at S700. Specifically, referring toFIG.19, the job scheduler JS may transmit each job to the command queue CQ once ASIDs are allocated according to the sync requests. FIG.29is a flowchart for illustrating allocating the ASIDs ofFIG.28in detail. Referring toFIG.29, unbound ASIDs are selected among at least one ASID at S310. Specifically, referring toFIGS.19and23, the first ASID ASID0bound (allocated) to the third context CTX2may be unbound again once all jobs of the third context CTX2are processed. Accordingly, the first ASID ASID0may be in a state of being not connected to any context, i.e., an unbound ASID. The ASID manager AM can select unbound ASIDs. Referring again toFIG.29, an LRU ASID that is the oldest previously used is chosen among the unbound ASIDs at S320. 
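Choosing the least recently used ASID among the unbound ones (S310 to S330) can be expressed as a one-line selection over last-use timestamps. The helper below is a hypothetical illustration; the disclosure does not specify how recency is tracked, so the timestamp dictionary is an assumption.

```python
def choose_lru_asid(unbound_asids, last_used_time):
    """Pick the unbound ASID whose last use (e.g., time of its last unbinding)
    is the oldest, mirroring the LRU policy described in the text."""
    return min(unbound_asids, key=lambda asid: last_used_time.get(asid, float("-inf")))

# ASID0 was released longest ago among the unbound ASIDs {0, 2}, so it is chosen.
print(choose_lru_asid({0, 2}, {0: 10.0, 1: 25.0, 2: 40.0}))   # -> 0
```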
Specifically, referring toFIGS.19and24, the ASID manager AM may select unbound ASIDs and choose the LRU ASID among the unbound ASIDs. The LRU ASID may be the oldest previously used ASID. However, the embodiment is not limited thereto. Referring again toFIG.29, the LRU ASID is allocated to a context at S330. Specifically, referring toFIGS.19and24, the first ASID ASID0may be allocated to the fourth context CTX3as the LRU ASID. The first job J1and the second job J2of the fourth context CTX3may be transmitted to the command queue CQ by the job scheduler JS. While the inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims. It is therefore desired that the embodiments be considered in all respects as illustrative and not restrictive, reference being made to the appended claims rather than the foregoing description to indicate the scope of the disclosure.
11861402
The figures are not to scale. Instead, the thickness of the layers or regions may be enlarged in the drawings. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and in fixed relation to each other. Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components. DETAILED DESCRIPTION Cloud computing platforms may provide many powerful capabilities for performing computing operations. However, taking advantage of these computing capabilities manually may be complex and/or require significant training and/or expertise. Prior techniques for providing cloud computing platforms and services often require customers to understand details and configurations of hardware and software resources to establish and configure the cloud computing platform. Configuring such cloud computing platforms may involve long running operations and/or complex operations (e.g., a sequence of operations including multiple steps). For example, an operation to deploy an application on a virtual machine may involve provisioning a virtual host, installing an operating system on the virtual host, and configuring an application for execution on the operating system. Each of such operations may be authorized in the context of a user session that is initialized based on a user (e.g., an administrator) providing their credentials. To prevent unauthorized access, user sessions typically have a relatively short expiration timeout (e.g., a session timeout of minutes, hours, etc.). In an example where the user session has a thirty minute timeout, a first operation (e.g., deploying the virtual host) may be allowed to complete in the context of the user session, but the session may expire prior to execution of a second operation (e.g., installing the operating system on the virtual host), resulting in a failure of the deployment. Methods and apparatus disclosed herein enable refresh of user tokens to prevent such failures. A software defined data center (SDDC) is a data storage facility implemented using an infrastructure that is virtualized and delivered as a service to one or more customers. After deployment of a SDDC, the SDDC provides policy-driven automation to enable provisioning and ongoing management of logical compute resources, storage resources, and network resources.
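One way to picture how refreshing a user token keeps a multi-step deployment alive is the toy loop below. The SessionToken class, the refresh margin, and the step list are illustrative assumptions and are not tied to any particular cloud management product's API; the sketch only shows why a refresh between long-running steps avoids the session-expiry failure described above.

```python
import time

class SessionToken:
    """Hypothetical token: carries an expiry time and can be refreshed."""
    def __init__(self, lifetime_s):
        self.lifetime_s = lifetime_s
        self.expires_at = time.monotonic() + lifetime_s

    def is_expired(self):
        return time.monotonic() >= self.expires_at

    def refresh(self):
        self.expires_at = time.monotonic() + self.lifetime_s

def run_deployment(steps, token, refresh_margin_s=5.0):
    for name, step in steps:
        # Refresh proactively if the token would expire before or during this step.
        if token.expires_at - time.monotonic() < refresh_margin_s:
            token.refresh()
        if token.is_expired():
            raise RuntimeError(f"session expired before step '{name}'")
        step()   # e.g. provision host, install OS, configure application

steps = [("provision-host", lambda: None),
         ("install-os", lambda: None),
         ("configure-app", lambda: None)]
run_deployment(steps, SessionToken(lifetime_s=30 * 60))
print("deployment completed without session expiry")
```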
For example, customers may select/create policies that cause the SDDC to deploy applications quickly based on policy-driven provisioning that dynamically matches resources to continually changing workloads and business demands. An SDDC can be deployed as a private cloud, a hybrid cloud, or a public cloud and can run on multiple hardware stacks, hypervisors, and clouds. A virtual machine (VM) is a software computer that, like a physical computer, runs an operating system and applications. An operating system installed on a virtual machine is referred to as a guest operating system. Because each virtual machine is an isolated computing environment, virtual machines (VMs) can be used as desktop or workstation environments, as testing environments, to consolidate server applications, etc. Virtual machines can run on hosts or clusters. The same host can run a plurality of VMs, for example. As disclosed in detail herein, methods and apparatus disclosed herein enable automatic refresh of tokens used in deployment, configuration, and management of SDDCs and virtual machine resources in cloud computing platforms. The improvements to cloud management systems (e.g., management systems from VMware® such as the vCloud Automation Center™ (vCAC) from VMware®, the vRealize® Automation Cloud Automation Software from VMware®, or management systems from any other entity), interfaces, portals, etc. disclosed herein may be utilized individually and/or in any combination. For example, all or a subset of the described improvements may be utilized. As used herein, availability refers to the level of redundancy required to provide continuous operation expected for the workload domain. As used herein, performance refers to the computer processing unit (CPU) operating speeds (e.g., CPU gigahertz (GHz)), memory (e.g., gigabytes (GB) of random access memory (RAM)), mass storage (e.g., GB hard drive disk (HDD), GB solid state drive (SSD)), and power capabilities of a workload domain. As used herein, capacity refers to the aggregate number of resources (e.g., aggregate storage, aggregate CPU, etc.) across all servers associated with a cluster and/or a workload domain. In examples disclosed herein, the number of resources (e.g., capacity) for a workload domain is determined based on the redundancy, the CPU operating speed, the memory, the storage, the security, and/or the power requirements selected by a user. For example, more resources are required for a workload domain as the user-selected requirements increase (e.g., higher redundancy, CPU speed, memory, storage, security, and/or power options require more resources than lower redundancy, CPU speed, memory, storage, security, and/or power options). Example Virtualization Environments Many different types of virtualization environments exist. Three example types of virtualization environments are: full virtualization, paravirtualization, and operating system virtualization. Full virtualization, as used herein, is a virtualization environment in which hardware resources are managed by a hypervisor to provide virtual hardware resources to a virtual machine. In a full virtualization environment, the virtual machines do not have direct access to the underlying hardware resources. In a typical full virtualization environment, a host operating system with embedded hypervisor (e.g., a VMware ESXi™ hypervisor) is installed on the server hardware. Virtual machines including virtual hardware resources are then deployed on the hypervisor. 
A guest operating system is installed in the virtual machine. The hypervisor manages the association between the hardware resources of the server hardware and the virtual resources allocated to the virtual machines (e.g., associating physical random access memory (RAM) with virtual RAM). Typically, in full virtualization, the virtual machine and the guest operating system have no visibility and/or direct access to the hardware resources of the underlying server. Additionally, in full virtualization, a full guest operating system is typically installed in the virtual machine while a host operating system is installed on the server hardware. Example full virtualization environments include VMware ESX®, Microsoft Hyper-V®, and Kernel Based Virtual Machine (KVM). Paravirtualization, as used herein, is a virtualization environment in which hardware resources are managed by a hypervisor to provide virtual hardware resources to a virtual machine and guest operating systems are also allowed direct access to some or all of the underlying hardware resources of the server (e.g., without accessing an intermediate virtual hardware resource). In a typical paravirtualization system, a host operating system (e.g., a Linux-based operating system) is installed on the server hardware. A hypervisor (e.g., the Xen® hypervisor) executes on the host operating system. Virtual machines including virtual hardware resources are then deployed on the hypervisor. The hypervisor manages the association between the hardware resources of the server hardware and the virtual resources allocated to the virtual machines (e.g., associating physical random access memory (RAM) with virtual RAM). In paravirtualization, the guest operating system installed in the virtual machine is configured also to have direct access to some or all of the hardware resources of the server. For example, the guest operating system may be precompiled with special drivers that allow the guest operating system to access the hardware resources without passing through a virtual hardware layer. For example, a guest operating system may be precompiled with drivers that allow the guest operating system to access a sound card installed in the server hardware. Directly accessing the hardware (e.g., without accessing the virtual hardware resources of the virtual machine) may be more efficient, may allow for performance of operations that are not supported by the virtual machine and/or the hypervisor, etc. Operating system virtualization is also referred to herein as container virtualization. As used herein, operating system virtualization refers to a system in which processes are isolated in an operating system. In a typical operating system virtualization system, a host operating system is installed on the server hardware. Alternatively, the host operating system may be installed in a virtual machine of a full virtualization environment or a paravirtualization environment. The host operating system of an operating system virtualization system is configured (e.g., utilizing a customized kernel) to provide isolation and resource management for processes that execute within the host operating system (e.g., applications that execute on the host operating system). The isolation of the processes is known as a container. Several containers may share a host operating system. Thus, a process executing within a container is isolated from other processes executing on the host operating system. 
Thus, operating system virtualization provides isolation and resource management capabilities without the resource overhead utilized by a full virtualization environment or a paravirtualization environment. Alternatively, the host operating system may be installed in a virtual machine of a full virtualization environment or a paravirtualization environment. Example operating system virtualization environments include Linux Containers LXC and LXD, Docker™, OpenVZ™, etc. In some instances, a SDDC (or a pool of linked SDDCs) may include multiple different virtualization environments. For example, a SDDC may include hardware resources that are managed by a full virtualization environment, a paravirtualization environment, and an operating system virtualization environment. In such an SDDC, a workload may be deployed to any of the virtualization environments. FIG.1illustrates an example environment of use100including a software-defined data center (SDDC)102implemented in accordance with the teachings of this disclosure. The example SDDC102of the illustrated example ofFIG.1includes core components106, deployed servers123, an operations manager128, an automation manager130, and a site recovery manager132. An example administrator146and/or client148(e.g., a web browser, a web service, an application programming interface (API), a command line client (CLI), etc.) accesses the SDDC102via a network150(e.g., a public network, a private network, a virtual private network, etc.). In examples disclosed herein, the administrator146and the client148are implemented by computing devices executing interfaces that facilitate access to resources, services, etc., across the network150. In certain examples, a load balancer (not shown) can route a plurality of clients148between a plurality of SDDC nodes102. However, for purposes of simplicity, a single SDDC102is shown in the example ofFIG.1. Additional SDDC nodes102can be implemented according to the example ofFIG.1. The example core components106of the illustrated example ofFIG.1include a virtual environment infrastructure108, an example network virtualizer110, and an example virtual storage area network112. The example virtual environment infrastructure108is a virtualization platform that includes an example hypervisor114, an example services server116, an example virtualization client118, and an example virtual file system120. In the illustrated example ofFIG.1, the virtual environment infrastructure108can be implemented using the vSphere virtualization suite developed and sold by VMware® of Palo Alto, California, United States. The example hypervisor114can be implemented using the VMware ESXi™ hypervisor developed and sold by VMware® The example services server116can be implemented using the VMware vCenter® Server developed and sold by VMware® The example virtualization client118can be implemented using the VMware vSphere® client developed and sold by VMware®. The example virtual file system120can be implemented using the VMware vSphere Virtual Machine File System developed and sold by VMware® Additionally or alternatively, some or all of the components of the virtual environment infrastructure108can be implemented using products, software, systems, hardware, etc. from companies other than VMware. In other examples, the virtual environment infrastructure108can include additional or different components other than those shown inFIG.1. 
The example network virtualizer110is a network virtualization platform that can be used to provide virtual network resources for network computing environments. The example network virtualizer110can be implemented using the VMware NSX® network virtualization platform developed and sold by VMware®. The example virtual storage area network112is a data storage virtualization platform that can be used to provide virtual data store resources for network computing environments. The example virtual storage area network112can be implemented using the VMware® Virtual SAN™ (vSAN) software-defined storage platform developed and sold by VMware®. Additionally or alternatively, the network virtualizer110and/or the virtual storage area network112can be implemented using products from companies other than VMware®. In the illustrated example ofFIG.1, one or more VMs (or containers) are used to implement the deployed servers123. In the illustrated example, the servers123include one or more example web servers124a, one or more example app servers124b, and one or more database (DB) servers124c. The servers123are deployed and/or configured by one or more of an example operations manager128, an example automation manager130, and an example site recovery manager132. The example operations manager128is provided to automate information technology (IT) operations management of the SDDC102to run the servers123. The example operations manager128may be implemented using the VMware® vRealize® Operations (vROPS) IT Operations Management product developed and sold by VMware®. The example operations manager128is provided to automate responsive actions to business needs in real-time to deliver personalized infrastructure, applications, and IT operations when business needs arise within the SDDC102. The example automation manager130can be implemented using the VMware's vRealize® Automation (vRA) product developed and sold by VMware®. The example site recovery manager132is provided to implement different levels of availability of the SDDC102for different servers123. For example, some servers123may require higher levels of redundancy or network rerouting capabilities to ensure a higher level of availability for services (e.g., access to the servers123and/or underlying data) even during resource failures. In some examples, other, non-critical servers123may only require low to moderate availability. The example site recovery manager132can be implemented using the VMware® Site Recovery Manager Disaster Recovery Software developed and sold by VMware®. Example approaches disclosed herein augment the functionality of the automation manager130to monitor for automation requests, manage provisioning of resources, applications, other code, etc., according to feature toggles that hide or make available functionality to client(s)148requesting provisioning of resources and/or other execution of SDDC102compute, storage, and/or network resources. Feature toggles enable step-by-step development of functionality (e.g., executable program code, application, other resource, etc.) without exposing unfinished or untested functionality to the client148. Rather than requiring a code change to enable or disable functionality, a feature toggle can be used to test functionality, disable that functionality, and/or promote the functionality to a production release (e.g., made available for provisioning to the client148, etc.). 
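For purposes of illustration only, the following simplified Python sketch shows one way program code might consult a feature toggle at runtime so that functionality can be enabled, disabled, or promoted without a code change; the toggle key, store layout, and function names are assumptions introduced solely for this sketch and do not correspond to any particular product interface.

def feature_enabled(toggle_key, toggle_store, default=False):
    # The toggle is read from external storage at request time, so it can be
    # switched on or off (or promoted to production) without a code change.
    return toggle_store.get(toggle_key, default)

def handle_request(toggle_store):
    if feature_enabled("new-form-flow", toggle_store):
        return "new functionality (toggle on)"
    return "existing functionality (toggle off or absent)"

toggle_store = {"new-form-flow": False}   # e.g., functionality still under test
print(handle_request(toggle_store))       # existing functionality (toggle off or absent)
toggle_store["new-form-flow"] = True      # e.g., functionality promoted at runtime
print(handle_request(toggle_store))       # new functionality (toggle on)

Under such a sketch, the guard can later be left permanently enabled or removed once the functionality is promoted to a production release.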
In certain examples, functionality can be made available to some users but hidden from other users according to a feature toggle. As such, rather than being binary toggles applicable site-wide, a feature toggle can be tailored to a certain set or subset of clients148(e.g., to expose/hide a feature for a beta-test client148, for certain tenant(s), etc.). In addition to improving operation and flexibility of the SDDC102, feature toggling can be stored in a database126(e.g., housed in one or more database server(s)124c, etc.) rather than held in memory. By storing feature toggles in the database126, a status or setting of a feature toggle (e.g., on/off, yes/no, tenant/global, etc.) can be changed during runtime (e.g., in response to a representational state transfer (REST) request, etc.) rather than requiring recoding or deployment of a code update. Toggles can be stored in the database126in a hierarchical structure according to tenant (e.g., organization, user, client, etc.), for example. Alternatively or in addition, toggles can be stored according to functionality/feature type, resource location, and/or other criterion, for example. FIG.2is a block diagram of an example implementation of the automation manager130ofFIG.1. The example automation manager130of the illustrated example ofFIG.2includes an example automation request interface210, an example provisioning controller220, an example tenant administrator230, and an example automation executor250. In operation, the example automation request interface210receives an automation request directed to the automation executor250. The example provisioning controller220inspects the request and, if necessary, interacts with the example tenant administrator230to determine whether the automation request is applicable to a particular client148, whether a tenant-specific version, rather than a global version, exists for the client148, etc., before passing the automation request to the automation executor250. The example automation request interface210of the illustrated example ofFIG.2enables users (e.g., administrators) to submit automation requests to the automation manager130for execution. In some examples, the automation request interface210is implemented as a user interface presented via a web page to the user. In some examples, such requests can be submitted via a programmatic interface such as, for example, an application programming interface (API), a REST interface, etc. In examples disclosed herein, when the client148initiates a session with the automation request interface210(e.g., logs into the web page provided by the automation request interface210and/or otherwise submits a request from the client148, etc.), the provisioning controller220facilitates provisioning of resources for execution via the automation executor250. In examples disclosed herein, the automation request interface210passes the automation request to the provisioning controller220. The automation executor250facilitates execution of the request using resources provisioned by the provisioning controller220. The tenant administrator230processes and/or otherwise evaluates the request to determine whether particular organization-specific resources, settings, constraints, permissions, etc., should be applied for the requesting client148. The example provisioning controller220of the illustrated example ofFIG.2is implemented by a logic circuit such as a hardware processor. 
However, any other type of circuitry can additionally or alternatively be used such as one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable logic device(s) (FPLD(s)), digital signal processor(s) (DSP(s)), Coarse Grained Reduced precision architecture (CGRA(s)), image signal processor(s) (ISP(s)), etc. However, in some examples, the provisioning controller220is implemented as a service of the automation manager130. The example tenant administrator230of the illustrated example ofFIG.2analyzes a request from the provisioning controller220to identify a tenant, if applicable, and determine tenant-specific content, rules, resources, etc., for the request from the client148. In certain examples, only global resources are provisioned, and the tenant administrator230is not present or is inactive. In other examples, the tenant administrator230identifies an applicable tenant and associated permission/restriction, other configuration, etc., associated with a request from the client148. Tenants, organizations, and/or other users (collectively referred to herein as tenants) can be organized according to a hierarchy, for example. A tenant is associated with a particular instantiation or configuration of resources such as the SDDC102. As such, resource provisioning can proceed differently for different tenants. For example, a first tenant has a first configuration providing a first set of access to resources of the SDDC102, and a second tenant has a second configuration providing a second set of access to resources of the SDDC102. In a default-tenant deployment, configuration occurs in a default tenant, and users log into the SDDC102via the client148and/or the administrator146(e.g., using a same universal resource locator (URL), other universal resource indicator (URI), etc.) and have features assigned to the respective user based on an associated role, for example. In a single-tenant deployment, a tenant is created for an organization to use an instance of the SDDC102, for example. Tenant users access the instance of the SDDC102via the client148and/or the administrator146using a tenant-specific access point (e.g., a URL, other URI, etc.). In a multi-tenant deployment, a separate tenant is created for each organization that uses the instance of the SDDC102, for example. Tenants access the SDDC102via the client148and/or the administrator146using tenant-specific access points (e.g., a URL, other URI, etc.). Each tenant is segregated from other tenants and from a default tenant, although a user such as an administrator with a system-wide role can view and manage configurations across tenants. Access to resources, functionality, and/or other features of the SDDC102can be configured according to the tenant, for example. The example tenant administrator230of the illustrated example ofFIG.2is implemented by a logic circuit such as a hardware processor. However, any other type of circuitry can additionally or alternatively be used such as one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable logic device(s) (FPLD(s)), digital signal processor(s) (DSP(s)), Coarse Grained Reduced precision architecture (CGRA(s)), image signal processor(s) (ISP(s)), etc. 
However, in some examples, the tenant administrator230is implemented as a service of the automation manager130. In the illustrated example ofFIG.2, the example tenant administrator230is implemented in a same automation manager130as the provisioning controller220. However, in some examples, the tenant administrator230can be implemented in a separate automation manager from the provisioning controller220. That is, the provisioning controller220and the tenant administrator230can be implemented in separate containers, separate virtual machines, etc. The example automation executor250of the illustrated example ofFIG.2executes automation instructions included in the automation request received via the automation request interface210. Such automation instructions can result in, for example, the provisioning of a virtual host, installation of an operating system on the virtual host, configuration of an application for execution on the operating system, etc. Each of such operations can be authorized in the context of a user session that is initialized based on a user (e.g., an administrator) providing their credentials. The example automation executor250of the illustrated example ofFIG.2is implemented by a logic circuit such as a hardware processor. However, any other type of circuitry can additionally or alternatively be used such as one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable logic device(s) (FPLD(s)), digital signal processor(s) (DSP(s)), Coarse Grained Reduced precision architecture (CGRA(s)), image signal processor(s) (ISP(s)), etc. However, in some examples, the automation executor250is implemented as a service of the automation manager130. FIG.3is a block diagram of an example implementation of the provisioning controller220ofFIG.2. The example provisioning controller220includes an example request processor310, an example content retriever320, an example toggle manager330, an example content compiler340, and an example request transmitter350. In operation, the example provisioning controller220receives a request for provisioning of a cloud resource(s) of the example SDDC102(e.g., an automation request from the client148, the administrator146, etc.). For example, the request includes provisioning of a Web service to provide a form and/or other interface to the client148. The example provisioning controller220evaluates the received request to identify and/or retrieve toggle(s) associated with the requested resource(s). The example provisioning controller220determines whether a particular tenancy (e.g., a division or difference in configuration and/or available functionality, etc., between clients148, etc.) affects feature toggling and/or other provisioning of a service, interface, and/or other resource of the SDDC102, for example. The provisioning controller220then provides the processed provisioning request results to the automation executor250to execute the provisioned resource(s) for the client148. The example request processor310of the illustrated example ofFIG.3processes a request received via the example automation request interface210ofFIG.2to automatically provision one or more cloud resources. In certain examples, the automation request includes a resource identification, a function call, a service request (e.g., a GET request), etc. 
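For purposes of illustration only, a hypothetical automation request of this kind might be represented as in the following Python sketch; the field names and values shown are assumptions introduced solely for this sketch and do not reflect any actual request format.

# Hypothetical request body; the keys shown are illustrative assumptions only.
automation_request = {
    "client": "client-148",                  # requesting client/tenant context
    "target": "web-server-pool",             # target of the request
    "resources": ["web-form", "vm-small"],   # resource(s) to be provisioned
    "operation": "GET",                      # e.g., a service request such as a GET
}

# Fields the provisioning controller would later draw on when parsing the request.
requesting_client = automation_request["client"]
target = automation_request["target"]
requested_resources = automation_request["resources"]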
The example request processor310parses the request to identify the requesting client148, a target of the request, and at least one resource included in the request for provisioning, for example. In response to the provisioning request, the example content retriever320uses the information extracted from the request by the request processor310to request the provisioned resource(s) for configuration, allocation, and/or deployment for the requesting client148. For example, Web page/form content, a container/virtual machine, an application, etc., is requested by the content retriever320. Alternatively or in addition, the content retriever320formats a request for content, application, other resource, for execution by the automation executor250. The example content retriever320also interacts with the example toggle manager330. The example toggle manager330processes information extracted by the example request processor310and/or the example content retriever320from the provisioning request to determine whether any feature toggle(s) apply to the requested resource(s). For example, the requesting client148, a target of the request, and at least one resource included in the request for provisioning can form one or more keys to be used by the example toggle manager330to form a toggle query of the database126to retrieve feature toggle(s) and/or toggle-related information (e.g., toggle identifier(s), a toggle profile, etc.) from the database126. In operation, the example toggle manager330queries the example database126using one or more keys to retrieve (e.g., using a GET command, etc.) one or more feature toggles applicable to a particular user/client/tenant, a particular resource request, etc. In certain examples, the feature toggle query can be tenant-based, and tenant information from the tenant administrator230is used by the example toggle manager330to interrogate the database126. For example, the example SDDC102can be assigned to a single tenant and/or the SDDC102can include resources divided among multiple tenants. In a tenant-based or tenant-aware configuration, one or more feature toggles can be associated with only a subset of tenants, can have different values for different tenants, etc. For example, both global and tenant-specific feature toggles can be stored in the example database126. Features/functionality can be set as a tenant-specific toggle at a first point in time (e.g., for feature testing, feature validation, limited feature deployment, premium feature deployment, etc.), and that functionality can become a globally-toggled feature available to multiple users/clients at a second time, for example. A feature can be enabled with a global toggle, and an improvement, customization, and/or other modification of the feature can become a tenanted feature toggle, for example. If a client/user148(and/or its associated provisioning service) requests all toggles in a provisioning request to the provisioning controller220processed by the toggle manager330, toggles specific to a tenant/organization associated with that client/user148override global toggles having a same key. That is, the toggles can be organized according to a hierarchy and/or other tenant-based organizational structure in the database126such that a global or general toggle is used unless the particular tenant has a specific variant for that toggle, in which case the tenant-specific toggle is used instead of the global toggle and/or modifies one or more values/aspects of the global toggle, etc. 
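For purposes of illustration only, the following simplified Python sketch shows one possible resolution of global and tenant-specific toggles that share a key, consistent with the override behavior described above; the store layout, the "*" marker for global entries, and the sample values are assumptions introduced solely for this sketch.

GLOBAL = "*"  # assumed marker for a global (non-tenant-specific) toggle entry

# Assumed store layout: (toggle_key, tenant_id) -> toggle value.
toggle_store = {
    ("toggle-1", GLOBAL): "A",
    ("toggle-1", "tenant-1"): "B",
    ("toggle-1", "tenant-2"): "C",
    ("toggle-2", GLOBAL): "on",
}

def get_toggles(store, keys, tenant):
    """Return one value per key, letting a tenant-specific entry override the global entry."""
    resolved = {}
    for key in keys:
        if (key, tenant) in store:       # tenant-specific variant wins
            resolved[key] = store[(key, tenant)]
        elif (key, GLOBAL) in store:     # otherwise fall back to the global toggle
            resolved[key] = store[(key, GLOBAL)]
    return resolved

print(get_toggles(toggle_store, ["toggle-1", "toggle-2"], tenant="tenant-1"))
# {'toggle-1': 'B', 'toggle-2': 'on'}
print(get_toggles(toggle_store, ["toggle-1", "toggle-2"], tenant="tenant-3"))
# {'toggle-1': 'A', 'toggle-2': 'on'}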
In certain examples, a get request executed using a system user token (e.g., a persistent or time-limited identifier of an administrator or other system-level user and/or associated system-level session, etc.) returns all applicable toggles stored in the database126without overrides. In certain examples, in addition to using tenant information to form a toggle query of the example database126, the example toggle manager330also includes a processing strategy for the toggle(s) to be retrieved from the example database126. For example, the selected processing strategy modifies how the toggle data is formatted/structured for a requesting service. In some examples, toggle(s) are returned to the toggle manager330from the example database126as objects from a provisioning service. In other examples, toggle(s) are returned to the toggle manager330from the example database126as a micro-service and/or hybrid cloud object (e.g., a Symphony™ object, etc.). As such, the toggle manager330can provide toggle(s) in a variety of formats to accommodate a requesting service. The requesting service need not convert the toggle information. Instead, the toggle manager330acts as a single point to manage toggles stored in the database126(e.g., on one or distributed across multiple database servers124c, etc.). Toggle(s) returned by the example toggle manager330are provided to the example content compiler340. The example content compiler340compiles and/or otherwise assembles information for provisioning of resource(s) including feature toggle(s) in response to the received request. For example, in response to a get toggles request, the content compiler340compiles a set of retrieved global and/or tenant-specific toggles to be provided to the requesting service. In response to another type of provisioning request or content query, the content compiler340compiles information (e.g., information for provisioning of a resource such as a webpage, container, virtual machine, etc.), including retrieved global and/or tenant-specific toggles for execution by the automation executor250, for example. The example result transmitter350provides the resulting compiled information (e.g., a set of feature toggles, content including feature toggles, etc.) from the example content compiler340to the example automation executor250. For example, the result transmitter350provides a service to transfer content (e.g., retrieved toggle(s) and/or resource content, etc.) to the automation executor250to instantiate, deploy, and/or otherwise execute resource(s) for the requester (e.g., the client148, the administrator146, etc.). The example request processor310, the example content retriever320, the example toggle manager330, the example content compiler340, and/or the example request transmitter350of the illustrated example ofFIG.3is/are implemented by a logic circuit such as a hardware processor. However, any other type of circuitry can additionally or alternatively be used such as one or more analog or digital circuit(s), logic circuits, programmable processor(s), ASIC(s), PLD(s), FPLD(s), programmable controller(s), GPU(s), DSP(s), CGRA(s), ISP(s), etc. In some examples, the example request processor310, the example content retriever320, the example toggle manager330, the example content compiler340, and/or the example request transmitter350are implemented by separate logic circuits. In some examples, the example request processor310implements means for accessing.
In some examples, the example content retriever320implements means for retrieving. In some examples, the example toggle manager330implements means for managing. In some examples, the example content compiler340implements means for combining. In some examples, the example request transmitter350implements means for sending. FIG.4is a block diagram of an example implementation of the toggle manager330ofFIG.3. The example toggle manager330ofFIG.4includes an example toggle interface410, an example tenant identifier420, an example toggle processor430, and an example toggle updater440. The example toggle interface410receives a trigger and/or other request from the example content retriever320. The example toggle interface410processes the request for one or more feature toggles from the example content retriever320. The example toggle interface410can identify whether the request is a request to get all toggles, a request to get a certain tenant's toggle(s), a request for another subset of toggle(s), a request for content and/or other provisioning including toggle(s), etc. The example toggle interface410parses the request to extract information corresponding to tasks to be handled by the example toggle manager330. The example tenant identifier420of the example ofFIG.4interacts with the example tenant administrator230to identify a tenant associated with a provisioning request received by the example request processor310of the example provisioning controller220. The example tenant identifier420can provide tenant identification information, such as a tenant type, tenant identification (ID) number, tenant name, tenant hierarchy, other organization information, etc., to form a query or request by the toggle processor430to the toggle database126. For example, the example tenant identifier420queries the example tenant administrator230, which returns the tenant identification information to the example tenant identifier420based on an identity of the client148and/or administrator146, session information, system configuration, login information, other information from the example toggle interface410, etc. The example tenant identifier420passes the tenant identification information along to the example toggle processor430. The example toggle processor430assembles a query and/or other request to the example toggle database126. The example toggle processor430takes the tenant identification from the example tenant identifier420and request information from the example toggle interface410to generate a query, request, and/or other access of the example toggle database126. The example toggle processor430sends the query to the example toggle database126and receives a response to the query from the example toggle database126. The example toggle processor430provides the query result from the example toggle database126to the example content compiler340. In certain examples, the example toggle interface410receives a toggle update request or instruction to update toggle(s) in the example toggle database126. The example toggle interface410provides the update request/instruction to the example toggle updater440. The toggle update request/instruction can add and/or remove toggles for one or more tenants (e.g., globally, tenant-specific, etc.) to/from the example toggle database126, for example. The toggle update request/instruction can adjust a toggle value, update program code and/or other feature associated with a toggle, etc., for one or more tenants (e.g., globally, tenant-specific, etc.) 
in the example toggle database126, for example. As such, a request and/or other instruction from the example toggle updater440adjusts (e.g., via a REST command, etc.) toggle-related content of the example toggle database126, for example. The example toggle interface410, the example tenant identifier420, the example toggle processor430, and/or the example toggle updater440of the illustrated example ofFIG.4is/are implemented by a logic circuit such as a hardware processor. However, any other type of circuitry can additionally or alternatively be used such as one or more analog or digital circuit(s), logic circuits, programmable processor(s), ASIC(s), PLD(s), FPLD(s), programmable controller(s), GPU(s), DSP(s), CGRA(s), ISP(s), etc. In some examples, the example toggle interface410, the example tenant identifier420, the example toggle processor430, and/or the example toggle updater440are implemented by separate logic circuits. In some examples, the example toggle interface410implements means for analyzing. In some examples, the example tenant identifier420implements means for identifying. In some examples, the example toggle processor430implements means for retrieving and means for processing. In some examples, the example toggle updater440implements means for updating. FIG.5is a block diagram of an example implementation of the tenant administrator230ofFIG.2. The example tenant administrator230ofFIG.5includes an example tenant interface510, an example tenant determiner520, and an example tenant data store530. The example tenant interface510of the example ofFIG.5receives a request from the example tenant identifier420to identify a tenant associated with a provisioning request received by the example request processor310of the example provisioning controller220. The example tenant interface510can provide tenant identification information, such as a tenant type, tenant identification (ID) number, tenant name, tenant hierarchy, other organization information, etc., to the example tenant identifier420. For example, the example tenant identifier420queries the example tenant interface510of the example tenant administrator230. The example tenant interface510queries the tenant determiner520, which returns tenant identification information based on a lookup of the example tenant data store530. The example tenant determiner520queries the example tenant data store530based on an identity of the client148and/or administrator146, session information, system configuration, login information, other information from the example toggle interface410, etc. The example tenant determiner520returns the tenant identification information to the example tenant interface510. The example tenant interface510provides the tenant identification information to the requesting tenant identifier420, for example. The example tenant interface510, the example tenant determiner520, and/or the example tenant data store530of the illustrated example ofFIG.5is/are implemented by a logic circuit such as a hardware processor. However, any other type of circuitry can additionally or alternatively be used such as one or more analog or digital circuit(s), logic circuits, programmable processor(s), ASIC(s), PLD(s), FPLD(s), programmable controller(s), GPU(s), DSP(s), CGRA(s), ISP(s), etc. In some examples, the example tenant interface510, the example tenant determiner520, and/or the example tenant data store530are implemented by separate logic circuits. In some examples, the example tenant interface510implements means for receiving.
In some examples, the example tenant determiner520implements means for determining. FIG.6is a data flow diagram600showing an example interaction between the example client148, the example web server124a, the example provisioning controller220, and the example toggle database126to provide web page content to the client148in response to a provisioning request. As shown in the example ofFIG.6, a web page is requested602by the client148from the example web server124a. The example web server124aresponds to the request602with a request or “get” action604for an associated first toggle from the example provisioning controller220. For example, the web server124a, which will construct the web page content for the client148, identifies an indicator of tenant-specific content which triggers the get toggle request. Alternatively or in addition, the example web server124adoes not identify an indicator of tenant-specific content but queries the example provisioning controller220to determine whether any feature toggle exists to be included in the web page content being provisioned by the web server124afor the client148, for example. The example provisioning controller220retrieves and/or otherwise gets606the first toggle from the example toggle database126. The example toggle database126returns608the first toggle to the provisioning controller220. For example, the example toggle database126returns program code associated with a first feature actionable by a first tenant associated with the example client148to be included in the web page content being provisioned by the example web server124a. The example provisioning controller220provides610the first toggle to the example web server124a. Based on the value of the first toggle, the example web server124adisplays a form X612or a form Y614to the client148. For example, when the first toggle has a value A, the web server124arenders form X612including toggle content A for the client148, and, when the first toggle has a value B, the web server124arenders form Y614including toggle content B for the client148. The example client148interacts with the generated form X or form Y to fill in the form and submit the form616back to the example web server124a. The example web server124aroutes the form data618to the example provisioning controller220. The example provisioning controller220processes the form data and, based on the form data, requests620a second toggle from the example toggle database126. The example toggle database126returns622the second toggle to the provisioning controller220. For example, the example toggle database126returns program code associated with a second feature actionable by the first tenant associated with the example client148. The example provisioning controller220processes the second toggle and executes a first process624or a second process626based on the value of the second toggle. For example, when the second toggle has a value C, then the example provisioning controller220executes the first process624. When the second toggle has a value D, then the example provisioning controller220executes the second process626. The example provisioning controller220returns628a result of the execution of the first process or the second process to the example web server124a. The example web server124adisplays the result630to the example client148. As such, for example, the web server124aprovides different content/functionality/etc. to the client148based on the second toggle provided to the provisioning controller220from the toggle database126.
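For purposes of illustration only, the following simplified Python sketch mirrors the interaction of FIG.6, in which a first toggle selects the form rendered to the client and a second toggle selects the process executed on the submitted form data; the toggle keys, values, and helper names are assumptions introduced solely for this sketch and do not correspond to any particular product interface.

# Assumed toggle values held in the toggle database for the requesting tenant.
toggles = {"form-toggle": "A", "process-toggle": "C"}

def render_form(first_toggle):
    # Value A renders form X (toggle content A); value B renders form Y (toggle content B).
    return "form X" if first_toggle == "A" else "form Y"

def run_process(second_toggle, form_data):
    # Value C selects the first process; value D selects the second process.
    if second_toggle == "C":
        return f"first process handled {form_data}"
    return f"second process handled {form_data}"

form = render_form(toggles["form-toggle"])                   # first toggle returned to the web server
form_data = {"submitted_from": form}                         # client fills in and submits the form
result = run_process(toggles["process-toggle"], form_data)   # second toggle selected from the form data
print(result)  # first process handled {'submitted_from': 'form X'}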
While an example implementation of the provisioning controller220ofFIG.2is illustrated inFIG.3, an example implementation of the toggle manager330ofFIG.3is illustrated inFIG.4, and an example implementation of the tenant administrator230ofFIG.2is illustrated in FIG.5, one or more of the elements, processes and/or devices illustrated inFIGS.2,3,4, and/or5can be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example request processor310, the example content retriever320, the example user toggle manager330, the example content compiler340, the example result transmitter350, and/or, more generally, the example provisioning controller220ofFIGS.2and/or3; the example toggle interface410, the example tenant identifier420, the example toggle processor430, the example toggle updater440, and/or, more generally, the example toggle manager330ofFIGS.3and/or4; the example tenant interface510, the example tenant determiner520, the example tenant data store530, and/or more generally, the example tenant administrator ofFIGS.2and/or5can be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example request processor310, the example content retriever320, the example user toggle manager330, the example content compiler340, the example result transmitter350, and/or, more generally, the example provisioning controller220ofFIGS.2and/or3; the example toggle interface410, the example tenant identifier420, the example toggle processor430, the example toggle updater440, and/or, more generally, the example toggle manager330ofFIGS.3and/or4; the example tenant interface510, the example tenant determiner520, the example tenant data store530, and/or more generally, the example tenant administrator ofFIGS.2and/or5can be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example request processor310, the example content retriever320, the example user toggle manager330, the example content compiler340, the example result transmitter350, and/or, more generally, the example provisioning controller220ofFIGS.2and/or3; the example toggle interface410, the example tenant identifier420, the example toggle processor430, the example toggle updater440, and/or, more generally, the example toggle manager330ofFIGS.3and/or4; the example tenant interface510, the example tenant determiner520, the example tenant data store530, and/or more generally, the example tenant administrator ofFIGS.2and/or5is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. 
Further still, the example provisioning controller220ofFIGS.2,3, and/or4, and/or the example tenant administrator230ofFIGS.2and/or5can include one or more elements, processes and/or devices in addition to, or instead of, those illustrated inFIGS.2,3,4, and/or5, and/or can include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the provisioning controller220ofFIGS.2,3, and/or4is shown inFIGS.7-9. The machine readable instructions can be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor1012shown in the example processor platform1000discussed below in connection withFIG.10. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor1012, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor1012and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated inFIGS.7-9, many other methods of implementing the example provisioning controller220can alternatively be used. For example, the order of execution of the blocks can be changed, and/or some of the blocks described can be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks can be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The processor circuitry can be distributed in different network locations and/or local to one or more devices (e.g., a multi-core processor in a single machine, multiple processors distributed across a server rack, etc.). The machine readable instructions described herein can be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein can be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions can be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). 
The machine readable instructions may involve one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions can be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement one or more functions that can together form a program such as that described herein. In another example, the machine readable instructions can be stored in a state in which they can be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit. The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C #, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc. As mentioned above, the example processes ofFIGS.7,8, and/or9can be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. 
The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous. FIG.7is a flowchart representative of example machine-readable instructions that can be executed to implement the example provisioning controller220ofFIGS.2and/or3. The example process700of the illustrated example ofFIG.7begins when the example request processor310(FIG.3) receives a provisioning request via the example automation request interface210ofFIG.2. (Block710). For example, the request processor310receives a request via a first service (e.g., a REST service, a Web service, and/or other software service to perform an automated task, respond to a hardware event, or listen for a data request, etc.) to provision a virtual machine, an HTML or other Web-based resource, etc. The example request processor310processes the provisioning request. (Block720). For example, the request processor310processes the provisioning request to identify one or more feature toggles included in and/or otherwise associated with the provisioning request (e.g., in the request, associated with the request client148and/or other tenant, associated with the resource(s) to be provisioned, etc.). The example toggle manager330(FIG.3) processes the identified feature toggle(s) associated with the provisioning request. (Block730). 
In certain examples, the toggle manager330uses a second service (e.g., a REST service, a Web service, and/or other software service to perform an automated task, respond to a hardware event, or listen for a data request, etc.) to process the feature toggle(s) with respect to the example toggle database126(FIGS.1and3). For example, the toggle manager330identifies one or more feature toggle(s) to be updated in the example toggle database126and updates the toggle database126. As another example, the example toggle manager330forms a GET request and/or other query to retrieve applicable feature toggle(s) from the example toggle database126and executes the request with respect to the database126to retrieve available toggle(s) (e.g., global toggle(s) and/or tenant-specific toggle(s) that preempt one or more global toggles, etc.). Alternatively or in addition, the example toggle manager330identifies one or more feature toggle(s) indicated in the provisioning request for which values are to be retrieved from the example toggle database126. For example, the toggle manager330identifies a flag, hook, reference, variable, parameter, and/or other indicator associated with a feature toggle in the provisioning request and queries the toggle database126to retrieve such feature toggle value(s). Example instructions that can be used to implement block730are described below in connection withFIG.8. The example content retriever320, content compiler340, and result transmitter350(FIG.3) facilitate provisioning of resource(s) according to the feature toggle(s) and provisioning request. (Block740). For example, the example content retriever320(FIG.3) retrieves resource content associated with the provisioning request and/or information identifying resource content associated with the provisioning request that can be used by the example automation executor250(FIG.2) to provision resource(s). The example content compiler340compiles the retrieved resource(s)/resource information with feature toggle value(s) from the example toggle manager330to form an output for the example result transmitter350to provide to the example automation executor250(e.g., to enable provisioning of a virtual machine, container, Web page, application, etc., for a client148, administrator146, etc.). In certain examples, the content compiler340uses a third service (e.g., a REST service, a Web service, and/or other software service to perform an automated task, respond to a hardware event, or listen for a data request, etc.) to combine resource information with feature toggle content to form a provisioning output for the example result transmitter350to provide to the example automation executor250for execution. The example process700of the illustrated example ofFIG.7then terminates but may be repeated in response to a subsequent provisioning request. FIG.8is a flowchart providing further example detail regarding processing a toggle associated with a provisioning request (Block730of the example ofFIG.7), representative of example machine readable instructions that can be executed to implement the example toggle manager330ofFIGS.3and/or4. The example process730ofFIG.8begins when a provisioning request is analyzed by the example toggle interface410(FIG.4) to identify a feature toggle and/or reference to a feature toggle in the provisioning request. (Block810). 
For example, a request to provision HTML content is analyzed by the example toggle interface410to identify an indicator of a feature toggle (e.g., a flag, hook, reference, variable, parameter, and/or other indicator associated with a feature toggle in the provisioning request, etc.). Alternatively or in addition, the provisioning request is associated with a general query for feature toggles (e.g., global feature toggle(s) and/or tenant-specific feature toggle(s)) associated with the provisioning request, a target resource of the provisioning request, a requesting tenant, etc. Alternatively or in addition, the provisioning request may be or include an update to a feature toggle in the example toggle database126(e.g., adding a new feature toggle, updating an existing feature toggle, etc.). The example tenant identifier420(FIG.4) identifies a tenant associated with the provisioning request and/or the administrator146/client148accessing the SDDC102. (Block820). For example, the tenant identifier420retrieves a tenant identification from the provisioning request and/or from a configuration, setting, profile, etc., of the client148, administrator146, and/or other entity logged into or otherwise accessing the SDDC102to make/trigger the provisioning request. The tenant identification can be associated with a user and/or session token, for example. The example toggle processor430(FIG.4) accesses the example toggle database126. (Block830). Based on a type of toggle operation (e.g., get, query, update, etc.), a next action is triggered by the example toggle processor430. (Block840). For example, when the operation is an update operation, then the example toggle updater440(FIG.4) accesses a feature toggle record in the example toggle database126(FIGS.1and3). (Block850). The example toggle updater440updates the toggle record in the example database126with a new feature toggle record, an updated feature toggle record, etc. (Block852). When the example toggle operation is a get toggle(s) operation, then the example toggle processor430gets one or more applicable toggle(s) from the example toggle database126. (Block860). For example, toggle(s) can be retrieved from the database126based on resource, request, tenant, etc. Example instructions that can be used to implement block860are described below in connection withFIG.9. The example toggle processor430returns feature toggle(s) from the example toggle database126to the example content compiler340. (Block862). When the example toggle operation is a query for a particular feature toggle or set of feature toggles, then the example toggle processor430queries the example toggle database126for the requested feature toggle(s). (Block870). The example toggle processor430retrieves toggle value(s) from the example toggle database126. (Block872). For example, the toggle processor430retrieves and returns content, program code, parameter, etc., associated with the requested feature toggle(s) to the example content compiler340. The example process730of the illustrated example ofFIG.8then terminates but can be repeated, for example, upon a subsequent request to be processed. In the illustrated example, control returns to the example instructions ofFIG.7. FIG.9is a flowchart providing further example detail regarding getting feature toggle(s) from the example toggle database126(Block860of the example ofFIG.8), representative of example machine readable instructions that can be executed to implement the example toggle processor430ofFIG.4.
The example process860ofFIG.9begins when a feature toggle get request (e.g., a REST GET request and/or other query, etc.) is submitted. The example toggle processor430handles the get toggle(s) request. (Block910). In certain examples, the toggle processor430determines a processing strategy for the get request. (Block920). For example, the selected processing strategy indicates how the feature toggle data appears or is to be provided to the provisioning service (e.g., what information, in what format, level of detail, etc.). In certain examples, two processing strategies can be provided by a provisioning service and/or a toggling service to return a result body in response to a toggle get request. The processing strategy can be selected based on a requesting service so that the requesting service does not have to change or adjust the returned toggle result content. For example, a first processing strategy returns a toggle represented as a provisioning service object. A second processing strategy returns a toggle as a micro-service and/or hybrid cloud object, for example. The example toggle processor430then queries the example toggle database126(FIGS.1and3) for feature toggle(s). (Block930). The example toggle processor430determines a user type associated with the query. (Block940). For example, the example toggle processor430determines whether the query is associated with a system/global user (e.g., the administrator146, a super user, etc.) or a particular tenant (e.g., an organization, a user, etc.). When the query is associated with a system user, then all feature toggles are returned from the toggle database126in response to the query. (Block950). When the query is associated with a particular tenant, then global toggles are overridden with particular tenant toggles where applicable. (Block960). For example, Toggle 1 has a global value of “A” and tenant-specific values “B” and “C”. Tenant value B is associated with a Tenant 1, and tenant value C is associated with a Tenant 2. When Tenant 1 requests the value of Toggle 1, the tenant-specific value B is returned. When Tenant 2 requests the value of Toggle 1, the tenant-specific value C is returned. When a Tenant 3 requests the value of Toggle 1, the global value A is returned. Continuing this example, if Toggles 1 and 2 are queried and Tenant 1 has no specific value for Toggle 2, then the tenant-specific value for Toggle 1 and the global value for Toggle 2 are returned in response to the query, for example. As such, based on the tenant, the global or general value is overridden to return the tenant-specific value. The example process860of the illustrated example ofFIG.9then terminates but may be repeated, for example, upon a subsequent get request for the example toggle database126. In the illustrated example, control returns to the example instructions ofFIG.8. FIG.10is a block diagram of an example processor platform1000structured to execute the instructions ofFIGS.7,8, and/or9to implement the example provisioning controller220ofFIGS.2and/or3. The processor platform1000can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), an Internet appliance, a gaming console, a headset or other wearable device, or other type of computing device. The processor platform1000of the illustrated example includes a processor1012. The processor1012of the illustrated example is hardware.
For example, the processor1012can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor1012implements the example request processor310, the example content retriever320, the example toggle manager330, the example content compiler340, and/or the example result transmitter350. The processor1012of the illustrated example includes a local memory1013(e.g., a cache). The processor1012of the illustrated example is in communication with a main memory including a volatile memory1014and a non-volatile memory1016via a bus1018. The volatile memory1014may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory1016may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory1014,1016is controlled by a memory controller. The processor platform1000of the illustrated example also includes an interface circuit1020. The interface circuit1020may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. In the illustrated example, one or more input devices1022are connected to the interface circuit1020. The input device(s)1022permit(s) a user to enter data and/or commands into the processor1012. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices1024are also connected to the interface circuit1020of the illustrated example. The output devices1024can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit1020of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor. The interface circuit1020of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network1026. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-site wireless system, a cellular telephone system, etc. The processor platform1000of the illustrated example also includes one or more mass storage devices1028for storing software and/or data. Examples of such mass storage devices1028include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. 
The machine executable instructions1032ofFIGS.6-9can be stored in the mass storage device1028, in the volatile memory1014, in the non-volatile memory1016, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD. The example mass storage device1028can implement the example toggle database126ofFIGS.1and/or3, for example. Alternatively or in addition, the example feature toggle database126can be implemented separately and accessible to the processor platform1000via the network1026. FIG.11is a block diagram of an example processor platform1100structured to implement the example tenant administrator230ofFIGS.2and/or5. The processor platform1100can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), an Internet appliance, a gaming console, a headset or other wearable device, or other type of computing device. The processor platform1100of the illustrated example includes a processor1112. The processor1112of the illustrated example is hardware. For example, the processor1112can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor1112implements the example tenant interface510and the example tenant determiner520. The processor1112of the illustrated example includes a local memory1113(e.g., a cache). The processor1112of the illustrated example is in communication with a main memory including a volatile memory1114and a non-volatile memory1116via a bus1118. The volatile memory1114may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory1116may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory1114,1116is controlled by a memory controller. The example tenant data store530can be implemented using one or more of the memory1114,1116, for example. The processor platform1100of the illustrated example also includes an interface circuit1120. The interface circuit1120may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. In the illustrated example, one or more input devices1122are connected to the interface circuit1120. The input device(s)1122permit(s) a user to enter data and/or commands into the processor1112. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices1124are also connected to the interface circuit1120of the illustrated example. The output devices1124can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. 
The interface circuit1120of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor. The interface circuit1120of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network1126. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-site wireless system, a cellular telephone system, etc. The processor platform1100of the illustrated example also includes one or more mass storage devices1128for storing software and/or data. Examples of such mass storage devices1128include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. The machine executable instructions1132ofFIGS.6-9may be stored in the mass storage device1128, in the volatile memory1114, in the non-volatile memory1116, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD. The example mass storage device1128can implement the example tenant data store530ofFIG.5, for example. A block diagram illustrating an example software distribution platform1205to distribute software such as the example computer readable instructions1032ofFIG.10and/or the example computer readable instructions1132ofFIG.11to third parties is illustrated inFIG.12. The example software distribution platform1205can be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. The third parties can be customers of the entity owning and/or operating the software distribution platform. For example, the entity that owns and/or operates the software distribution platform may be a developer, a seller, and/or a licensor of software such as the example computer readable instructions1032ofFIG.10and/or the example computer readable instructions1132ofFIG.11. The third parties may be consumers, users, retailers, OEMs, etc., who purchase and/or license the software for use and/or re-sale and/or sub-licensing. In the illustrated example, the software distribution platform1205includes one or more servers and one or more storage devices. The storage devices store the computer readable instructions1032, which may correspond to the example computer readable instructions700ofFIGS.7,8, and/or9, and/or store the computer readable instructions1132, which may correspond to the example computer readable instructions700ofFIGS.7,8, and/or9, as described above. The one or more servers of the example software distribution platform1205are in communication with a network1210, which can correspond to any one or more of the Internet and/or any of the example networks150,1026,1126described above. In some examples, the one or more servers are responsive to requests to transmit the software to a requesting party as part of a commercial transaction. Payment for the delivery, sale and/or license of the software can be handled by the one or more servers of the software distribution platform and/or via a third party payment entity. 
The servers enable purchasers and/or licensors to download the computer readable instructions1032,1132from the software distribution platform1205. For example, the software, which can correspond to the example computer readable instructions700ofFIGS.7,8, and/or9, can be downloaded to the example processor platform1000, which is to execute the computer readable instructions1032to implement the example provisioning controller220ofFIGS.2,3, and/or4. Additionally or alternatively, the software, which can correspond to the example computer readable instructions700ofFIGS.7,8, and/or9, can be downloaded to the example processor platform1100, which is to execute the computer readable instructions1132to implement the example tenant administrator230ofFIGS.2and/or5. In some examples, one or more servers of the software distribution platform1205periodically offer, transmit, and/or force updates to the software (e.g., the example computer readable instructions1032ofFIG.10, the example computer readable instructions1132ofFIG.11) to ensure improvements, patches, updates, etc. are distributed and applied to the software at the end user devices. From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that enable dynamic composition and adjustment of provisioned resources based on feature toggles. Such feature toggles can be maintained, applied, and updated in a hierarchical or organizational manner based on an associated tenant, system user, and/or other criterion for selective application of one or more feature toggle values. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by reducing or eliminating down-time involved in software and/or other functionality updates resulting from testing, releasing, and/or updating program code and/or other features. That is, processor uptime can be increased, processor responsiveness can be improved, and processor cycles can be used more efficiently to perform automation tasks, rather than waiting for a user/system to refresh a cache and/or provide other updates offline. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer. Rather than storing platform functionality/features in a cache or other transient memory, which is cleared when the cloud and/or other virtualized environment and/or a component of the environment is refreshed, reset, restarted, rebooted, etc., feature toggles representing platform functionality/features are stored in a database and/or other data store that persists across refreshes/restarts. Additionally, the database provides a larger, more flexible, more dynamic body for toggle data storage, not limited by the size of a cache or RAM or similar memory. Instead, the persistent database can be resized to accommodate a variety of feature toggles according to a hierarchy and/or other organization. Updates to the database can be made instantly available to a querying service, rather than requiring a system reboot, manual reconfiguration, etc. As such, changes to feature toggle content in the database are separated from and operate independently of provisioning services and control to provide functionality to users of the cloud platform. In certain examples, by centralizing feature toggles in a database, any part of the data center can access feature toggles, change feature toggle values, use feature toggles in resource provisioning, etc. 
Storing feature toggles in the database allows the toggles to be modified with REST requests, without code changes. Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. Example methods, apparatus, systems, and articles of manufacture to dynamically determine and exercise feature toggles based on tenant and/or other criterion in a virtualized computing environment are disclosed herein. Further examples and combinations thereof include the following: Example 1 is a resource provisioning apparatus including: a request processor to process a provisioning request in a software-defined data center for provisioning of a resource; and a toggle manager. The example toggle manager is to: determine a feature toggle associated with the resource of the provisioning request, the resource associated with a tenant of the software-defined data center, the tenant identified using a first tenant identifier; retrieve the feature toggle from a database using the first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; process the feature toggle to provision the resource according to the first value of the feature toggle; and facilitate provisioning of the resource according to the first value. Example 2 includes example 1 and further includes: a content retriever to retrieve information for the resource of the provisioning request; a content compiler to combine the information with the first value of the feature toggle; and a result transmitter to transmit the combined information with first value to provision the resource. Example 3 includes example 1, wherein the toggle manager includes: a toggle interface to analyze the provisioning request to determine the feature toggle associated with the resource of the provisioning request; a tenant identifier to identify the tenant and the associated first tenant identifier; and a toggle processor. The example toggle processor is to: retrieve the feature toggle from the database using the first tenant identifier; process the feature toggle to provision the resource according to the first value of the feature toggle; and facilitate provisioning of the resource according to the first value. Example 4 includes example 3, wherein the toggle manager further includes a toggle updater to update the feature toggle in the database. Example 5 includes example 1, wherein the toggle manager is to select a processing strategy for toggle retrieval, the processing strategy corresponding to a format for the first value and the second value of the feature toggle. Example 6 includes example 1, wherein the database is to organize feature toggles and associated values according to a hierarchy, the hierarchy including system values and tenant-specific values for one or more feature toggles, including a first tenant-specific value to override a first system value for the respective tenant. Example 7 includes example 1, wherein the toggle manager is to analyze the provisioning request to identify the feature toggle associated with the resource of the provisioning request and an associated toggle operation, the toggle operation to include at least one of a toggle update, a toggle query, or a get toggles request. 
Example 8 is at least one non-transitory computer readable storage medium including instructions that, when executed, cause at least one processor to at least: determine, using a first service, a feature toggle associated with a resource of a provisioning request, the resource associated with a tenant of a software-defined data center, the tenant identified using a first tenant identifier; retrieve, using a second service, the feature toggle from a database using the first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; process, using the second service, the feature toggle to provision the resource according to the first value of the feature toggle; and facilitate provisioning of the resource according to the first value. Example 9 includes example 8, wherein the instructions, when executed, further cause the at least one processor to: retrieve information for the resource of the provisioning request; combine the information with the first value of the feature toggle; and transmit the combined information with first value to provision the resource. Example 10 includes example 8, wherein the instructions, when executed, further cause the at least one processor to update the feature toggle in the database. Example 11 includes example 8, wherein the instructions, when executed, further cause the at least one processor to select a processing strategy for toggle retrieval, the processing strategy corresponding to a format for the first value and the second value of the feature toggle. Example 12 includes example 8, wherein the instructions, when executed, further cause the at least one processor to organize feature toggles and associated values according to a hierarchy, the hierarchy including system values and tenant-specific values for one or more feature toggles, including a first tenant-specific value to override a first system value for the respective tenant. Example 13 includes example 8, wherein the instructions, when executed, further cause the at least one processor to analyze the provisioning request to identify the feature toggle associated with the resource of the provisioning request and an associated toggle operation, the toggle operation to include at least one of a toggle update, a toggle query, or a get toggles request. Example 14 is a method to process a feature toggle in a provisioning request. The method includes: determining, by executing an instruction with at least one processor, a feature toggle associated with a resource of the provisioning request, the resource associated with a tenant of a software-defined data center, the tenant identified using a first tenant identifier; retrieving the feature toggle from a database using the first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; processing, by executing an instruction with the at least one processor, the feature toggle to provision the resource according to the first value of the feature toggle; and facilitating provisioning of the resource according to the first value. Example 15 includes example 14 and further includes: retrieving information for the resource of the provisioning request; combining the information with the first value of the feature toggle; and transmitting the combined information with first value to provision the resource. 
Example 16 includes example 14 and further includes updating the feature toggle in the database. Example 17 includes example 14 and further includes selecting a processing strategy for toggle retrieval, the processing strategy corresponding to a format for the first value and the second value of the feature toggle. Example 18 includes example 14 and further includes organizing feature toggles and associated values according to a hierarchy, the hierarchy including system values and tenant-specific values for one or more feature toggles, including a first tenant-specific value to override a first system value for the respective tenant. Example 19 includes example 14 and further includes analyzing the provisioning request to identify the feature toggle associated with the resource of the provisioning request and an associated toggle operation, the toggle operation to include at least one of a toggle update, a toggle query, or a get toggles request. Example 20 is a resource provisioning apparatus including: memory circuitry to include instructions; and at least one processor to execute the instructions to at least: determine a feature toggle associated with a resource of a provisioning request, the resource associated with a tenant of a software-defined data center, the tenant identified using a first tenant identifier; retrieve the feature toggle from a database using the first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; process the feature toggle to provision the resource according to the first value of the feature toggle; and facilitate provisioning of the resource according to the first value. Example 21 is an apparatus to provision a resource in a cloud environment. The example apparatus includes: means for determining a feature toggle associated with a resource of a provisioning request, the resource associated with a tenant identified using a first tenant identifier; means for retrieving the feature toggle from a database using the first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; and means for processing the feature toggle to provision the resource according to the first value of the feature toggle. Example 22 is a server to distribute first instructions on a network. The example the server includes: at least one storage device including second instructions; and at least one processor to execute the second instructions to transmit the first instructions over the network, the first instructions, when executed, to cause at least one device to: determine a feature toggle associated with a resource of a provisioning request, the resource associated with a tenant of a software-defined data center, the tenant identified using a first tenant identifier; retrieve the feature toggle from a database using the first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; process the feature toggle to provision the resource according to the first value of the feature toggle; and facilitate provisioning of the resource according to the first value. The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.
11861403
DETAILED DESCRIPTION The detailed description of the appended drawings is intended as a description of the currently preferred embodiments of the present disclosure, and is not intended to represent the only form in which the present disclosure may be practiced. It is to be understood that the same or equivalent functions may be accomplished by different embodiments that are intended to be encompassed within the spirit and scope of the present disclosure. FIG.1illustrates a schematic block diagram of a processing system100in accordance with an exemplary embodiment of the present disclosure. The processing system100includes a set of processor cores of which first and second processor cores102aand102bare shown. The first processor core102aand the second processor core102bare interchangeably referred to as “the processor core102a” and “the processor core102b”, respectively. The processing system100further includes a set of communication buses104, a shared memory106, first and second accelerator circuits108aand108b, and first and second thread management circuits110aand110b. It will be understood by those of ordinary skill in the art that the processing system100may include various other circuits and systems (e.g., communication modules, device drivers, peripherals, or the like) for its operation, which are not shown in order not to obscure the disclosure. The first accelerator circuit108aand the second accelerator circuit108bare interchangeably referred to as “the accelerator circuit108a” and “the accelerator circuit108b”, respectively. The first thread management circuit110aand the second thread management circuit110bare interchangeably referred to as “the thread management circuit110a” and “the thread management circuit110b”, respectively. Each processor core102aand102bis coupled to the set of communication buses104and is configured to execute (or run) various applications. For example, as shown inFIG.1, the first processor core102aexecutes an application112. Examples of such applications may include, but are not limited to, image processing applications, audio or video processing applications, machine learning applications, data processing applications, network routing applications, or the like. In one embodiment, each processor core102aand102bis configured to run multiple applications concurrently. However, for the sake of brevity, the first processor core102ais shown to execute a single application (i.e., the application112). The application112may include various portions (or threads) of which first through third portions112a-112care shown. Hereinafter, the terms “portion” and “thread” are used interchangeably throughout the disclosure. Concept of threads is well known to those of skill in the art. Each processor core102aand102bis configured to communicate various requests via the set of communication buses104to the first and second thread management circuits110aand110bfor hardware-accelerated execution of one or more portions of the executed applications. For example, the first processor core102amay communicate a first request to the first thread management circuit110afor execution (i.e., hardware-accelerated execution) of the first portion112aby the first accelerator circuit108a. In other words, the first processor core102acommunicates the first request to the first thread management circuit110ato offload the execution of the first portion112ato the first accelerator circuit108a. 
The first request includes an identifier (e.g., a domain identifier) of the first processor core102aand an identifier of the first accelerator circuit108a. Further, the first request is indicative of a set of operations to be performed by the first accelerator circuit108afor executing the first portion112a. After offloading the execution of the one or more portions, each processor core102aand102binitiates the execution of a subsequent portion of the corresponding executed applications. For example, after offloading the execution of the first portion112a, the first processor core102ainitiates the execution of another portion (e.g., the second portion112band/or the third portion112c) of the application112. Each processor core102aand102bis further configured to communicate thread joining requests to the first and second thread management circuits110aand110bfor joining one or more offloaded portions with other portions being executed by the corresponding processor cores102aand102b. For example, the first processor core102amay communicate a thread joining request to the first thread management circuit110afor joining the first portion112awith the third portion112cbeing executed by the first processor core102a. When the thread joining requests are communicated by the first and second processor cores102aand102b, each processor core102aand102bhalts the execution of the portions that are currently being executed, until a result of the execution of the one or more offloaded portions is received by each processor core102aand102b. For example, when a thread joining request is communicated by the first processor core102a, the first processor core102ahalts the execution of the current portion (e.g., the third portion112c) until a result of the execution of the first portion112ais received by the first processor core102a. Each thread joining request may be one of a first type (i.e., Join (id)), a second type (i.e., Join (all)), or a third type (i.e., Join (any)). A thread joining request of the first type, communicated by the first processor core102aor the second processor core102b, is a request for enquiring whether execution of a specific offloaded portion is complete. A thread joining request of the second type, communicated by the first processor core102aor the second processor core102b, is a request for enquiring whether execution of each previously offloaded portion is complete. A thread joining request of the third type, communicated by the first processor core102aor the second processor core102b, is a request for enquiring whether execution of any offloaded portion is complete. Thread joining requests are explained in further detail in conjunction withFIG.3. Based on the result of the execution of each of the one or more offloaded portions, each processor core102aand102bis configured to integrate the executed one or more portions with the corresponding executed applications and resume the halted execution of other portions. For example, the first processor core102aintegrates the first portion112awith the application112and resumes the execution of the third portion112c. Operations performed by the first processor core102aare explained in detail in conjunction withFIGS.2A and2B,3,5A and5B, and6A-6C. Each processor core102aand102bmay be implemented by way of central processing units, processors, microprocessors, electronic control units, microcontroller units, and the like. 
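As a rough Python illustration of the three thread joining request types described above, each type can be reduced to a predicate over a completion map (thread identifier to done/not-done); the enum and function names below are hypothetical, and the hardware handshake between the processor cores and the thread management circuits is omitted entirely.

```python
from enum import Enum, auto

class JoinType(Enum):
    JOIN_ID = auto()    # Join (id): is one specific offloaded portion complete?
    JOIN_ALL = auto()   # Join (all): is every previously offloaded portion complete?
    JOIN_ANY = auto()   # Join (any): is any offloaded portion complete?

def join_satisfied(join_type, completion, thread_id=None):
    """completion maps a thread identifier to True once its execution has finished."""
    if join_type is JoinType.JOIN_ID:
        return completion.get(thread_id, False)
    if join_type is JoinType.JOIN_ALL:
        return all(completion.values())
    return any(completion.values())              # JOIN_ANY
```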
Although the processing system100is shown to include two processor cores (i.e., the first and second processor cores102aand102b), it will be apparent to those of skill in the art that the processing system100may include any number of processor cores without deviating from the scope of the disclosure. The set of communication buses104is configured to facilitate communication between the first and second processor cores102aand102b, the shared memory106, the first and second accelerator circuits108aand108b, the first and second thread management circuits110aand110b, and any other component on the processing system100. For example, the set of communication buses104receives the first request from the first processor core102aand communicates the first request to the first thread management circuit110a. The set of communication buses104may include a set of system buses, a set of peripheral buses, a set of address buses, a set of data buses, a set of control buses, a set of user buses, or a combination thereof. The set of communication buses104may be compliant with various bus protocols. The bus protocols may include, but are not limited to, an advanced microcontroller bus architecture (AMBA) protocol, an advanced high-performance bus (AHB) protocol, or the like. The bus protocols may further include an advanced system bus (ASB) protocol, an advanced peripheral bus (APB) protocol, an advanced extensible interface (AXI) protocol, or the like. The shared memory106is coupled to the set of communication buses104and is configured to store data (i.e., input data) required for the execution of the one or more offloaded portions. The shared memory106is further configured to store results of the execution of the one or more offloaded portions, for reading by the processor cores102aand102b. In one embodiment, the shared memory106may be a dynamic random-access memory (DRAM) that is external to the first and second processor cores102aand102b. In another embodiment, the shared memory106may be a static random-access memory (SRAM) that is located within the first processor core102a. In some embodiments, the shared memory106may include specific segments or portions, each of which may be assigned to the accelerator circuit108aor108bfor execution of portions or threads by the corresponding accelerator circuit108aor108b. The first accelerator circuit108ais coupled to the set of communication buses104and the first thread management circuit110a. Similarly, the second accelerator circuit108bis coupled to the set of communication buses104and the second thread management circuit110b. Each accelerator circuit108aand108bincludes dedicated hardware that is configured to execute the one or more portions of the various applications being executed by the first and second processor cores102aand102b. In other words, the execution of the one or more portions of the various applications may be offloaded to each accelerator circuit108aand108bby the first and second processor cores102aand102b. The first accelerator circuit108ais further configured to receive acceleration requests for execution of the one or more portions from the first thread management circuit110a. The first accelerator circuit108ais further configured to communicate, to the first thread management circuit110a, acceleration responses following completion of the execution of the one or more portions by the first accelerator circuit108a. 
Similarly, the second accelerator circuit108bis configured to receive acceleration requests for execution of the one or more portions from the second thread management circuit110b. The second accelerator circuit108bis configured to communicate, to the second thread management circuit110b, acceleration responses following a completion of the execution of the one or more portions by the second accelerator circuit108b. Examples of the first and second accelerator circuits108aand108binclude, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), direct memory access (DMA) controllers, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like. Examples of the first and second accelerator circuits108aand108bfurther include network processors, network interface controllers, crypto-processors, artificial intelligence (AI) accelerators, systolic arrays, or the like. Although the processing system100is shown to include two accelerator circuits (i.e., the first and second accelerator circuits108aand108b), it will be apparent to those of skill in the art that the processing system100may include any number of accelerator circuits without deviating from the scope of the disclosure. The first and second thread management circuits110aand110bare coupled to the first and second processor cores102aand102bby way of the set of communication buses104. The first thread management circuit110ais further coupled to the first accelerator circuit108aand the second thread management circuit110bis further coupled to the second accelerator circuit108b. The first thread management circuit110ais configured to facilitate accelerator thread management for the execution of the one or more portions of the various applications by the first accelerator circuit108a. The first thread management circuit110ais further configured to facilitate accelerator thread management for the integration of the executed one or more portions with the various applications. The first thread management circuit110aincludes processing circuitry114and a memory element116, such that the processing circuitry114is coupled to the memory element116. The memory element116may be realized by way of a plurality of flip-flops. The memory element116is configured to store a thread identifier table118and a thread completion table120. The thread identifier table118indicates an availability (e.g., "allocated" or "available for allocation") of a set of thread identifiers of the first thread management circuit110a. It will be apparent to those of skill in the art that the set of thread identifiers may include any number of thread identifiers. For the sake of brevity, it is assumed that the set of thread identifiers includes first through fourth thread identifiers. In other words, four thread identifiers (i.e., the first through fourth thread identifiers) are available with the first thread management circuit110afor allocation. The thread completion table120is indicative of a completion status of each portion that is allocated a thread identifier of the first through fourth thread identifiers. The processing circuitry114includes suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for performing requisite operations for facilitating accelerator thread management. In one embodiment, the processing circuitry114may be realized by way of sequential or combinational logic circuits. 
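A behavioral (software) approximation of the thread identifier table 118 and the thread completion table 120 is sketched below for illustration; it is not the flip-flop implementation described above, and the class and method names (ThreadTables, allocate, mark_complete, release) are assumptions made for this sketch.

```python
class ThreadTables:
    """Behavioral stand-in for the thread identifier table and thread completion table."""

    def __init__(self, size=4):                    # first through fourth thread identifiers
        self.owner = [None] * size                 # identifier table: None = available for allocation
        self.complete = [False] * size             # completion table: True once execution finishes

    def allocate(self, core_id):
        """Allocate the first available thread identifier to the requesting processor core."""
        for tid, owner in enumerate(self.owner):
            if owner is None:
                self.owner[tid] = core_id
                self.complete[tid] = False
                return tid
        return None                                # no identifier currently available

    def mark_complete(self, tid):
        self.complete[tid] = True                  # set when the acceleration response is received

    def release(self, tid):
        """Free an identifier after the thread joining response has been communicated."""
        self.owner[tid] = None
        self.complete[tid] = False
```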
The processing circuitry114is configured to receive, from the processor cores102aand102b, the requests for execution of the one or more portions by the first accelerator circuit108a. The processing circuitry114is configured to allocate, to the processor cores102aand102b, available thread identifiers for the execution of the one or more portions by the first accelerator circuit108a. The processing circuitry114is further configured to update the thread identifier table118based on the allocation of the thread identifiers. The processing circuitry114is further configured to communicate responses to the processor cores102aand102b, such that the responses include the allocated thread identifiers. The processing circuitry114is further configured to communicate, to the first accelerator circuit108a, acceleration requests for the execution of the one or more portions by the first accelerator circuit108a. The processing circuitry114is further configured to receive acceleration responses from the first accelerator circuit108a. The acceleration responses may be indicative of a completion of the execution of the one or more portions by the first accelerator circuit108a. The processing circuitry114is further configured to update the thread completion table120to indicate the completion of the execution of the one or more portions, based on the acceleration responses. The processing circuitry114is further configured to receive, from the processor cores102aand102b, thread joining requests after the reception of the requests for the execution of the one or more portions. The processing circuitry114is further configured to communicate, to the processor cores102aand102b, thread joining responses, based on the received thread joining requests. The processing circuitry114is further configured to update the thread identifier table118and the thread completion table120, based on the communication of the thread joining responses. The processor cores102aand102bmay integrate the executed one or more portions with the various applications (e.g., the first portion112awith the application112) based on the thread joining responses. It will be apparent to those of skill in the art that the second thread management circuit110bis similar to the first thread management circuit110ain terms of structure and function. In operation, the first processor core102ais configured to execute the application112and communicates the first request to the first thread management circuit110afor the execution of the first portion112aby the first accelerator circuit108a. Following the communication of the first request, the first processor core102amay execute another portion (e.g., the third portion112c) of the application112. In other words, the first processor core102amay not wait for the first portion112ato be executed before proceeding to execute other portions of the application112. Based on the received first request, the processing circuitry114allocates the first thread identifier, available in the thread identifier table118, to the first processor core102afor the execution of the first portion112a. The processing circuitry114updates the thread identifier table118to indicate the allocation of the first thread identifier to the first processor core102a. Based on the allocation of the first thread identifier, the processing circuitry114communicates a first response to the first processor core102a. 
The first response includes the allocated first thread identifier and indicates the allocation of the first thread identifier for the execution of the first portion112a. The processing circuitry114further communicates, to the first accelerator circuit108a, a first acceleration request for the execution of the first portion112a. The first acceleration request includes the allocated first thread identifier. When the execution of the first portion112ais complete, the first accelerator circuit108agenerates and communicates a first acceleration response to the first thread management circuit110a. The first acceleration response includes the first thread identifier and is indicative of the completion of the execution of the first portion112a. Further, the first accelerator circuit108amay write a first execution result of the execution of the first portion112ato the shared memory106. The processing circuitry114receives, from the first processor core102a, a thread joining request after the reception of the first request. Based on the received thread joining request and an indication by the thread completion table120that the execution of the first portion112ais complete, the processing circuitry114communicates a thread joining response to the first processor core102a. The thread joining response includes the first thread identifier and indicates the completion of the execution of the first portion112aby the first accelerator circuit108a. Based on the thread joining response, the first processor core102aaccesses the shared memory106and reads (i.e., retrieves) the first execution result from the shared memory106. The first processor core102aintegrates the executed first portion112awith the application112, based on the read first execution result. FIGS.2A and2B, collectively illustrate a process flow diagram200that represents facilitation of accelerator thread management by the first thread management circuit110a, in accordance with an exemplary embodiment of the present disclosure.FIGS.2A and2Bare explained in conjunction withFIG.1. With reference toFIG.2A, the first processor core102athat is configured to execute the application112communicates the first request to the first thread management circuit110aby way of the set of communication buses104(as shown by arrow202). In one embodiment, the first request is a function call for the execution of the first portion112aby the first accelerator circuit108a. The first request includes the identifier of the first processor core102a, the identifier of the first accelerator circuit108a, and/or an identifier of the first thread management circuit110a. The first request further includes a first address in the shared memory106that stores first input data required for executing the first portion112a. The first processor core102amay be configured to write the first input data to the shared memory106at the first address prior to communicating the first request. The first request is further indicative of a requisite first set of operations to be performed by the first accelerator circuit108afor executing the first portion112a. Based on the first request, the processing circuitry114is configured to determine an availability of a thread identifier in the thread identifier table118for allocation to the first processor core102afor the execution of the first portion112a(as shown by arrow204). 
In one example, the thread identifier table118may indicate that all four thread identifiers of the first thread management circuit110aare currently unallocated and are available for allocation. Based on the determination that one or more thread identifiers in the thread identifier table118are available, the processing circuitry114is configured to allocate one of the available thread identifiers (e.g., the first thread identifier) to the first processor core102afor the execution of the first portion112a(as shown by arrow206). Following the allocation of the first thread identifier, the processing circuitry114is configured to update the thread identifier table118to indicate that the first thread identifier is allocated to the first processor core102a(as shown by arrow208). Updating of the thread identifier table118is described in detail in conjunction withFIGS.4A-4E. The processing circuitry114is further configured to communicate a first read request to the shared memory106(as shown by the arrow210) for reading the first input data stored at the first address in the shared memory106. The first read request includes the first address of the first input data. Based on the first read request, the processing circuitry114accesses the first address in the shared memory106. Consequently, the shared memory106is configured to communicate, to the first thread management circuit110a, a first read response (as shown by arrow212). The first read response may indicate the first input data stored at the first address in the shared memory106. Based on the first read response, the processing circuitry114is further configured to validate the first input data stored at the first address (as shown by arrow214). In other words, the processing circuitry114determines whether the first input data, stored at the first address, is valid data for the execution of the first portion112a. In an embodiment, if the processing circuitry114fails to validate the first input data stored at the first address, the processing circuitry114may generate and communicate a first exception to the first processor core102a. The first exception may indicate that the first input data stored at the first address is invalid. In a non-limiting example, it is assumed that the processing circuitry114successfully validates the first input data stored at the first address. Based on the successful validation of the first input data, the processing circuitry114is configured to communicate a first response to the first processor core102a(as shown by arrow216). The first response includes the first thread identifier allocated to the first processor core102afor the execution of the first portion112a. Further, based on the successful validation of the first input data, the processing circuitry114is configured to communicate a first acceleration request to the first accelerator circuit108afor the execution of the first portion112aby the first accelerator circuit108a(as shown by arrow218). In other words, the processing circuitry114communicates the first acceleration request based on the reading of the first input data stored at the first address. The first acceleration request includes the first thread identifier and the first address of the first input data. The first acceleration request is also indicative of the first set of operations to be performed on the first input data for executing the first portion112a. 
Based on the first acceleration request, the first accelerator circuit108ais configured to execute the first portion112a(as shown by arrow220). The first accelerator circuit108aexecutes the first portion112aby performing the first set of operations on the first input data and generates a first execution result. Upon execution of the first portion112a, the first accelerator circuit108ais configured to communicate a first write request to the shared memory106for writing to the shared memory106, the first execution result (as shown by arrow222a). When the first execution result is successfully written to the shared memory106, the shared memory106communicates a first write response to the first accelerator circuit108a(as shown by arrow222b). Further, based on the first write response from the shared memory106(i.e., when the execution of the first portion112ais complete), the first accelerator circuit108ais configured to generate a first acceleration response (as shown by arrow224) and communicate the first acceleration response to the first thread management circuit110a(as shown by arrow226). The first acceleration response includes the first thread identifier and is indicative of a completion of the execution of the first portion112aby the first accelerator circuit108a. The first acceleration response may further include a first result address where the first execution result is stored in the shared memory106. Based on the received first acceleration response, the processing circuitry114is configured to update the thread completion table120to indicate that the execution of the first portion112a, allocated to the first thread identifier, is complete (as shown by arrow228). Update of the thread completion table120is described in detail in conjunction withFIGS.4A-4E. In a non-limiting example, the execution of the first portion112amay correspond to determination of a Fast Fourier Transform (FFT) of a first signal (i.e., the first input data). In such a scenario, the first request may include the first address of the first signal, stored in the shared memory106, and the set of operations may be indicative of FFT to be performed on the first signal. The first execution result, stored at the first result address, may be the Fourier Transform of the first signal. With reference toFIG.2B, the first processor core102ais further configured to communicate a second request to the first thread management circuit110aby way of the set of communication buses104(as shown by arrow230). For the sake of brevity, it is assumed that the second request is communicated after the updating of the thread completion table120based on the first acceleration response. However, it will be apparent to those of skill in the art that the second request may be communicated at any time-instance after the communication of the first request. In one embodiment, the second request is a function call for the execution of the second portion112bby the first accelerator circuit108a. In other words, the first processor core102acommunicates the second request to the first thread management circuit110afor hardware-accelerated execution of the second portion112b. The second request includes the identifier of the first processor core102a, the identifier of the first accelerator circuit108a, and/or the identifier of the first thread management circuit110a. The second request further includes a second address in the shared memory106that stores second input data required for executing the second portion112b. 
The first processor core102amay be configured to write the second input data to the shared memory106prior to communicating the second request. The second request is further indicative of a second set of operations to be performed for executing the second portion112b. Based on the second request, the processing circuitry114is configured to determine an availability of a thread identifier in the thread identifier table118for allocation to the first processor core102afor the execution of the second portion112b(as shown by arrow232). In one example, the thread identifier table118may indicate that the first thread identifier is already allocated to the first processor core102afor the execution of the first portion112a. The thread identifier table118may further indicate that each of the second through fourth thread identifiers is available for allocation. Based on the determination that one or more thread identifiers in the thread identifier table118are available, the processing circuitry114is configured to allocate one of the available thread identifiers (e.g., the second thread identifier) to the first processor core102afor the execution of the second portion112b(as shown by arrow234). Following the allocation of the second thread identifier, the processing circuitry114is configured to update the thread identifier table118to indicate that the second thread identifier is allocated to the first processor core102a(as shown by arrow236). Updating of the thread identifier table118is described in detail in conjunction withFIGS.4A-4E. The processing circuitry114is further configured to communicate a second read request to the shared memory106(as shown by the arrow238) for reading the second input data stored at the second address in the shared memory106. The second read request includes the second address of the second input data. Based on the second read request, the processing circuitry114accesses the second address in the shared memory106. Consequently, the shared memory106is configured to communicate, to the first thread management circuit110a, a second read response (as shown by arrow240). The second read response may indicate the second input data stored at the second address in the shared memory106. Based on the second read response, the processing circuitry114is further configured to validate the second input data stored at the second address (as shown by arrow242). In other words, the processing circuitry114determines whether the second input data, stored at the second address, is valid data for the execution of the second portion112b. In an embodiment, if the processing circuitry114fails to validate the second input data stored at the second address, the processing circuitry114may generate and communicate a second exception to the processor core102a. The second exception may indicate that the second input data stored at the second address is invalid. In a non-limiting example, it is assumed that the processing circuitry114successfully validates the second input data stored at the second address. Based on the successful validation of the second input data, the processing circuitry114is configured to communicate a second response to the first processor core102a(as shown by arrow244). The second response includes the second thread identifier allocated to the first processor core102afor the execution of the second portion112b. Further, the second response may be indicative of the communication of the second acceleration request for the execution of the second portion112b. 
Further based on the successful validation of the second input data, the processing circuitry114communicates a second acceleration request to the first accelerator circuit108afor the execution of the second portion112b(as shown by arrow246). In other words, the processing circuitry114communicates the second acceleration request based on the reading of the second input data stored at the second address. The second acceleration request includes the second thread identifier and the second address of the second input data in the shared memory106. The second acceleration request is also indicative of the second set of operations to be performed on the second input data for executing the second portion112b. Based on the second acceleration request, the first accelerator circuit108aexecutes the second portion112b(as shown by arrow248). The first accelerator circuit108aexecutes the second portion112bby performing the second set of operations on the second input data and generates a second execution result. Upon execution of the second portion112b, the first accelerator circuit108ais configured to communicate a second write request to the shared memory106for writing to the shared memory106, the second execution result (as shown by arrow250a). When the second execution result is successfully written to the shared memory106, the shared memory106communicates a second write response to the first accelerator circuit108a(as shown by arrow250b). Further, based on the second write response from the shared memory106(i.e., when the execution of the second portion112bis complete), the first accelerator circuit108ais configured to generate a second acceleration response (as shown by arrow252) and communicate the second acceleration response to the first thread management circuit110a(as shown by arrow254). The second acceleration response includes the second thread identifier and is indicative of a completion of the execution of the second portion112bby the first accelerator circuit108a. The second acceleration response may further include the second result address where the second execution result is stored in the shared memory106. Based on the received second acceleration response, the processing circuitry114is configured to update the thread completion table120to indicate that the execution of the second portion112b, allocated to the second thread identifier, is complete (as shown by arrow256). Updating of the thread completion table120is described in detail in conjunction withFIGS.4A-4E. FIG.3is a process flow diagram300that represents facilitation of an integration of the executed first and second portions112aand112bwith the application112by the first thread management circuit110a, in accordance with an exemplary embodiment of the present disclosure.FIG.3is explained in conjunction withFIGS.2A and2B. The first processor core102ais configured to communicate a thread joining request to the first thread management circuit110a(as shown by arrow302). The first processor core102amay communicate the thread joining request at any time-instance after the communication of the first request. In one embodiment, the thread joining request is of the first type (i.e., Join (id)) and includes the first thread identifier. Therefore, the thread joining request is a request for joining the first portion112a. In one embodiment, the first thread management circuit110areceives the thread joining request after the reception of the first acceleration response. 
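The request-to-completion sequence traced above for FIGS. 2A and 2B can be summarized, for illustration only, by the synchronous Python function below, which reuses the ThreadTables sketch introduced earlier. The shared_memory and accelerator objects, the is_valid check, and the returned status values are assumptions; in the described system these steps are performed by separate hardware elements, the response carrying the thread identifier is communicated before execution completes, and the exchange is asynchronous rather than a single function call.

```python
def offload_portion(request, tables, shared_memory, accelerator):
    # Allocate an available thread identifier and record it in the identifier table.
    tid = tables.allocate(request["core_id"])
    if tid is None:
        return {"status": "no_identifier_available"}

    # Read the input data at the supplied address and validate it.
    input_data = shared_memory.read(request["input_address"])
    if not is_valid(input_data):
        tables.release(tid)
        return {"status": "exception", "reason": "invalid input data"}

    # Acceleration request, execution of the portion, and write-back of the result.
    result = accelerator.execute(request["operations"], input_data)
    result_address = shared_memory.write(result)

    # Acceleration response: mark the portion complete in the completion table.
    tables.mark_complete(tid)
    return {"status": "offloaded", "thread_id": tid, "result_address": result_address}

def is_valid(data):
    return data is not None                        # placeholder validation rule
```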
Based on the first thread identifier included in the thread joining request and the thread completion table120, the first thread management circuit110ais configured to determine a completion status of the first portion112athat is allocated the first thread identifier (as shown by arrow304). Based on the thread completion table120that is updated based on the first acceleration response, the processing circuitry114determines that the execution of the first portion112ais complete. Consequently, the processing circuitry114is configured to communicate, to the first processor core102a, a thread joining response (as shown by arrow306). The thread joining response includes the first thread identifier and the first result address where the first execution result is stored in the shared memory106. The thread joining response indicates to the first processor core102athat the execution of the first portion112aby the first accelerator circuit108ais complete. After communicating the thread joining response, the processing circuitry114is configured to update the thread identifier table118and the thread completion table120to indicate that the first thread identifier is available for re-allocation (as shown by arrow308). In other words, the first thread identifier is now available for allocation to any other portion, of any application, for execution by the first accelerator circuit108a. Upon receiving the thread joining response, the first processor core102ais configured to communicate a third read request to the shared memory106for reading the first execution result (as shown by arrow310). The third read request includes the first result address. Based on the third read request, the shared memory106communicates a third read response, including the first execution result, to the first processor core102a(as shown by arrow312). Based on the third read response that includes the first execution result, the first processor core102aintegrates the executed first portion112awith the application112(as shown by arrow314). In other words, the first processor core102amay use the first execution result to join the first portion112awith the application112. For example, the first processor core102amay further execute one or more other portions (e.g., the third portion112c) of the application112based on the first execution result. Methods of integrating or (joining) executed portions or threads in parallel programming or parallel computing environments are well known to those of skill in the art. In another embodiment, the thread joining request of the first type is received after the first request, but prior to the completion of the execution of the first portion112a. In such a scenario, the processing circuitry114halts the communication of the thread joining response until the thread completion table120indicates that execution of the first portion112ais complete. In other words, the processing circuitry114waits until the thread completion table120indicates that execution of the first portion112ais complete. When the thread completion table120indicates that the execution of the first portion112ais complete, the processing circuitry114communicates the thread joining response to the first processor core102a. It will be apparent to a person of ordinary skill in the art that when the thread joining request is of the first type and includes the second thread identifier instead of the first thread identifier, similar steps may be followed for integrating the executed second portion112bwith the application112. 
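A similarly hedged sketch of the first-type (Join (id)) handling described above, building on the toy model introduced earlier, may help clarify the wait-then-respond behavior. The helper names and the polling loop are illustrative assumptions; the disclosure describes hardware circuitry that halts the thread joining response rather than software polling.

import time

def record_acceleration_response(tmc, tid, result_address):
    """Models receipt of an acceleration response for thread identifier tid."""
    tmc.completion_table[tid] = True
    tmc.result_addresses[tid] = result_address

def join_by_id(tmc, core_id, tid, poll_interval=0.001):
    """Join (id): block until the portion allocated tid is complete, then
    return its result address and free the identifier for re-allocation."""
    assert tmc.id_table[tid] == core_id, "identifier not allocated to this core"
    # The hardware halts the thread joining response; this model simply polls,
    # assuming record_acceleration_response() is invoked from elsewhere.
    while not tmc.completion_table[tid]:
        time.sleep(poll_interval)
    result_address = tmc.result_addresses[tid]
    tmc.id_table[tid] = None                 # identifier available for re-allocation
    tmc.completion_table[tid] = False
    return tid, result_address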
In another embodiment, the thread joining request is of the second type (i.e., Join (all)) and is communicated by the first processor core102aafter the communication of the first and second requests. Upon receiving the thread joining request of the second type, the processing circuitry114is configured to determine a completion status of each portion (i.e., each previous portion) that is associated with the first processor core102aand is currently allocated a thread identifier in the thread identifier table118. The determination of the completion status of each portion is based on the thread identifier table118and the thread completion table120. For example, the processing circuitry114determines a completion status of each of the first and second portions112aand112bthat are associated with the first processor core102aand are allocated the respective first and second thread identifiers. If the thread completion table120indicates that the execution of both the first and second portions112aand112bis complete, the processing circuitry114is configured to communicate a thread joining response to the first processor core102a. The thread joining response includes the first and second thread identifiers, and the first and second result addresses where the respective first and second execution results are stored in the shared memory106. The thread joining response indicates that the execution of the first and second portions112aand112bby the first accelerator circuit108ais complete. However, if the first thread management circuit110adetermines, based on the thread completion table120, that the execution of any of the first and second portions112aand112bis incomplete, the processing circuitry114halts the communication of the thread joining response. The processing circuitry114halts the communication of the thread joining response until the thread completion table120indicates that execution of both the first and second portions112aand112bis complete. In other words, the processing circuitry114waits until the thread completion table120indicates that execution of the first and second portions112aand112b, that are allocated the respective first and second thread identifiers, is complete (i.e., completion status). When the thread completion table120indicates that the execution of the both the first and second portions112aand112bis complete, the processing circuitry114communicates the thread joining response to the first processor core102a. The thread joining response includes the first and second thread identifiers and the first and second result addresses. After the communication of the thread joining response, the processing circuitry114is configured to update the thread identifier table118to indicate that the first and second thread identifiers are available for re-allocation. In other words, the first and second thread identifiers are available for allocation to any other portion(s) of any application, for execution by the first accelerator circuit108a. Upon receiving the thread joining response, the first processor core102ais configured to communicate a third read request to the shared memory106for reading the first and second execution results. The third read request includes the first and second result addresses. Based on the third read request, the shared memory106communicates a third read response, including the first and second execution results, to the first processor core102a. 
Based on the third read response, the first processor core102aintegrates the executed first and second portions112aand112bwith the application112. In another embodiment, the thread joining request is of the third type and is communicated after the communication of the first and second requests. Upon receiving the thread joining request of the third type, the processing circuitry114is configured to determine a completion status of each portion (i.e., each previous portion) that is associated with the first processor core102aand is currently allocated a thread identifier in the thread identifier table118. The determination of the completion status of each portion is based on the thread identifier table118and the thread completion table120. For example, the processing circuitry114determines a completion status of each of the first and second portions112aand112bthat are associated with the first processor core102aand are allocated the first and second thread identifiers, respectively. If the thread joining request is of the third type and the thread completion table120indicates that the execution of the first portion112ais complete but the execution of the second portion112bis incomplete, the processing circuitry114communicates a thread joining response to the first processor core102a. The thread joining response includes the first result address and indicates of the completion of the execution of the first portion112a. The thread joining response may further indicate that the execution of the second portion112bis incomplete. In other words, if the thread joining request is of the third type and is received after the reception of the second request and the completion of the execution of the first portion112a, but prior to the completion of the execution of the second portion112b, the thread joining response that includes only the first result address is communicated to the first processor core102a. Therefore, the processing circuitry114does not wait for the completion of the execution of the second portion112bfor communicating the thread joining response to the first processor core102a. As described in the foregoing, the first processor core102aintegrates the executed first portion112awith the application112, based on the first execution result. Further, the processing circuitry114updates the thread identifier table118to indicate that the first thread identifier is available for re-allocation. However, if the thread joining request is of the third type and the thread completion table120indicates that the execution of both the first and second portions112aand112bis complete, the processing circuitry114communicates a thread joining response, including the first and second thread identifiers and the first and second result addresses, to the first processor core102a. The thread joining response indicates that the execution of the first and second portions112aand112bis complete. As described in the foregoing, the first processor core102aintegrates the executed first and second portions112aand112bwith the application112, based on the first and second execution results, respectively. Further, the processing circuitry114updates the thread identifier table118to indicate that the first and second thread identifiers are available for re-allocation. 
However, if the thread joining request is of the third type and if the thread completion table120indicates that the execution of neither the first portion112anor the second portion112bis complete, the processing circuitry114communicates a completion-pending response to the first processor core102a. The completion-pending response indicates that execution of neither the first portion112anor the second portion112bis complete. In other words, the completion-pending response indicates that the execution of both the first and second portions112aand112bis pending. After receiving the completion-pending response, the first processor core102amay again communicate another thread joining request to the first thread management circuit110aafter a pre-determined time-interval. FIGS.4A-4E, collectively illustrate the thread identifier table118and the thread completion table120at various time-instances in accordance with an exemplary embodiment of the present disclosure.FIGS.4A-4Eare explained in conjunction withFIGS.2A,2B, and3. With reference toFIG.4A, the thread identifier table118is shown to include first through fourth columns402a-402dand first and second rows404aand404b. The first through fourth columns402a-402dcorrespond to the first through fourth thread identifiers, respectively, and the first and second rows404aand404bcorrespond to the first and second processor cores102aand102b, respectively.FIG.4Acorresponds to a first time-instance that is prior to the reception of the first and second requests by the first thread management circuit110a. Therefore, inFIG.4A, each cell (not labelled) of the thread identifier table118indicates that each of the first through fourth thread identifiers is currently (i.e., at the first time-instance) not allocated to any of the first and second processor cores102aand102b. In one embodiment, when a value of each cell of a column equals “0”, the corresponding thread identifier is considered as unallocated by the processing circuitry114. For example, as shown inFIG.4A, each cell of the first column402aequals “0” at the first time-instance. Thus, at the first time-instance, the first thread identifier corresponding to the first column402ais currently available for allocation. As a corollary, when a value of a cell of a column equals “1”, the corresponding thread identifier is considered as allocated by the processing circuitry114. In an alternate embodiment, when a value of each cell of a column equals “1”, the corresponding thread identifier is considered as unallocated by the processing circuitry114. As a corollary, when a value of a cell of a column equals “0”, the corresponding thread identifier is considered as allocated by the processing circuitry114. The thread completion table120is shown to include fifth through eighth columns406a-406dand third and fourth rows408aand408b. The fifth through eighth columns406a-406dcorrespond to the first through fourth thread identifiers, respectively, and the third and fourth rows408aand408bcorrespond to the first and second processor cores102aand102b, respectively. In one embodiment, when a value of any cell of a column equals “1”, execution of a portion that is allocated a corresponding thread identifier (e.g., the first thread identifier) is considered as complete by the processing circuitry114. As a corollary, when a value of each cell of a column (e.g., the first column402a) equals “0”, execution of a portion that is allocated a corresponding thread identifier is considered as incomplete by the processing circuitry114. 
Since, at the first time-instance, the first through fourth thread identifiers are unallocated, a value of each cell of the thread completion table120equals “0”. In an alternate embodiment, when a value of any cell of a column equals “0”, execution of a portion that is allocated a corresponding thread identifier (e.g., the first thread identifier) is considered as complete by the processing circuitry114. As a corollary, when a value of each cell of a column (e.g., the first column402a) equals “1”, execution of a portion that is allocated a corresponding thread identifier is considered as incomplete by the processing circuitry114. With reference toFIG.4B, a second time-instance after the reception of the first request by the first thread management circuit110a, but prior to the reception of the first acceleration response by the first thread management circuit110ais shown. At the second time-instance, based on the first request, the processing circuitry114allocates the first thread identifier to the first portion112a. Thus, at the second time-instance, the processing circuitry114updates the thread identifier table118to update the value a first cell corresponding to the first column402aand the first row404afrom ‘0’ to “1”, indicating that the first thread identifier is allocated to the first processor core102a. At the second time-instance, the value of each cell of the thread completion table120still equals “0”, since the first thread management circuit110ais yet to receive the first acceleration response. With reference toFIG.4C, a third time-instance after the reception of the second request and the first acceleration response by the first thread management circuit110a, but prior to the reception of the second acceleration response by the first thread management circuit110ais shown. At the third time-instance, the thread identifier table118indicates that the first and second thread identifiers are allocated to the first processor core102a. Thus, the values of the first cell and a second cell corresponding to the second column402band the first row404aare updated to “1” to indicate that the first and second thread identifiers are allocated to the first processor core102a. Further, at the third time-instance, the thread completion table120is updated to indicate the completion of the execution of the first portion112a, based on the first acceleration response. Thus, the value of a third cell, of the thread completion table120, corresponding to the fifth column406aand the third row408ais updated from ‘0’ to “1” to indicate that the execution of the first portion112athat was allocated the first thread identifier is complete. With reference toFIG.4D, a fourth time-instance after the reception of the first and second acceleration responses by the first thread management circuit110a, but prior to the communication of the thread joining response by the first thread management circuit110ais shown. At the fourth time-instance, the thread completion table120indicates the completion of the execution of the second portion112b. Thus, at the fourth time-instance, the processing circuitry114updates the thread completion table120to indicate the completion of the execution of the second portion112b, based on the received second acceleration response. 
The value of a fourth cell, of the thread completion table120, corresponding to the sixth column406band the third row408ais updated from “0” to “1” to indicate that the execution of the second portion112bthat was allocated the second thread identifier is complete. With reference toFIG.4E, a fifth time-instance after the communication of the thread joining response to the first processor core102aby the first thread management circuit110ais shown. The thread identifier table118indicates that the first and second thread identifiers are available for re-allocation. In other words, at the fifth time-instance, the processing circuitry114updates the thread identifier table118and the thread completion table120based on the communication of the thread joining response. The values of the first and second cells are updated from “1” to “0”, indicating that the first and second thread identifiers are available for re-allocation. Similarly, the values of the third and fourth cells are also updated from “1” to “0”, since the first and second thread identifiers are unallocated and available at the fifth time-instance. FIGS.5A and5B, collectively represent a flowchart500that illustrates facilitation of accelerator thread management by the first thread management circuit110afor execution of a portion of the application112in accordance with an exemplary embodiment of the present disclosure.FIGS.5A and5Bare described in conjunction withFIGS.2A and2B. With reference toFIG.5A, at step502, the processing circuitry114stores the thread identifier table118and the thread completion table120in the memory element116of the first thread management circuit110a. The thread identifier table118indicates an availability (e.g., “allocated” or “available for allocation”) of each thread identifier, of the first through fourth thread identifiers, for execution of a portion (e.g., the first portion112a). The thread completion table120is indicative of a completion status of each portion that is allocated a thread identifier of the first through fourth thread identifiers. At step504, the processing circuitry114receives, from the first processor core102a, a request (e.g., the first request) for execution of a portion (e.g., the first portion112a) of the application112by the first accelerator circuit108a. At step506, the processing circuitry114determines, based on the thread identifier table118, whether a thread identifier is available for allocation to the first processor core102afor the execution of the portion (e.g., the first portion112a). If at step506, the processing circuitry114determines that a thread identifier is not available for allocation to the first processor core102a, step508is performed. At step508, the processing circuitry114generates an exception (e.g., the first exception) indicating that no thread identifier is available for allocation. At step510, the processing circuitry114communicates the generated exception to the first processor core102aby way of the set of communication buses104and the process stops. If at step506, the processing circuitry114determines that a thread identifier (e.g., the first thread identifier) is available for allocation to the first processor core102a, step512is performed. At step512, the processing circuitry114allocates, based on the received request (e.g., the first request), the available thread identifier to the first processor core102afor the execution of the portion by the first accelerator circuit108a. 
At step514, the processing circuitry114updates the thread identifier table118to indicate that the available thread identifier is allocated to the first processor core102a. At step516, the processing circuitry114communicates, to the shared memory106, a read request (e.g., the first read request) for reading input data (e.g., the first input data) associated with the portion (as described in the foregoing description ofFIG.2A). At step518, the processing circuitry114receives a read response (e.g., the first read response) from the shared memory106(as described inFIG.2A). The processing circuitry114validates the input data indicated by the read response. At step520, the processing circuitry114communicates, based on the allocation of the thread identifier, a response (e.g., the first response) to the first processor core102a. The communicated response includes the allocated thread identifier. With reference toFIG.5B, at step522, the processing circuitry114communicates, to the first accelerator circuit108a, an acceleration request (e.g., the first acceleration request) for the execution of the portion by the first accelerator circuit108a. The processing circuitry114communicates the acceleration request based on the reading of the input data. At step524, the processing circuitry114receives an acceleration response (e.g., the first acceleration response) from the first accelerator circuit108a, following a completion of the execution of the portion by the first accelerator circuit108a. The acceleration response may be indicative of an address (e.g., the first result address) in the shared memory106, where a result (e.g., the first execution result) of the execution of the portion is stored. At step526, based on the received acceleration response, the processing circuitry114updates the thread completion table120to indicate that the execution of the portion allocated the thread identifier is complete. WhileFIGS.5A and5Bare explained with respect to the execution of the first portion112a, it will be apparent to those of skill in the art that similar steps may be performed by the processing circuitry114for the facilitating accelerator thread management for the execution of the second portion112b. FIGS.6A-6C, collectively represent a flowchart600that illustrates facilitation of accelerator thread management by the first thread management circuit110afor joining one or more executed portions with the corresponding application112in accordance with an exemplary embodiment of the present disclosure. With reference toFIG.6A, at step602, the first thread management circuit110areceives the thread joining request from the first processor core102a. The thread joining request may be received at any time-instance (e.g., the second through fourth time-instances) after the reception of the first request. At step604, the processing circuitry114determines whether the thread joining request is of the first type. If at step604, the processing circuitry114determines that the thread joining request is of the first type, step606is performed. At step606, the processing circuitry114determines whether the thread completion table120indicates that the execution of a portion that corresponds to the thread joining request is complete. For example, if the thread joining request includes the first thread identifier, the processing circuitry114determines, based on the thread completion table120, whether execution of the first portion112athat is allocated the first thread identifier is complete. 
If at step606, the processing circuitry114determines that the thread completion table120does not indicate that the execution of the corresponding portion is complete, step608is performed. For example, when the thread joining request including the first thread identifier is received by the processing circuitry114, the thread completion table120may indicate that the execution of the first portion112ais incomplete. At step608, the processing circuitry114halts communication of a thread joining response until the thread completion table120indicates that the execution of the portion corresponding to the thread joining request is complete. For example, the processing circuitry114waits until the first accelerator circuit108acommunicates the first acceleration response indicating that the execution of the first portion112aby the first accelerator circuit108ais complete. Upon receiving the first acceleration response from the first accelerator circuit108a, the processing circuitry114updates the thread completion table120to indicate that the execution of the first portion112athat is allocated the first thread identifier is complete. When the thread completion table120indicates that the execution of the portion is complete, step610is performed. If at step606, the processing circuitry114determines, based on the thread completion table120, that the execution of the portion corresponding to the thread joining request is complete, step610is performed. At step610, the processing circuitry114communicates a thread joining response to the first processor core102a. The thread joining response includes the thread identifier (e.g., the first thread identifier) and a corresponding result address (e.g., the first result address) where a corresponding execution result (e.g., the first execution result) is stored in the shared memory106. Further, the thread joining response may indicate that the execution of the portion is complete. At step612, following the communication of the thread joining response, the processing circuitry114updates the thread identifier table118and the thread completion table120to indicate that the thread identifier is available for re-allocation and the process stops. If at step604, the processing circuitry114determines that the thread joining request is not of the first type, step614is performed. At step614, the processing circuitry114determines whether the thread joining request is of the second type. If at step614, the processing circuitry114determines that the thread joining request is of the second type, step616is performed. With reference toFIG.6B, at step616, the processing circuitry114determines whether the thread completion table120indicates a completion of execution of each portion (i.e., each previous portion) that is associated with the first processor core102aand currently allocated a thread identifier. In other words, the processing circuitry114determines whether execution of each portion (e.g., the first and second portions112aand112b) offloaded by the first processor core102ais complete. If at step616, the processing circuitry114determines that the thread completion table120does not indicate the completion of each portion (i.e., each previous portion), step618is performed. At step618, the processing circuitry114halts communication of a thread joining response until the thread completion table120indicates the completion of the execution of each previous portion. 
The communication of a thread joining response is halted based on a determination by the processing circuitry114that the execution of at least one of the first and second portions112aand112bis incomplete. The processing circuitry114waits until an acceleration response is received, from the first accelerator circuit108a, for each thread identifier currently allocated to the first processor core102a. When the thread completion table120indicates the completion of the execution of each previous portion, step620is performed. If at step616, the processing circuitry114determines that the thread completion table120indicates the completion of each previous portion, step620is performed. At step620, the processing circuitry114communicates the thread joining response to the first processor core102a. The thread joining response may be indicative of each thread identifier allocated to each previous portion. Consequently, step612is performed. If at step614, the processing circuitry114determines that the thread joining request is not of the second type (i.e. if the thread joining request is of the third type), step622is performed. With reference toFIG.6C, at step622, the processing circuitry114determines whether the thread completion table120indicates a completion of execution of any previous portion. In other words, the processing circuitry114determines whether the thread completion table120indicates a completion of execution of any portion that is allocated a thread identifier, of the first through fourth thread identifiers, and corresponds to the first processor core102a. If at step622, the processing circuitry114determines that the thread completion table120indicates a completion of execution of one or more previous portions, step624is performed. At step624, the processing circuitry114communicates the thread joining response to the first processor core102a. The thread joining response includes the thread identifiers allocated to each of the one or more previous portions and a result address (e.g., the first result address) of each of the one or more previous portions. Consequently, step612is performed, where the processing circuitry114updates the thread identifier table118and the thread completion table120to indicate that the thread identifier allocated to each of the one or more previous portions is now available for re-allocation. If at step622, the processing circuitry114determines that the thread completion table120does not indicate a completion of execution of any previous portion, step626is performed. At step626, the processing circuitry114communicates, to the first processor core102a, a completion-pending response indicating that the completion of each previous portion is pending (as described in the foregoing description ofFIG.3) and the process stops. Thus, the first thread management circuit110afacilitates seamless thread management in a parallel programming or a parallel computing environment that uses techniques such as multiple instruction, multiple data (MIMD) or single program, multiple data (SPMD). The first thread management circuit110ais capable of receiving, from multiple processor cores, various portions (or threads) of various applications for execution by the corresponding accelerator circuit108a. The thread identifier table118and the thread completion table120, implemented by way of hardware in the memory element116, enable fast and easy tracking of a completion status of each portion or thread received for execution. 
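The dispatch among the three thread joining request types summarized by flowchart600can likewise be illustrated in sketch form, again extending the toy model from the earlier sketches. The labels JOIN_ID, JOIN_ALL, and JOIN_ANY and the completion-pending string are hypothetical stand-ins for the first, second, and third request types and the completion-pending response; they are not terms used by the disclosure.

import time

JOIN_ID, JOIN_ALL, JOIN_ANY = "join_id", "join_all", "join_any"
COMPLETION_PENDING = "completion-pending"

def handle_thread_joining_request(tmc, core_id, request_type, tid=None):
    # Thread identifiers currently allocated to this core (its previous portions).
    owned = [t for t, owner in tmc.id_table.items() if owner == core_id]

    if request_type == JOIN_ID:
        return join_by_id(tmc, core_id, tid)          # first type, sketched earlier

    if request_type == JOIN_ALL:
        # Second type: wait until every previous portion is complete, then report
        # all identifiers and result addresses and free the identifiers.
        while not all(tmc.completion_table[t] for t in owned):
            time.sleep(0.001)
        response = [(t, tmc.result_addresses[t]) for t in owned]
        for t in owned:
            tmc.id_table[t] = None
            tmc.completion_table[t] = False
        return response

    if request_type == JOIN_ANY:
        # Third type: report whichever previous portions are already complete;
        # if none are, return a completion-pending response instead of waiting.
        done = [t for t in owned if tmc.completion_table[t]]
        if not done:
            return COMPLETION_PENDING
        response = [(t, tmc.result_addresses[t]) for t in done]
        for t in done:
            tmc.id_table[t] = None
            tmc.completion_table[t] = False
        return response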
Dynamic allocation (and re-allocation) of thread identifiers (e.g., the first and second thread identifiers) results in optimized utilization of available thread identifiers, avoiding cumbersome software-based allocation of thread identifiers. The first thread management circuit110aleverages hardware (e.g., the processing circuitry114and the memory element116) for facilitating accelerator thread management for execution of portions (i.e., threads) and integration of the executed portions, which results in increased speeds and lower latency times for thread management in the processing system100. While various embodiments of the present disclosure have been illustrated and described, it will be clear that the present disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the present disclosure, as described in the claims. Further, unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements.
66,889
11861404
DETAILED DESCRIPTION Various embodiments are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. The present disclosure provides an improvement over the prior art by enabling system jobs or other processing entities that can be queued for processing in a compute environment to perform arbitrary actions on resources outside the compute nodes in the environment. Furthermore, the computing device performing the steps herein causes actions to be taken associated with the submitted job outside the previously constrained space. Embodiments of the disclosure relate to system jobs, and systems of creating and using system jobs, methods of creating and using system jobs, computer-readable storage media for controlling a computing device to manage system jobs and a compute environment operating according to the principles disclosed herein. As introduced above, one example of a job is a consume job that consumes resources for a particular project, such as a weather study. The present disclosure provides for a different type of job that is flexible and performs other operations and/or modifications in the compute environment. System jobs can be created and/or submitted remotely or internally within a compute environment and can spawn child operations into a resource manager but the master job resides strictly within the workload manager and/or scheduler. System jobs will preferably contain one or more steps with dependencies. Each step that is involved in processing a system job may consist of one or more tasks where each task modifies the internal and/or external environment of the compute environment or the job. Internal environment changes include, but are not limited to: creating reservations, setting variables, modifying credentials, policies, thresholds, priorities, etc. External changes include modifying resources, database settings, peer interfaces, external credentials, launching arbitrary scripts, launching applications, provisioning resources, etc. A system job can require several steps to complete its process and terminate. Throughout this process, at various stages, a state of a particular task needs to be identified. Step state is based on success or failure of task execution. Steps can possess triggers. Steps can generate and consume job level and global level variables. Step dependencies can be based on internal or external factors including, but not limited to: job, step, trigger, time, or environment based dependencies. Time dependencies can be based on absolute time, or time relative to some job internal or external event. Dependencies can include local or global variable settings. Dependencies can be based on return value of arbitrary configurable probes. Steps may optionally allocate resources. Steps may optionally be associated with a walltime. There are several differentiators associated with system jobs. They allow at least one of: (1) integration of environmental data into job flow decisions; (2) creation of arbitrary probes, continuous task retry, etc.; (3) integration of environment data into task execution; (4) dynamic resource reallocation based on results of previous tasks; (5) integration of compute tasks, tasks involving non-compute resources (i.e. 
databases, provisioning systems, data managers, etc), and changes to compute environment meta data (such as policies, thresholds, priorities, credential configuration, etc); (6) access to live global cluster and job centric information; (7) envelopment of traditional compute tasks in higher layer wrappers; (8) allowing greater environment management; (9) synchronization of tasks managing unrelated resources and resource types; (10) co-allocation of resources and requirements, scheduling, reservation; (11) guarantees of completion for loose aggregations of request types and application of tight and loose time constraints on requests (including periodic window, timeframe proximity, and deadline based constraints); and (12) optimization of loose aggregations of requests. System jobs are also referred to as workload management object event policies. The purpose of a workload management object event policy is to allow or cause actions to be associated with a workload management object such as a reservation, a compute/system job, a node, a cluster, a user, a resource manager and/or other queue-able workload units that trigger a given action either based on a time criteria or other measurable condition. An example of this can be a system/compute job having an associated event policy that launches a script 10 minutes prior to job completion. This script could send an e-mail to the user notifying them that the job is almost finished, or it can set in motion the launch of another job that has a dependency on the results of the initial job being mostly complete. Another example is that of a reservation with an associated event policy that deletes temporary files and restarts all of the reserved nodes to purge them of sensitive data and to clear memory prior to usage by another entity. An example of the method aspect of the disclosure includes the steps of receiving a request for the creation of an entity to manage or perform at least one operation within a compute environment. The entity is preferably a system job as described herein. The method further includes creating the entity, wherein the entity has arbitrary dependencies, associating the entity with a workload management object and using the entity to perform at least one operation and/or modification on the compute environment. FIG.3illustrates an example of how a system job326can be used to set up a virtual private cluster or a job-specific virtual cluster. InFIG.3, the user312submits a job326via a queue302to a resource manager106. A queue318is also shown as having jobs submitted to the scheduler104. The queue310illustrates in more detail a compute job and system jobs associated with it that will be processed on the cluster110. While the job326is submitted by the user312, the associated system jobs can be selected by the user312or via an automatic process that receives some input from the user312and also can reference policy information or service level agreement information to generate system jobs to help to monitor and manage the compute environment for the submitted job326. The job steps discussed and the functions performed that are associated with the job can be arbitrary. The concrete examples illustrate how the arbitrary capabilities can be applied. A queue310holds a system job326and a number of other job steps320,322,324,328. The first job step320involves contacting not the cluster but a provisioning manager330to set up a compute environment. 
The subsequent job step322arranges for storage management with a storage manager332; the third job step324contacts a license manager334to make sure the applications that are needed are available. The fourth step326executes the actual job in the virtual environment within the cluster110and the final step328involves staging the data out of this environment and destroying or collapsing the virtual cluster. The above example illustrates the operation of system jobs where there could be any combination of the various tasks associated with a system job. System jobs have a number of distinct differences from standard consume jobs326. A system operating under the principle described herein provides full support meaning that jobs allow arbitrary dependencies and combinations or relationships between job steps. They also allow arbitrary actions in which arbitrary things can be executed, arbitrary services can be driven, arbitrary data can be modified, arbitrary policies and configurations of the scheduler can be adjusted. They can be set to require resource allocation and can be set up so they only come live when those resources can be allocated and dedicated to the system job. They also have the ability to have arbitrary impact on the system. FIG.4shows an example of using a system job to perform a rolling maintenance. Rolling maintenance can include updating a node's software, performing rolling provisioning, patches and software upgrades as well as other functions. In a rolling maintenance, a site has a desire to either check or change current applications, operating systems or kernel versions in their compute nodes or other cluster resources. For example, assume that a compute node needs to have software reinstalled and updated. Previously, this process would be done by taking the entire node down after all the jobs assigned to that node are complete, making the system unavailable, installing by hand all the nodes with the new level of software and once checks are made turning all nodes back to the users to continue running jobs. This process is made more efficient by the application of system jobs. FIG.4illustrates a series of nodes402with the associated resource manager106, scheduler104and provisioning manager330. Using system jobs, a system administrator, rather than performing all the above-mentioned steps, simply submits a system job which performs the update automatically. For example, the system job schedules at the earliest possible time on each node an independent node update, a software update and in addition to updating the node, it also performs a sanity and/or health check. In the event of failure, the system job notifies the administrator so that he or she can take action as needed on the nodes that actually failed. This reduces the human administration time required in any update or modification. Cluster402ofFIG.4illustrates a series of jobs 1-6 running on some of the nodes 1-5 with time along the X axis. As shown, node 1 is currently running job 1 and in some time in the future, job 1 will complete and a system job 1 will operate for some time, followed by job 5. Some of these nodes are currently empty, namely node 4 which is running system job 4. When the administrator actually schedules the system job, the system preferably identifies the earliest time that the job could occur on each node. 
The system job can also be modified to identify any particular time to begin, i.e., it may be instructed to find the earliest time starting one week from today, an earliest possible time from any predetermined time or a scheduled time. For example, on node 4 the job can start immediately, which it does, and then updates that node and turns it over to run job 4 which automatically happens as soon as it completes its health and sanity check. On other nodes the system job is scheduled for immediate processing upon completion of existing workloads. The update is completed as soon as possible and the node is again automatically turned over to user access and jobs (shown as job 6) can begin or continue to run. The system jobs principle takes advantage of the fact that the system jobs are actually not running on the compute host (the cluster). When a system job requires allocation of a resource such as node 1, as soon as node 1 is available, the job launches a request to the provisioning service330. The provisioning service330then updates the node as necessary to handle the job. As soon as that step of the system job is complete, a health check trigger is launched verifying the node is operational. If the health check trigger is successful, the node is freed and the system job is canceled. If the health check is unsuccessful, an e-mail is sent out and the node is reserved indefinitely. The e-mail is sent to the administrator so he or she can correct whatever problems occurred. In a similar vein, in all cases the system job is not actually run on the compute host even though the compute host is allocated and impacted by the system job. FIG.5illustrates the method aspect of the disclosure related to the use of a system job required for maintenance. The method includes a number of steps performed by the system job. The first step includes the system job transmitting a communication to the provisioning manager to provision an allocated resource (502). Each system job will have a requirement for a specific node. For example, in the example shown inFIG.5, the system job requires that it only runs with regard to node 1 because it requires node 1. The job is not available to start until the node is allocated and dedicated to this job. Once that job runs, it uses the provisioning manager to provision a particular operating system (or for some other provisioning need) that has been requested. Next, the method includes running a script that communicates with the node to verify that the provisioning step was properly carried out and that the node is healthy (504). If step504reports success (506), then the system job sends an e-mail and terminates the job (508) thus allowing other compute jobs to immediately use the node within the cluster. If step (504) fails (506), then the system job reports the failure, creates a system reservation for the node, and terminates the job (510) leaving the node in a reserve state until an administrator can respond to the failure and correct the operating system. This example was the application of a system job to allow for rolling maintenance. Jobs associated with rolling maintenance that are scheduled are not a resource manager process. They are higher level jobs that perform arbitrary tasks outside processes handled by the resource manager. A trigger is a subset of a system job and has dependencies and can interface with web services, local processes, socket interfaces and can manage priorities. 
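As an illustration only, the rolling-maintenance system job of FIG.5 can be sketched as a short routine with injected dependencies. The provisioning manager, health check, notification, and reservation interfaces shown here are placeholders introduced for the sketch, not interfaces defined by the disclosure.

def rolling_maintenance_job(node, provisioning_manager, notify,
                            reserve_node, release_node, run_health_check):
    """Runs once the node has been allocated and dedicated to this system job."""
    # Step 502: ask the provisioning manager to provision the allocated node.
    provisioning_manager.provision(node)
    # Step 504: run a script/probe verifying the provisioning step and node health.
    healthy = run_health_check(node)
    if healthy:
        # Steps 506/508: report success and terminate, freeing the node for compute jobs.
        notify(f"maintenance on {node} succeeded")
        release_node(node)
    else:
        # Steps 506/510: report failure and keep the node reserved until an
        # administrator responds, so no user workload lands on a bad node.
        notify(f"maintenance on {node} FAILED; node reserved for inspection")
        reserve_node(node)

# Example wiring with trivial stand-ins:
class FakeProvisioner:
    def provision(self, node):
        print(f"provisioning {node}")

rolling_maintenance_job(
    node="node1",
    provisioning_manager=FakeProvisioner(),
    notify=print,
    reserve_node=lambda n: print(f"reserving {n}"),
    release_node=lambda n: print(f"releasing {n}"),
    run_health_check=lambda n: True,
)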
This allows an administrator to have the workload manager not be tied to a resource manager. The administrator can schedule a file system backup (e.g., job 1 and 2 will use the file system and job 3 will back up the file system). The scheduler typically has a locked model where the scheduler only knows about the resource manager. FIG.6shows another use of a system job, in particular for backing up a file system. In this particular situation, assume that a cluster has a number of file systems available and they are available across a parallel set of nodes. This scenario is illustrated inFIG.7in cluster702having a variety of sixteen nodes704with file system A (FSA), file system B (FSB), file system C (FSC), and file system D (FSD). There are four nodes associated with each file system. Suppose the site has a goal of backing up each file system and in order to do that, it must quiesce each individual file system so that there is no activity when it is backed up. To quiesce each file system means to terminate activity thus allowing aspects of a parallel system to come to a completed state. When a system is quiesced, previously planned transmissions and signals are all delivered and activity is allowed to stop in a natural manner. To accomplish this set of requirements, an object is created that submits a series of system jobs. The first system job requests allocation of all four nodes associated with file system A (602). This is performed using a feature requirement. Once it has all the nodes dedicated, the first step is that it issues a communication to the backup file system which backs up the file system (604). When that completes, the system job verifies the success of the process (606). In this case, regardless of whether the backup was successful, the job reports the verification information and updates the database recording that information and then terminates allowing the nodes to be used by the user (608). It is possible to modify the scenario slightly in which the file system must be quiesced. The file system can be quiesced for a period of time before everything synchronizes. Within a system job, it is possible to have the ability or step to force a duration; a step can either complete when its task is complete or when a duration has been reached. Therefore, this example could be modified so that step (602) simply allocates the resources and quiesces them for a period of 10 minutes to allow full synchronization of the parallel aspects, followed by the backup step (604), step (606) which determines the success of the process, and step (608) which updates the database with the success status. To create a system job there are a number of different models. A system job can be automatically created by submitting a standard job to a particular quality of service where the quality of service requires enablement of special services such as automatic provisioning or dedicated network bandwidth. In such a case, the user submits a standard job with a selected quality of service. For example, assume a user submits a job with a quality of service related to a dedicated bandwidth. With such a request, the scheduler would take the job request and encapsulate it in a system job. The first step in system job 1 is to identify the resources and then communicate with the network manager to dynamically partition the network so as to provide the guaranteed bandwidth. Once that is completed, the system job will proceed to allow the submitted job to process. 
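Before turning to other uses of this model, the backup-oriented system job described above, including the optional forced step duration, can be sketched as follows. The helper callables and the ten-minute quiesce value are illustrative assumptions rather than elements of the disclosure.

import time

def backup_system_job(file_system, nodes, allocate, backup, verify, record, release,
                      quiesce_duration_s=600):
    # Step 602: allocate (and thereby quiesce) every node associated with the file
    # system; hold the step for a forced duration so parallel activity can settle.
    allocate(nodes)
    time.sleep(quiesce_duration_s)
    # Step 604: trigger the backup of the quiesced file system.
    backup(file_system)
    # Steps 606/608: verify the outcome, record it, and release the nodes
    # regardless of whether the backup succeeded.
    success = verify(file_system)
    record(file_system, success)
    release(nodes)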
The same model is also used to allow data stage-in, data stage-out and have tightly coordinated resource usage after the environment is set up. The system jobs allow one to have a tight time frame control. Without system jobs, normal performance of job steps causes one step to follow the next step but does not constrain how tightly the second step must follow. A system job can tightly constrain steps such that a subsequent job will run immediately following the first job thus allowing chaining of a prerequisite job and post requisite steps. In the situation of a rolling maintenance, within the graphical user interface, a user does not even need to be aware that the system job exists. In most cases, system jobs run “under the covers” to enable outlying functionality. An administrator can indicate in a graphical interface to run a particular script on all nodes which will automatically install the application. The administrator can also indicate that the application will be updated on all nodes using a cluster provisioning manager. The rest of the steps are done automatically without the administrator's knowledge. An important attribute of system jobs is that a system job is queueable. A system job can have dependency on types of resources, dependency on other system jobs or batch compute jobs. System jobs can incorporate dynamic content sensitive triggers, which allow them to customize the environment or customize the general local scheduling environment. The steps in a system job may or may not have a duration, and they may or may not have a resource allocation or a resource co-allocation. They do have the ability to perform arbitrary execution or use arbitrary services. For example, system jobs can tap in and activate services such as a peer-to-peer service or a resource manager. Furthermore, system jobs can be reserved and can have relative or absolute priority. Embodiments within the scope of the present disclosure may also include non-transitory computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such non-transitory computer-readable media can disclose RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. A computer-readable storage medium is limited to hardware storage such as RAM, ROM, hard drives and the like and expressly excludes wireless interfaces or signals per se. Combinations of the above should also be included within the scope of the computer-readable media. Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. 
Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps. Those of skill in the art will appreciate that other embodiments of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. Although the above description may contain specific details, they should not be construed as limiting the claims in any way. Other configurations of the described embodiments of the disclosure are part of the scope of this disclosure. Accordingly, the appended claims and their legal equivalents should only define the invention, rather than any specific examples given.
22,586
11861405
DETAILED DESCRIPTION System100for hosting container based virtual machines between different computing environments is as illustrated inFIG.1. System100can include orchestrator110having an associated data repository108, user equipment (UE) devices120A-120Z, one or more process interface (PI)122and computing environments140A-140Z. Orchestrator110, UE devices120A-120Z and computing environments140A-140Z can be in communication with one another via network190. Network190can be a physical network and/or a virtual network. A physical network can be, for example, a physical telecommunications network connecting numerous computing nodes or systems such as computer servers and computer clients. A virtual network can, for example, combine numerous physical networks or parts thereof into a logical virtual network. In another example, numerous virtual networks can be defined over a single physical network. UE devices120A-120Z can be associated e.g. to enterprise agent users and end users. Enterprise agent users can be associated to enterprises that have their applications hosted within one or more of computing environments140A-140Z and end users can be users who use the services that are provided by such hosted applications. Computing environments140A-140Z of system100can be associated to respective computing environment providers. Computing environments of computing environments140A-140Z can include e.g. private computing environments and public computing environments. Computing environments of computing environments140A-140Z can include one or more private computing environment known as, e.g., an internal or enterprise cloud that resides, e.g., on an enterprise's intranet or hosted data center. Alternatively or additionally, computing environments of computing environments140A-140Z can include one or more shared public computing environment shared by multiple enterprise tenants with use of a multi-tenant cloud architecture. According to one embodiment, where computing environments140A-140Z include computing environments configured as public cloud computing environments, computing environment providers associated to respective computing environments140A-140Z, can be providers known as public cloud services providers, e.g., IBM® CLOUD® cloud services, AMAZON® WEB SERVICES® (AWS®), or MICROSOFT® AZURE® cloud services (IBM® and IBM CLOUD are registered trademarks of International Business Machines Corporation, AMAZON®, AMAZON WEB SERVICES® and AWS® are registered trademarks of Amazon.com, Inc, and MICROSOFT® and AZURE® are registered trademarks of Microsoft Corporation.) Embodiments herein can be described with reference to differentiated fictitious public computing environment (cloud) providers such as ABC-CLOUD, ACME-CLOUD, MAGIC-CLOUD, and SUPERCONTAINER-CLOUD. According to one embodiment, respective ones of computing environments140A-140Z can map to one computing environment provider and each computing environment provider can operate one computing environment of computing environments140A-140Z. According to one embodiment, each of computing environments140A-140Z can map to one computing environment provider and each computing environment provider can operate one or multiple computing environments of computing environments140A-140Z. Orchestrator110, according to one embodiment, can be external to each of computing environments140A-140Z. Orchestrator110according to one embodiment can be co-located with one or more computing environment of computing environments140A-140Z. 
Each of the different UE devices120A-120Z can be associated to a different user. Regarding UE devices120A-120Z, a UE device of UE devices120A-120Z, in one embodiment, can be a computing node device provided by a client computer, e.g. a mobile device, e.g., a smartphone or tablet, a laptop, smartwatch or personal computer that runs one or more program, e.g., including a web browser for opening and viewing web pages. Orchestrator110can be configured to have features to intelligently support placement and respawning of containers amongst compute nodes of computing environments140A-140Z. Orchestrator110can be configured to iteratively obtain metrics data from respective ones of computing environments140A-140Z and, using the iteratively obtained metrics data, can iteratively update data repository108. Data repository108of orchestrator110can store various data. In node utilization area2121, data repository108can store historical node utilization metrics data values over time. Metrics data collected by orchestrator110and stored in node utilization area2121can be associated to, e.g., a container and application ID and node identifier so that the stored data provides statistics on node performance as well as application performance over time. Node utilization metrics data can include lower layer metrics data, e.g., CPU utilization data, memory utilization data, storage utilization data, and I/O utilization data. Node utilization metrics data can alternatively or additionally include higher layer metrics data, e.g., latency utilization data, errors utilization data, traffic utilization data, and saturation utilization data. Node utilization metrics data when collected can be expressed in terms of raw utilization values and as percent of maximum utilization values. Thus, orchestrator110can extract node capacity values from utilization values. Received metrics data for storage in node utilization area2121can also include capacity data, e.g., in terms of maximum number of hardware interrupts per second and thread switches per second in the case of CPU capacity, memory space capacity in the case of memory, storage space capacity in the case of storage, and bandwidth in the case of I/O capacity. Data repository108in application service level agreement (SLA) area2122can store SLA parameter values for SLA parameters specified by an enterprise using system100. System100can be configured so that an enterprise agent user can be presented a user interface (UI) e.g. on UE device120A that allows the enterprise agent user to specify SLA parameter values for container-based applications associated to a hosting request. SLA parameters can include, e.g., CPU related parameters, memory related parameters, storage related parameters, I/O related parameters (lower layer parameters), latency related parameters, errors related parameters, traffic utilization parameters, and saturation related parameters (higher layer parameters). Data repository108in global availability registry2123can store an iteratively updated list of available compute nodes within system100available to host a container based application. Global availability registry2123can store data on predicted availability of compute nodes within system100across a plurality of availability performance metrics, e.g., CPU availability, memory availability, storage availability, and I/O availability.
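By way of illustration only, a minimal sketch of how the metrics records of node utilization area2121and the SLA parameter values of application SLA area2122might be represented is set forth below. The NodeMetricsRecord and SlaParameters structures, their field names, and the example values are illustrative assumptions and are not data layouts required by the embodiments described herein.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class NodeMetricsRecord:
    # Hypothetical record for node utilization area 2121: utilization and capacity
    # values keyed by metric name, tagged with node, container, and application IDs.
    node_id: str
    container_id: str
    application_id: str
    period: int
    utilization: Dict[str, float] = field(default_factory=dict)  # e.g. {"cpu": 0.62, "memory": 0.40}
    capacity: Dict[str, float] = field(default_factory=dict)     # e.g. {"cpu": 1.00, "memory": 1.00}

    def availability(self, metric: str) -> float:
        # Availability derived as capacity minus utilization for a given metric.
        return self.capacity[metric] - self.utilization[metric]

@dataclass
class SlaParameters:
    # Hypothetical record for application SLA area 2122: per-application SLA parameter values.
    application_id: str
    parameters: Dict[str, float] = field(default_factory=dict)   # e.g. {"latency_ms": 200.0, "cpu": 0.70}

record = NodeMetricsRecord("node-a", "container-ca", "app-a", period=7,
                           utilization={"cpu": 0.62}, capacity={"cpu": 1.00})
print(record.availability("cpu"))   # 0.38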
Predictions as to availability of a compute node can be performed using various techniques, e.g., machine learning employing regression analytics and other machine learning techniques. Global availability registry2123can store data indicating the availability of respective compute nodes of system100to host one or more new respawned container. Data repository108in global application registry area2124can store data on predicted utilization characteristics of hosted container based applications hosted within system100based on historical hosted container based applications that are placed and/or respawned within system100. Orchestrator110can predict runtime utilization characteristics of a container based application running within system100using, e.g., a combination of historical application utilization data stored within node utilization area2121and SLA parameter values stored in area2122. Global application registry2124can store data indicating characteristics of container based applications as determined from obtained metrics data. Orchestrator110can run various processes. Orchestrator110can run e.g. application programming interface (API) process111, availability process112, application process113, and placement process114. Orchestrator110running API process111can make calls for data on clusters of computing environments140A-140Z and can responsively examine return data from system clusters in structured form, e.g. in XML format or JSON format. Orchestrator110running API process111can also iteratively push updated global availability registry data and global application registry data to data repositories associated to respective clusters of computing environments140A-140Z. Global availability registry data and global application registry data can include e.g. tables, ordered lists, and/or trained predictive models that have been trained by machine learning. Orchestrator110running availability process112can predict an availability of compute nodes of computing environments140A-140Z using metrics data stored in node utilization area2121of orchestrator data repository108. API process111of orchestrator110can be provided using an open service broker API. An open service broker API can facilitate such functions as (a) obtaining data on services that a service broker provides; (b) provisioning service instances; (c) connecting container based applications to service instances; and (d) deprovisioning service instances. Orchestrator110running application process113can predict application utilization characteristics of container based applications running in system100during a next time period. Orchestrator110running application process113can use e.g. metrics data of node utilization area2121as well as application SLA parameter data of application SLA area2122of orchestrator data repository108. Orchestrator110running placement process114can determine an initial placement of a container based application responsively to a request to host a container based application received on behalf of an enterprise. Orchestrator110running placement process114can examine, e.g., metrics data of node utilization area2121, application SLA parameter data provided in application SLA area2122presented by an enterprise agent user, as well as data of global availability registry2123indicating availability data of various nodes of system100. Respective ones of computing environments140A-140Z can include one or more cluster. For example, computing environment140A can include cluster140AA.
Computing environment140B can include cluster140BA and computing environment140Z can include cluster140ZA. Respective ones of computing environments140A-140Z can include, e.g., a single cluster or multiple clusters. For example, in one embodiment, computing environment140A can include clusters1400AA to1400AZ of which cluster1400AA is depicted. Respective clusters, such as cluster1400AA,1400BA, and1400ZA, can include a respective set of compute nodes12A-12Z. Respective clusters of system100can include compute nodes12A-12Z. Respective compute nodes can be defined, e.g., by a physical computing node10according toFIG.5provided by a bare metal machine or, alternatively, a compute node herein can be provided by a hypervisor based virtual machine (VM). It will be recognized that each computing environment of computing environments140A-140Z of system100can include a plurality of clusters. For example, computing environment140A can include clusters1400AA-1400AZ. Computing environment140B can include clusters1400BA-1400BZ. Computing environment140Z can include clusters1400ZA-1400ZZ. The full set of clusters of system100can be referred to herein as clusters1400AA-1400ZZ, each having a manager node1410and a plurality of compute nodes12A-12Z. A respective compute node12A-12Z of a cluster can host one or more container. A container can be mapped to a specific application and an application mapping to a container can be regarded to be a container based application. In the described example ofFIG.1, container CA runs application A, container CB runs application B, and container CC runs application C. Containers herein provide operating system level virtualization to deliver software in a package. Containers herein can be isolated from one another and can bundle their own software libraries, configuration files, and the like. Containers herein can communicate with each other through predetermined channels. Multiple containers running on a common compute node can share the operating system (OS) of the compute node. A respective cluster of system100, in addition to including a plurality of compute nodes12A-12Z, can include a manager node1410. Manager node1410of a respective cluster can include an associated data repository1408that stores global availability registry1412and global application registry1414pushed by orchestrator110. Global availability registry1412can be iteratively pushed from data repository108by orchestrator110. Global application registry1414can be iteratively updated based on data of global application registry2124of data repository108of orchestrator110. Global availability registry1412can store data on predicted availability of compute nodes12A-12Z distributed throughout system100, i.e., compute nodes12A-12Z of computing environment140A as well as compute nodes12A-12Z of computing environments140B-140Z. Global application registry1414iteratively updated by orchestrator110can store data on predicted utilization characteristics of container based applications running in system100. Manager node1410can run various processes. Manager node1410running API process1421can e.g. receive global availability registry data from orchestrator110as well as global application registry data from orchestrator110. Manager node1410running API process1421can also call and collect metrics data from compute nodes12A-12Z associated to its respective cluster.
Manager node1410running API process1421can also send metrics data collected from compute nodes12A-12Z associated to its respective cluster to orchestrator110for storage into node utilization area2121of data repository108on orchestrator110. Manager node1410running metrics collection process1422can collect metrics data from compute nodes12A-12Z of its associated cluster, e.g. cluster1400AA, cluster1400BA in the case of computing environment140B, or cluster1400ZA in the case of computing environment140Z. Metrics data collected can include, e.g., CPU utilization data, memory utilization data, storage utilization data, I/O utilization data (lower layer metrics data). Metrics data can also include capacity metrics data, e.g., capacity data in terms of CPU, memory, storage, and I/O associated to a particular compute node (lower layer metrics data). Metrics data collected can include, e.g., latency utilization data, error utilization data, traffic utilization data, and saturation utilization data (higher layer metrics data). Manager node1410running API process1421can include manager node1410supporting more than one API. Manager node1410running API process1421can feature a first API for support of communication of lower layer metrics data, and a second API for support of communication of higher layer metrics data. The first API for support of communication of lower layer metrics data can be provided using the Prometheus® metrics collection service. Prometheus® is a registered trademark of the Linux Foundation. The second API for support of communication of higher layer metrics data can be provided using the ISTIO service mesh layer available from IBM Cloud™. IBM Cloud™ is a trademark of International Business Machines Corporation. ISTIO is a configurable, open source service-mesh layer that connects, monitors, and secures containers in a container based cluster. Manager node1410running scheduling process1423can identify and select a compute node of system100on which to respawn a container based application currently running on a cluster associated to manager node1410. Manager node1410running scheduling process1423can include manager node1410determining that a termination event has occurred terminating a container. In response to the terminating, manager node1410can perform examining data of global availability registry1412and global application registry1414to identify and select a compute node of system100that defines a respawn location for the container to be respawned. Manager node1410performing scheduling process1423can include manager node1410examining data of global availability registry1412and global application registry1414on the local cluster of manager node1410and based on the examining identifying and selecting a suitable compute node within any cluster of system100for hosting a respawned instance of the terminated container. Notably, the compute node selected for hosting the respawning can either be a compute node of the certain cluster associated to manager node1410of the current cluster or can be a compute node of a cluster external to the current cluster, such as a cluster of an external computing environment140B or an external cluster of current computing environment140A. In one scenario, manager node1410of cluster1400AA can determine that a container has terminated and, running scheduling process1423, manager node1410can identify a compute node of cluster1400BA associated with computing environment140B for hosting the respawned container.
Manager node1410running respawning process1424can perform respawning of a container on one of compute nodes12A-12Z associated to the cluster to which manager node1410is associated. According to one scenario, manager node1410running respawning process1424can respawn a container previously running in cluster1400AA. Manager node1410of cluster1400AA running respawning process1424can respawn a container previously running within a cluster external to a current cluster of a computing environment external to computing environment140A such as computing environment140B. Manager node1410of cluster1400AA running respawning process1424can respawn a container previously running within a cluster external to a current cluster of a computing within computing environment140A. Manager node1410of cluster1400AA running communication parameters process1425can assign communication parameters e.g. IP addresses to newly placed respawned containers so that end users associated to UE devices of UE devices120A-120Z can communicate with a hosted application hosted within one or more cluster. Clusters herein, according to one embodiment, can perform functions in common with a Kubernetes® container management system. For example, compute nodes12A-12Z, according to one embodiment, can have features and functions in common with a worker node of a Kubernetes® container management system. Manager node1410can have features and functions in common with a Kubernetes® master node, according to one embodiment. Kubernetes® is a trademark of the Linux Foundation. According to one embodiment, a cluster can have features in common with a Docker® Swarm™ container management system. Docker® Swarm™ is a trademark of Docker, Inc. A method for performance by orchestrator110, in communication with computing environments140A-140Z and UE devices120A-120Z, is described in connection with the flowchart ofFIG.2. At block1101, orchestrator110can be sending, using API process1421, data call data for receipt by respective clusters of computing environments140A-140Z. The respective clusters of computing environments140A-140Z can responsively send, at block401, metrics data to orchestrator110. The respective clusters of computing environments140A-140Z, by their respective manager nodes1410, can send the metrics data using respective API process1421of the respective manager nodes1410. In response to receipt of the metrics data, orchestrator110can update global availability registry2123at block1102and can further update global application registry2124at block1103. Performing blocks1102and1103and orchestrator110can initially update node utilization area2121to include most recent node utilization data for each node of each respective cluster1400AA-1400AZ of system100. Metrics data sent at block401can include, e.g., CPU utilization data, memory utilization data, storage utilization data, and I/O utilization data. Metrics data sent at block401can also include, e.g., CPU capacity data, memory capacity data, storage capacity data, and I/O capacity data for storage into node utilization area2121of data repository108. Metrics data sent at block401can include, e.g., latency utilization data, errors utilization data, traffic utilization data, and saturation utilization data (higher layer metrics data). Global availability registry2123can store data specifying predicted availability of each compute node of system100at a next time relative to a current time. 
According to one embodiment, orchestrator110can apply regression analytics machine learning for updating a global availability registry2123at block1102. For each compute node12A-12Z in system100, e.g. distributed throughout various ones of computing environments140A-140Z, orchestrator110can perform regression analytics as described in connection withFIG.3. Shown inFIG.3A-3D, orchestrator110using regression analytics machine learning to predict availability of a certain compute node within a certain computing environment140A-140Z is described in connection withFIGS.3A-3D. Referring toFIG.3A, orchestrator110can plot a plurality of CPU availability values for a certain compute node over time up to the current time t=N and can plot regression line3003with reference to the plotted data values. Orchestrator110, in the scenario described with reference toFIG.3A, can determine that regression line value3004at a next time period t=N+1 relative to the current time t=N is the value for predicted CPU availability at a next time period. Referring toFIG.3B, orchestrator110can plot a sequence of memory availability values for a certain compute node over time up to the current time t=N and can draw regression line3007based on the plotted data values. Orchestrator110can determine that regression line value3008at next time period, t=N+1 is the predicted memory availability value for the certain compute node during a next time period. Referring toFIG.3C, orchestrator110can plot a sequence of storage availability and metrics data values for a certain compute node from node utilization area2121over time and can plot regression line3011based on the plotted data values. Orchestrator110with regression line3011plotted can determine that regression line value3012is the predicted storage availability value at a next time period t=N+1 relative to the current time t=N. Referring toFIG.3D, orchestrator110can plot a sequence of data values of I/O availability for a certain compute node up to the current time t=N and can plot regression line3015based on the plotted I/O availability data values. Using the regression line, orchestrator110can determine that regression line value3016is the predicted I/O availability at the next time period t=N+1 relative to the current time t=N. Orchestrator110can apply the described regression analysis to each compute node of system100being managed by orchestrator110. Each cluster1400AA-1400AZ of system100can include e.g. one to thousands of compute nodes. Orchestrator110can be iteratively updating predicted availability values for all compute nodes of system100, e.g. each compute node12A-12Z, of each cluster1400AA-1400ZZ, of system100iteratively over time. Historical availability values can be derived from utilization values, e.g. as the difference between a capacity value and a utilization value. A CPU availability parameter value can be derived as the difference between a CPU capacity value and a CPU utilization value. A memory availability parameter value can be derived as the difference between a memory capacity value and a memory utilization value. A storage availability parameter value can be derived as the difference between a storage capacity value and a storage utilization value. An I/O availability parameter value can be derived as the difference between a I/O capacity value and an I/O utilization value. 
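By way of illustration only, the following sketch shows one way the regression analytics described above could be carried out: a regression line is fit to per-period availability values for one metric of one compute node (each availability value derived as capacity minus utilization), and the regression line value at the next time period is read off as the prediction. The use of numpy.polyfit and the function name predict_next_availability are illustrative assumptions, not requirements of the embodiments described herein.

import numpy as np

def predict_next_availability(availability_history):
    # Fit a first-order regression line to historical availability values
    # (e.g. CPU availability = CPU capacity - CPU utilization) and return
    # the regression line value at the next time period.
    t = np.arange(len(availability_history))
    slope, intercept = np.polyfit(t, availability_history, 1)
    next_t = len(availability_history)           # one period beyond the last observed period
    return slope * next_t + intercept

# Hypothetical per-period CPU availability values for a certain compute node.
cpu_availability = [0.40, 0.38, 0.36, 0.35, 0.33, 0.31]
print(predict_next_availability(cpu_availability))   # predicted CPU availability at t = N + 1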
The regression analytics processing described with reference toFIGS.3A-3Ddefines machine learning predictive models that can be iteratively trained over time with incoming training data, which training data can be provided by incoming metrics data sent at block401. InFIG.3E, there is depicted another predictive model for use in return of predicted compute node availability. Predictive model3002as depicted inFIG.3Ecan be trained with use of iteratively applied training data. Orchestrator110can train predictive model3002according to one embodiment. Each iteratively applied training dataset can include, e.g., for a given historical period of the deployment tenure of a compute node, the combination of (a) compute node ID, (b) capacity data specifying capacity metrics data for the compute node, e.g., in terms of CPU capacity, memory capacity, storage capacity, and I/O capacity, (c) applications data specifying the container based applications running on the given compute node for the given period, (d) user loading data specifying a number of onboarded end users of the compute node for the given period, and outcome data provided by (e) utilization results associated to the given historical period. The utilization results can be expressed in terms, e.g., of CPU utilization data, memory utilization data, storage utilization data, I/O utilization data, latency utilization data, errors utilization data, traffic utilization data, and/or saturation utilization data observed for the historical time period. The described training dataset can be applied for a succession of historical time periods for a deployment tenure for a compute node. Trained as described, predictive model3002is able to predict availability for a compute node for a next time period, t=N+1, based on e.g. the query data which can comprise, e.g., compute node ID, the applications running on the compute node, and user loading conditions. Where predictive model3002has been trained using both lower layer utilization data (such as CPU utilization data, memory utilization data, storage utilization data, and I/O utilization data), and higher layer utilization data (latency utilization data, errors utilization data, traffic utilization data, and/or saturation utilization data), expected higher layer utilization data for a next time period, t=N+1, can be input as part of the query data for output of data on predicted compute node availability. Predictive model3002, once trained, can be queried with use of query data. Query data can include a compute node ID, expected applications data for a next time period, expected user loading data, and expected higher layer utilization data, for a next time period. In response to the query data, predictive model3002can output a prediction specifying predicted availability for a compute node specified in the query data. Orchestrator110can iteratively train the predictive models ofFIGS.3A-3E, and can iteratively query the trained predictive models to generate lists of predicted availability of each compute node of system100across a plurality of performance characteristics, e.g. involving CPU availability, memory availability, storage availability, and I/O availability. Orchestrator110can iteratively push such updated lists to respective manager nodes1410. In addition or alternatively, orchestrator110can iteratively push most recently trained instances of trained predictive models as set forth in connection withFIGS.3A-3Eto manager nodes1410of the respective clusters1400AA-1400ZZ of system100.
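By way of illustration only, the following sketch shows one way a predictive model along the lines of predictive model3002could be trained and queried: each historical period contributes a numeric feature vector (capacity data, applications data, and user loading data) together with the observed utilization outcome, a linear model is fit by least squares, and a query for a next time period returns a predicted utilization from which a predicted availability can be derived. The feature encoding, the use of numpy.linalg.lstsq, and all example values are illustrative assumptions and are not the specific model architecture required by the embodiments described herein.

import numpy as np

def encode_features(cpu_capacity, mem_capacity, num_containers, num_end_users):
    # Hypothetical numeric encoding of one training (or query) example;
    # the leading 1.0 acts as a bias term.
    return [1.0, cpu_capacity, mem_capacity, num_containers, num_end_users]

# Hypothetical training data: one row per historical period of a compute node's
# deployment tenure, with observed CPU utilization as the outcome.
X = np.array([encode_features(8, 32, 3, 120),
              encode_features(8, 32, 5, 300),
              encode_features(16, 64, 4, 150),
              encode_features(16, 64, 8, 500)])
y = np.array([0.35, 0.70, 0.30, 0.65])

weights, _, _, _ = np.linalg.lstsq(X, y, rcond=None)   # retraining refits on newly arriving periods

# Query for a next time period: expected applications data and user loading data.
query = np.array(encode_features(8, 32, 4, 200))
predicted_utilization = float(query @ weights)
predicted_availability = 1.0 - predicted_utilization    # availability expressed as a fraction of capacity
print(predicted_utilization, predicted_availability)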
Orchestrator110performing update global applications registry block1103is described in connection with predictive models4002and4004. Referring toFIG.4A, orchestrator110can train predictive model4002to predict average utilization for respective container-based applications run by system100. Predictive model4002can be trained with training data and once trained can be configured to predict average utilization of an application across a plurality of metrics, e.g., CPU utilization data, memory utilization data, storage utilization data, I/O utilization data, latency utilization data, errors utilization data, traffic utilization data, and/or saturation utilization data. Predictive model4002can be trained with use of iteratively applied training datasets wherein each dataset is associated to one deployment period for a container previously run within system100. An iteratively applied training dataset can include the combination of (a) application ID, (b) average utilization for a first metric e.g. CPU utilization, (c) average utilization per second metric, e.g., memory utilization, (d) average utilization for third metric, e.g., storage utilization, (e) average utilization for an Nth metric, e.g., I/O utilization, and (f) average number of end users associated to the deployment period. Predictive model4002once trained is able to respond to query data to generate a predicted average utilization for a certain application across a plurality of metrics, e.g., CPU metrics, memory metrics, storage metrics, and I/O metrics. Regarding (b)-(e), the first through Nth metrics can also or alternatively include higher layer metrics data, e.g., latency utilization data, errors utilization data, traffic utilization data, and/or saturation utilization data. Query data that can be applied to predictive model4002for generation of a prediction output can include application ID in combination of number of end users. At block1103, orchestrator110can query predictive model4002which predictive model can be previously trained prior to block1103in the background in response to updates of history data. Query data applied to predictive model4002for return of a prediction can include an application identifier and an end users value. The end users value can be an aggregated average of end users across historical deployment periods for an application. The number of end user associated to the just terminated container can be used as the end users value. At block1103, orchestrator110can query instances of predictive model4002which have been instantiated for each candidate container based application available in system100. In one embodiment, the updated global application registry2124updated at block1103can include table data with updated data values specifying predicted average utilization values for various applications across varying metrics, e.g. CPU metrics, memory metrics, storage metrics, I/O available utilization metrics, latency utilization data, errors utilization data, traffic utilization data, and/or saturation utilization data. To update global applications registry at block1103, orchestrator110at block1103can additionally or alternatively be querying trained predictive model4004previously trained prior to block1103. Predictive model4004can be trained in the manner of predictive model4002except that where average utilization metrics values were used for training of predictive model4002, peak utilization metrics values were previously used for training predictive model4004. 
Predictive model4004, like predictive model4002, can be trained to provide predictions for a plurality of applications, e.g. each candidate application available in system100. Predictive model4004, once trained, is able to respond to query data to provide a prediction as to predicted peak utilization for respective applications available in system100across multiple utilization parameters. Query data for querying predictive model4004can include application ID in combination with a value specifying a count of end users. The end users count value can be, for example, an aggregated average count of end users for the respective prior historical deployment periods associated to the applied training data for training predictive model4004. System100can be configured so that whenever orchestrator110or a manager node1410generates a prediction as to an application utilization parameter value (e.g., CPU related, memory related, storage related, or I/O related) the prediction can be biased by, or replaced by, an SLA parameter value associated to the application utilization parameter. Global application registry2124, in addition to table values specifying predicted average utilization and predicted peak utilization for each respective applications available in system100across a plurality of metrics, can include most recent versions of predictive model4002and predictive model4004using most recently available training data. As noted, instances of predictive model4002and predictive model4004can be queried to predict utilization of each application running in system100. At block1104, orchestrator110can send registry push data to respective ones of clusters within computing environments140A-140Z. The pushed registry data can include updated data from global availability registry2123and global application registry2124as recently updated at blocks1102and1103. Orchestrator110can send the registry push data through an API defined by API process112and the respective each cluster of system100can receive the registry push data through respective APIs defined by API process1421of the respective clusters. Registry push data can include e.g. table data specifying predicted availability characteristics of nodes12A-12Z of each respective cluster of system100as well as table data specifying predicted application utilization characteristics of each respective candidate application that can be run by system100. In addition or alternatively, registry push data can include updated trained models that have been trained during the last training iteration as is explained further herein. Trained predictive models that can be pushed with registry push data pushed at block1104can include updated most recently trained instances of the predictive models ofFIG.3A-3D, predictive model3002, predictive model4002, and predictive model4004described in reference toFIGS.4A and4B. With trained models being pushed for use by each respective cluster1400AA-1400ZZ, the trained models can be fast acting at the respective clusters having already been subject to training prior to pushing. At block402, respective clusters1400AA-1400ZZ can store the received registry push data sent at block1104into its respective data repository1408of cluster1400AA-1400ZZ. 
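By way of illustration only, the following sketch shows one way a predicted application utilization parameter value could be biased by, or replaced by, an SLA parameter value as described above. The blend factor, field names, and example values are illustrative assumptions.

def apply_sla(predicted_value, sla_value=None, blend=0.5, replace=False):
    # Bias a predicted utilization value toward an SLA parameter value,
    # or replace it outright when replace=True.
    if sla_value is None:
        return predicted_value
    if replace:
        return sla_value
    return blend * predicted_value + (1.0 - blend) * sla_value

# Hypothetical global application registry row: predicted average CPU utilization
# for one application, biased toward an enterprise-specified SLA parameter value.
row = {"application_id": "booking-app",
       "cpu_avg": apply_sla(predicted_value=0.55, sla_value=0.70)}
print(row)   # {'application_id': 'booking-app', 'cpu_avg': 0.625}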
The pushing of trained predictive models to respective manager nodes1410of clusters1400AA-1400ZZ allows a respective manager node1410to query a previously trained predictive model without further training and assures low latency response time of the respective manager nodes1410for selecting a respawn host in response to a container termination. When instances of trained predictive models according toFIGS.3A-3E and4A-4Bhave been pushed to a local cluster, the manager node1410can query the trained predictive models for local cluster generation of table data specifying predicted availability characteristics of nodes12A-12Z and table data specifying predicted application utilization characteristics of respective candidate applications that can be run by system100. The local cluster generated table data can be stored in global availability registry1412and global application registry1414. In some embodiments, a manager node1410of a local cluster can use a combination of orchestrator generated and local cluster generated prediction table data, e.g., can use the orchestrator generated table data for coarse filtering out of candidate compute nodes, and can query local instances of trained predictive models for return of higher accuracy prediction table data. Embodiments herein recognize that query data for querying predictive models3002,4002, and4004can include data in dependence on metrics data of a just terminated local container that is more readily available on the local cluster on which the container just terminated. The table data specifying predicted application utilization characteristics of respective candidate applications that can be run by system100can be limited to applications running on the local cluster to limit the querying time for table generation. In one embodiment, predictive model querying of the trained predictive models according toFIGS.3A-3E and4A-4Bcan be performed on demand in response to container termination to further reduce query time. For predictive models4002and4004, such on-demand querying can additionally be restricted to the just terminated application to further reduce query time. At block1201, an enterprise agent user using a UE device of UE devices120A-120Z can define and send hosting request data for receipt by orchestrator110. In response to the receipt of the hosting request data, orchestrator110can perform action decision1105to determine initial placement of a container based application specified in the hosting request data sent at block1201on behalf of an enterprise by an enterprise agent user. For performing of action decision1105, orchestrator110can examine data of global availability registry2123to determine which nodes are available to perform the hosting and also data of global application registry2124. At action decision block1105, orchestrator110can examine data of global availability registry2123and global application registry2124in order to perform initial placement of a container based application specified in the hosting request data of block1201. Based on the examination of global availability registry2123and global application registry2124, orchestrator110can identify and select a compute node for hosting the application specified in hosting request data sent at block1201.
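By way of illustration only, the following sketch shows one way the combination of orchestrator generated and local cluster generated prediction data described above might be used: orchestrator generated table data coarse filters the candidate compute nodes, and locally stored trained predictors are queried only for the surviving candidates. The function names, thresholds, and stand-in predictors are illustrative assumptions.

def coarse_filter(orchestrator_table, min_cpu_avail, min_mem_avail):
    # Coarse pass over orchestrator generated table data.
    return [node_id for node_id, row in orchestrator_table.items()
            if row["cpu_avail"] >= min_cpu_avail and row["mem_avail"] >= min_mem_avail]

def refine(candidates, local_predictors, query):
    # Query locally stored trained predictors only for nodes surviving the coarse pass.
    return {node_id: local_predictors[node_id](query) for node_id in candidates}

# Hypothetical data for one local cluster; the lambdas stand in for trained models.
table = {"node-a": {"cpu_avail": 0.50, "mem_avail": 0.40},
         "node-b": {"cpu_avail": 0.10, "mem_avail": 0.60}}
predictors = {"node-a": lambda q: 0.48, "node-b": lambda q: 0.12}
print(refine(coarse_filter(table, 0.25, 0.25), predictors, {"end_users": 200}))   # {'node-a': 0.48}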
Orchestrator110, for identification and selection of a compute node for hosting the application specified in hosting request data sent at block1201, can apply criteria for hosting a respawned container (except criteria related to a just terminated container) as explained in action decision block405. Responsively at block1106, orchestrator110can send hosting command data to the computing environment having the selected compute node. The hosting command data can be received by the manager node1410of the cluster in which the selected compute node is located. The manager node1410, in response to the hosting command data, can spawn the selected container based application on the selected compute node. At block403, manager nodes1410of clusters1400AA-1400ZZ distributed in computing environments140A-140Z can perform event detection. Event detection can be triggered by a container based application terminating. Manager node1410, by running of API process1421, can be monitoring lower layer metrics data (such as CPU utilization data, memory utilization data, storage utilization data, I/O utilization data) and higher layer utilization data (latency utilization data, errors utilization data, traffic utilization data, and/or saturation utilization data) for determination of whether a termination condition is satisfied, and can send a termination command to terminate a container in response to the condition being satisfied. The condition can include, e.g., the condition that one or more of the noted metrics data items has traversed a threshold (exceeded a high threshold or fallen below a low threshold). In the case manager node1410is defined on a Kubernetes® container management system, manager node1410performing event detection can include manager node1410monitoring lower layer "keepalive" signals from an agent (known as a Kubelet® agent) running on a compute node. In response to the event detection at block403, the certain computing environment of computing environments140A-140Z can, at send block404, send metrics data to orchestrator110. The metrics data sent at block404can include metrics data of the deployment period of the terminated container based application. Orchestrator110, in response to the metrics data of the terminated container based application, can update global availability registry2123and global application registry2124to reflect the current compute node and application status of system100. Further, in response to the metrics data received in response to the sending at block404, orchestrator110can perform training of the predictive models ofFIGS.3A-3Eand predictive models4002and4004as described in connection withFIGS.4A and4Bat training block1107using the metrics data as training data. The metrics data sent at block404can include such metrics data as metrics data associated with the deployment period of a container based application just terminated. The metrics data can include metrics data defining an iteration of training data described in connection with predictive model4002and predictive model4004described in connection withFIGS.4A and4B. Another event that can be detected at block403can include termination of a compute node. System100can be configured so that, in response to termination of a compute node, training of the predictive models ofFIGS.3A-3Ecan be commenced at training block1107using metrics data of the deployment period of the terminated container as training data. Orchestrator110at block1107can initiate training.
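By way of illustration only, the following sketch shows one way the threshold-based termination condition described above could be evaluated: a termination command is warranted when one or more monitored metrics data items traverses a high or low threshold. The specific metric names and threshold values are illustrative assumptions.

# Hypothetical thresholds; a metric traversing its high or low threshold
# satisfies the termination condition.
THRESHOLDS = {
    "cpu_utilization":    {"high": 0.95},
    "memory_utilization": {"high": 0.90},
    "error_rate":         {"high": 0.05},
    "traffic_rps":        {"low": 1.0},
}

def termination_condition_satisfied(metrics):
    for name, limits in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue
        if "high" in limits and value > limits["high"]:
            return True
        if "low" in limits and value < limits["low"]:
            return True
    return False

print(termination_condition_satisfied({"cpu_utilization": 0.97, "traffic_rps": 40.0}))   # True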
That is, training can be initiated at block1107and can be performed in the background and parallel with subsequent actions performed by orchestrator110. Various available tools, libraries, and/or services can be utilized for implementation of predictive model3002, predictive model4002, and/or predictive model4004. For example, a machine learning service can provide access to libraries and executable code for support of machine learning functions. A machine learning service can provide access to a set of REST APIs that can be called from any programming language and that permit the integration of predictive analytics into any application. Enabled REST APIs can provide e.g. retrieval of metadata for a given predictive model, deployment of models and management of deployed models, online deployment, scoring, batch deployment, stream deployment, monitoring and retraining deployed models. According to one possible implementation, a machine learning service provided by IBM® WATSON® can provide access to libraries of APACHE0SPARK® and IBM® SPSS® (IBM® WATSON® and SPSS® are registered trademarks of International Business Machines Corporation and APACHE® and SPARK® are registered trademarks of the Apache Software Foundation. A machine learning service provided by IBM® WATSON® can provide access to a set of REST APIs that can be called from any programming language and that permit the integration of predictive analytics into any application. Enabled REST APIs can provide e.g. retrieval of metadata for a given predictive model, deployment of models and management of deployed models, online deployment, scoring, batch deployment, stream deployment, monitoring and retraining deployed models. Training predictive model3002, predictive model4002and/or predictive model4004can include use of e.g. support vector machines (SVM), Bayesian networks, neural networks and/or other machine learning technologies. The predictive models ofFIGS.3A-3E, and4A-4Bcan be trained with use of historical data stored in data repository108. Subsequent to event detection at block403, the cluster associated to the event detection at block405can perform an action decision. The action decision can be an action decision to respawn the container based application just terminated at block403. For performance of the action decision at block405, the certain cluster of clusters1400AA-1400ZZ can identify and select a compute node for respawning the just terminated container based application. The action decision at block405can be an action decision to identify and select a compute node for respawning the just terminated container. For performing the action decision of action decision block405, the certain manager node associated to the event detection at block403can examine data of its respective global availability registry1412and its global application registry1414to select an appropriate compute node for hosting of the just terminated container. According to one embodiment, global availability registry1412can store iteratively updated data specifying predicted availability characteristics for respective compute nodes of system100. The data can include, e.g., predicted CPU availability characteristics, predicted memory characteristics, predicted storage characteristics, and/or predicted I/O characteristics. According to one embodiment, global application registry1414can store iteratively updated data specifying predicted utilization characteristics for respective container based applications of system100. 
The data can include, e.g., predicted CPU utilization characteristics, predicted memory utilization characteristics, predicted storage utilization characteristics, predicted I/O characteristics, predicted latency characteristics, predicted error characteristics, predicted traffic characteristics, and/or predicted saturation characteristics. The iteratively updated data of global availability registry1412and global application registry1414can include, e.g., table data, lists, and/or trained predictive models. Manager node1410performing scheduling process1423can include manager node1410examining data of global availability registry1412and global application registry1414on the local cluster of manager node1410and based on the examining identifying and selecting a suitable compute node within any cluster of system100for hosting a respawned instance of the terminated container. Manager node1410examining data of global availability registry1412and global application registry1414can include, e.g., manager node1410examining orchestrator and/or local cluster generated table data, and/or querying local cluster instances of the trained predictive models ofFIGS.3A-3E, andFIGS.4A-4B. For identification and selection of a compute node, manager node1410can identify the most significant utilization parameter for the just terminated container based application. The most significant predicted utilization parameter can be the predicted utilization parameter (e.g. CPU, memory, storage, I/O) that exceeds a baseline value by the largest percentage amount. For example, manager node1410can determine that the just terminated container based application is a CPU intensive application based on a predicted CPU utilization value exceeding a baseline value by the largest percentage amount relative to other performance characteristics. For example, manager node1410can determine that the just terminated container based application is a memory intensive application based on a predicted memory utilization value exceeding a baseline value by the largest percentage amount relative to other performance characteristics. Once manager node1410determines that the terminated container based application is a CPU intensive application, manager node1410can identify a suitable node having a predicted CPU availability parameter value exceeding a threshold value. Once manager node1410provisionally matches an application to a compute node based on the most significant predicted utilization value for the terminated container based application, manager node1410can then verify the selection by confirming that, for the remaining predicted utilization parameters, the predicted utilization values (determined using predictive models4002and4004) are below the corresponding predicted availability values of the provisionally selected compute node, e.g., determined using the predictive models ofFIGS.3A-3E. Manager node1410, according to one embodiment, can score each compute node12A-12Z of system100across all clusters for suitability for hosting a just terminated container based application. Manager node1410can score each candidate compute node for compatibility across a plurality of utilization and corresponding compute node availability parameters, and can select the highest scoring compute node as the respawn hosting compute node. Manager node1410can score each candidate compute node of system100according to the scoring formula of Eq. 1 according to one embodiment. SCN=F1W1+F2W2+F3W3+F4W4  (Eq.
1) Where SCN is a suitability score assigned to each candidate compute node of system100, F1-F4 are factors and W1-W4 are weights associated to the various factors. A respective factor can be suitability of a respective candidate compute node with reference to an availability parameter in relation to a utilization parameter associated to the availability parameter for the just terminated application. According to one embodiment, factor F1 can be a suitability value based on predicted CPU availability of a candidate compute node with reference to predicted CPU utilization for the just terminated container based application, factor F2 can be a suitability value based on predicted memory availability of a candidate compute node with reference to predicted memory utilization for the just terminated container based application, factor F3 can be a suitability value based on predicted storage availability of a candidate compute node with reference to predicted storage utilization for the just terminated container based application, factor F4 can be a suitability value based on predicted I/O availability of a candidate compute node with reference to predicted I/O utilization for the just terminated container based application. Manager node1410can apply weights to the set of factors, each factor defined by a utilization parameter and associated availability parameter. Manager node1410can apply the greatest weight to the factor associated to the most significant utilization parameter for the just terminated container based application, and can apply lesser weights to factors associated to remaining utilization parameters. Once manager node1410provisionally matches a just terminated container based application to a compute node based on the most significant predicted utilization value for the terminated container based application and/or using Eq. 1, manager node1410can further verify the selection by confirming that for predicted higher layer utilization parameter values (predicted latency characteristics, predicted errors characteristics, predicted traffic characteristics, and/or predicted saturation characteristics) are within specified ranges, e.g., predetermined ranges or dynamically determined ranges. Embodiments herein recognize that the most significant predicted utilization parameter for a container based application can be differentiated in dependence on the functions associated with the application. In a vacation planning service, there can be various different container based applications, e.g., a booking container based application, a payment container based application, and a customer rewards container based application. Latency can be critical in the booking application but less critical in the customer rewards application. Accordingly, predicted I/O utilization can be greater in the booking container based application than in the customer rewards container based application. In one scenario, manager node1410of cluster1400AA at block405can return an action decision to select an appropriate compute node for hosting a respawn of the just terminated container terminated within cluster1400AA. Based on features herein, the selected location for the respawned container can be a compute node within cluster1400AA or external to cluster1400AA such as a cluster1400BA of an alternative computing environment hosting cluster1400BA such as computing environment140B, or a cluster of another computing environment such as computing environment140Z. 
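By way of illustration only, the following sketch shows one way candidate compute nodes could be scored in the manner of Eq. 1, with the greatest weight applied to the factor associated to the most significant utilization parameter of the just terminated container based application and the highest scoring node selected as the respawn host. The suitability function, the weight values, and the example numbers are illustrative assumptions.

PARAMS = ["cpu", "memory", "storage", "io"]

def suitability(predicted_availability, predicted_utilization):
    # Factor value: headroom between a node's predicted availability and the
    # application's predicted utilization for one parameter.
    return max(predicted_availability - predicted_utilization, 0.0)

def score_node(node_availability, app_utilization, most_significant_param):
    # Eq. 1 style weighted sum; the most significant utilization parameter
    # receives the greatest weight.
    weights = {p: (0.55 if p == most_significant_param else 0.15) for p in PARAMS}
    return sum(weights[p] * suitability(node_availability[p], app_utilization[p]) for p in PARAMS)

# Hypothetical CPU intensive just terminated application and two candidate compute nodes.
app = {"cpu": 0.60, "memory": 0.20, "storage": 0.10, "io": 0.15}
nodes = {"node-a": {"cpu": 0.90, "memory": 0.50, "storage": 0.40, "io": 0.30},
         "node-b": {"cpu": 0.30, "memory": 0.80, "storage": 0.60, "io": 0.50}}
best = max(nodes, key=lambda n: score_node(nodes[n], app, "cpu"))
print(best)   # node-a, because the CPU factor carries the greatest weight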
In response to the action decision to select a compute node for hosting the respawned container, manager node1410of cluster1400AA at block406can send hosting command data to orchestrator110. Orchestrator110, in response to the hosting command data sent at block406, can, at block1108, redirect and forward the hosting command data to the appropriate cluster of system100which hosts the selected compute node selected at block405. In one scenario, where the selected compute node for hosting a respawn is within cluster1400BA, manager node1410of cluster1400BA can receive the forwarded hosting command data sent at block1108. Manager node1410of cluster1400BA in the described scenario at block407can activate respawning process1424thereof to respawn the terminated container detected as terminated at event detection block403. For commanding respawning of a terminated container on a computing environment external compute node, manager node1410AA can communicate to the respawn cluster via orchestrator110as depicted in the flowchart ofFIG.2or through an alternate channel Manager node1410can send command data to orchestrator110, which can use an open service broker API to communicate with manager nodes of external computing environments. Manager node1410to manager node1410communication can be performed, e.g., through a dedicated network link or through VPN over the public Internet. For providing communication between a first manager node on a first cluster of a first computing environment and a second manager node on a second cluster of the second computing environment, cluster operators featuring YAML files can be configured. A cluster operator of manager node1410can include YAML files configured to facilitate communication with cluster external compute nodes including computing environment external compute nodes. According to one embodiment, a cluster operator can be termed a proxy operator. Such a proxy operator can include a YAML file configuring the proxy operator to: (a) provide an API for the external computing environment where external compute nodes are placed; (b) provide authentication keys to authenticate a local manager node with a manager node of the external computing environment where the external compute node is located; and (c) control external traffic through Egress and Ingress proxies. Egress and Ingress proxies can control which traffic can be allowed out and in, respectively. System100can have features to facilitate synchronized communication between a local manager node1410and an external computing environment manager node1410. To facilitate synchronized communication between manager nodes of different computing environments, a distributed key value store can be provided, which can be defined by data repository1408physically provided by storage systems34(FIG.5) associated to respective manager nodes1410of different computing environments. The distributed data store defined by respective data repositories1408of respective clusters1400AA-1400AZ can be configured in accordance with ETCD. ETCD is an open source distributed key-value store used to hold and manage critical information for maintaining operation of a distributed system. A key value store can be used to manage, e.g., container configuration data, state data, and metadata to support container processes such as termination, respawning, and messaging. 
For facilitation of synchronization, orchestrator110at data call send block1101can iteratively call for all manager nodes1410to report their current key value store to orchestrator110, which responsively updates a master list and pushes an updated master list to all manager nodes1410for storage in respective data repositories1408so that every manager node1410in system100has access to a replicated updated copy of the distributed key value data store. The key value data store can configure manager nodes1410between different computing environments for synchronized scheduling, scaling, keepalive signaling, and respawning. For implementation of the distributed key value data store, every manager node1410in system100can have access to the full data store. System100can be configured to have no single point of failure, and system100can be configured so that every data ‘read’ returns the latest data ‘write’ across all clusters1400AA-1400ZZ of system100. System100can be configured to support automatic Transport Layer Security (TLS) and optional secure socket layer (SSL) client certificate authentication. System100can use a Raft consensus algorithm to ensure data store consistency across all nodes in a cluster and between different clusters. At action decision block405, the identification and selection by a manager node1410of a suitable compute node for respawning a terminated container based application can be conditioned on traffic data of the terminated container that indicates (a) a level of messaging between the terminated container and external containers of the local cluster (cluster1400AA if the terminated container is in cluster1400AA), and (b) a level of messaging between the terminated container and containers of external clusters (clusters1400BA-1400ZZ if the terminated container is in cluster1400AA). According to one embodiment, manager node1410of each respective local cluster can run a traffic monitoring utility facilitating collection of monitoring data specifying instances of messaging between running containers of a current cluster. Monitoring data can be provided using an ISTIO service mesh layer available from IBM Cloud™. IBM Cloud™ is a trademark of International Business Machines Corporation. ISTIO is a configurable, open source service-mesh layer that connects, monitors, and secures containers in a container based cluster. Manager node1410, based on the collected traffic data, can assign traffic scores to a just terminated container and classifications to the just terminated container in dependence on the traffic scores. Traffic scores can be assigned based on e.g. a count of messages and/or a count of bits transferred. According to one embodiment, manager node1410can classify a just terminated container using the decision data structure of Table A.
TABLE A
Row   Local cluster traffic score, L   External computing environment traffic score, E   Terminated container classification
1     L > T1                           E > T2                                             Neutral
2     L <= T1                          E > T2                                             Global Communication Container (GCC)
3     L > T1                           E <= T2                                            Local Communication Container (LCC)
4     L <= T1                          E <= T2                                            Neutral
Manager node1410can use the decision data structure of Table A to identify and select a compute node for hosting a respawned container. Referring to Table A, manager node1410can classify a terminated container as a Local Communication Container (LCC) where a local cluster traffic score, L, for the terminated container exceeds a first threshold, and the external computing environment traffic score, E, for the terminated container does not exceed a second threshold.
Manager node1410can classify a terminated container as a Global Communication Container (GCC) where an external computing environment traffic score, E, for the terminated container exceeds a second threshold, and the local cluster traffic score, L, for the terminated container does not exceed a first threshold. Manager node1410can be configured to restrict manager node1410from selecting a compute node of an external computing environment cluster as the respawn host compute node where the classification of the terminated container is Local Communication Container (LCC). In the case manager node1410scores candidate compute nodes, and selects a highest scoring compute node as the respawn host, manager node1410can adjust a score for an external computing environment candidate compute node where the classification for the just terminated container is Global Communication Container (GCC). The adjusting can include biasing the score upward or removing a normally present negative bias. Manager node1410can permit external cluster respawning and external computing environment respawning where the classification of the just terminated container is Global Communication Container (GCC) or Neutral. At block1202, end user devices of UE devices120A-120Z can be sending service requests to hosted containers of system100, which can responsively send service response data at block408, which response data can be executed by the various end user UE devices at block1203. At block409, computing environments140A-140Z can return to a stage preceding block401to repeat the loop of blocks401-409. At block1109, orchestrator110can return to a stage prior to block1101to iteratively perform the loop of blocks1101-1109, which can be iteratively performed until the deployment period ends. At block1204, UE devices120A-120Z can return to a stage prior to block1201to iteratively perform the loop of blocks1201-1204, which can be iteratively performed throughout a deployment period. Embodiments herein recognize that in a containers cluster, when containers are created on compute nodes, a replication controller and scheduler service on a manager node through an API can create multiple containers across the compute nodes within the cluster to ensure that the application inside the container is available, up and running. Embodiments herein recognize that in a hybrid cloud system, enterprises can have multiple container clusters running across, e.g., on-premises, off-premises, private and public clouds which may be of the same provider or different providers, technologies, and/or platforms. Embodiments herein recognize that in such a multi-container multi-computing environment system, developers and enterprise agent users do not have the choice and flexibility to decide on which compute node of a multiple computing environment system their application should be provisioned, which application can be, e.g., CPU-intensive, memory-intensive, storage-intensive, or I/O intensive. Embodiments herein recognize that with existing systems, placement of applications by an administrator user can become a tedious task in multi-computing environment systems and can delay the code release cycle, create performance bottlenecks in a production environment, and, at worst, can even fail the application and thus impact business adversely. Embodiments herein recognize that administrator users are limited in their choice of a resourceful target environment for their applications.
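By way of illustration only, the following sketch shows one way the Table A classification and the resulting restriction or upward bias of candidate compute node scores could be applied. The threshold values T1 and T2, the bias amount, and the function names are illustrative assumptions.

T1 = 100.0   # hypothetical local cluster traffic score threshold
T2 = 100.0   # hypothetical external computing environment traffic score threshold

def classify_terminated_container(local_score, external_score):
    # Table A: rows 2 and 3 yield GCC and LCC respectively; rows 1 and 4 yield Neutral.
    if local_score <= T1 and external_score > T2:
        return "GCC"   # Global Communication Container
    if local_score > T1 and external_score <= T2:
        return "LCC"   # Local Communication Container
    return "Neutral"

def adjust_candidate_score(base_score, candidate_is_external, classification):
    # Restrict external hosts for an LCC; bias external hosts upward for a GCC.
    if candidate_is_external and classification == "LCC":
        return None                 # candidate not selectable as the respawn host
    if candidate_is_external and classification == "GCC":
        return base_score + 0.10
    return base_score

classification = classify_terminated_container(local_score=40.0, external_score=250.0)
print(classification, adjust_candidate_score(0.25, True, classification))   # GCC 0.35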
Embodiments herein recognize that in a container cluster, a manager node can control all activities on compute nodes of a cluster, can maintain a state within a container environment, and can provide an API that tooling and systems interact with. A scheduler can be responsible for determining pod placement by taking current memory, CPU, and other environment utilization into account when placing pods on nodes and, for application high availability, spreading pod replicas between nodes. A replication controller can ensure that a specified number of pod replicas remain running at all times and, if pods exit or are deleted, a replication controller can instantiate new replicas. There is set forth herein an orchestrator110which can be termed a farm controller engine (FCE). Orchestrator110can group master and worker nodes for CPU, Memory, Storage (and/or any other resource) and can store various data in a data store defined in data repository108. Data that can be stored can include, e.g., configuration data and metadata. Orchestrator110can run as a separate process with an exposed API to communicate with APIs of multiple manager nodes associated with different clusters1400AA-1400ZZ. Orchestrator110can allow users and developers to enter choices during provisioning of an application, e.g., on whether they prefer a CPU intensive compute node or a memory intensive compute node. Orchestrator110can send parameters/variables to a corresponding manager node1410through an API of the orchestrator110. Specific scheduler and replication controllers of a manager node1410can then communicate with appropriate compute nodes in a cluster. Orchestrator110can record preferences overwritten by the users through a 'keepalive' mechanism and can iteratively update orchestrator110and corresponding schedulers and replication controller(s) of respective manager nodes1410at regular intervals. Orchestrator110can enable a unified system for cross-container placements and thus can create container based application farms based on the specific needs of developers and their applications/codes in terms of resources like CPU, memory, storage, I/O, and the like. With configuration of orchestrator110, orchestrator110can run as a process on a hypervisor based virtual machine (VM) or a container based VM, or a physical computing node10in any computing environment. There can be provided a communication layer between the orchestrator API and manager node APIs. An orchestrator data store defined by data repository108can contain information about each cluster of clusters1400AA-1400ZZ including its manager node1410, compute nodes12A-12Z, APIs, and the like. Orchestrator110can provide a user interface (UI) for authenticating/authorizing developers and users for the selection of a target container computing environment. Orchestrator110can create multiple application farms defined by compute nodes adapted to support a specified utilization in a computing environment. Orchestrator110can logically group compute nodes according to preferences with respect to resources such as CPU, Memory, Storage, I/O, and the like. An orchestrator110and manager nodes1410can use ping-pong keepalive mechanisms to provide developers/users a choice to select a specific container hosting computing environment. Orchestrator110can allow overwriting of previous selections/choices and can push the new selections/choices to corresponding manager nodes through respective APIs of the orchestrator and manager nodes1410.
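The logical grouping of compute nodes into application farms according to resource preferences, as described above, can be pictured with the following simplified sketch. The dictionary-based node records, the resource categories, and the function name build_application_farms are hypothetical and are used only to illustrate the kind of grouping orchestrator110can maintain in its data store; they are not an API of the embodiments.

# Hypothetical sketch of logically grouping compute nodes into application
# farms keyed by the resource each node is best suited to provide.

def build_application_farms(compute_nodes):
    farms = {"cpu": [], "memory": [], "storage": [], "io": []}
    for node in compute_nodes:
        farms[node["dominant_resource"]].append(node["name"])
    return farms

nodes = [
    {"name": "cluster1400AA/node12A", "dominant_resource": "cpu"},
    {"name": "cluster1400BA/node12B", "dominant_resource": "memory"},
    {"name": "cluster1400ZZ/node12C", "dominant_resource": "cpu"},
]

# A developer preferring a CPU intensive compute node would be provisioned
# onto one of the nodes listed under farms["cpu"].
farms = build_application_farms(nodes)
print(farms["cpu"])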
Certain embodiments herein may offer various technical computing advantages to address problems arising in the realm of computer networks. Embodiments herein can feature an orchestrator in communication with manager nodes of multiple clusters. The multiple clusters can be disposed in multiple computing environments. The orchestrator can gather metrics data for the various clusters and can orchestrate respawning of terminated containers. An orchestrator can iteratively push a global availability registry and a global application registry to a manager node of respective clusters in a multi-cluster multiple computing environment system. The global availability registry and the global application registry can include predictive models that are already trained at a time of arrival at a manager node. A manager node can therefore query a trained predictive model for reduced latency in rendering action decisions. Action decisions can include action decisions to identify a respawn host for hosting a terminated container to be respawned. A manager node can respond to a termination of a container by selecting a compute node for hosting a respawned container, and respawning the terminated container on the selected respawn compute node. The respawned container can be hosted within a computing environment external to a computing environment in which the container was terminated. A manager node can classify a terminated container in dependence on traffic of the container during a deployment period of the container. A manager node can select a compute node for hosting a respawn of the terminated container in dependence on the classification. Certain embodiments may be implemented by use of a cloud platform/data center in various types including Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Database-as-a-Service (DBaaS), and combinations thereof, based on types of subscription. FIGS.5-7depict various aspects of computing, including a computer system and cloud computing, in accordance with one or more aspects set forth herein. It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.
Service Models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment Models are as follows:
Private cloud: the cloud infrastructure is operated solely for an organization.
It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.5, a schematic of an example of a computing node is shown. Computing node10is only one example of a computing node suitable for use as a cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, computing node10is capable of being implemented and/or performing any of the functionality set forth hereinabove. Computing node10can be implemented as a cloud computing node in a cloud computing environment, or can be implemented as a computing node in a computing environment other than a cloud computing environment. In computing node10there is a computer system12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system12include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system12may be described in the general context of computer system-executable instructions, such as program processes, being executed by a computer system. Generally, program processes may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system12may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program processes may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.5, computer system12in computing node10is shown in the form of a computing device. The components of computer system12may include, but are not limited to, one or more processor16, a system memory28, and a bus18that couples various system components including system memory28to processor16.
In one embodiment, computing node10is a computing node of a non-cloud computing environment. In one embodiment, computing node10is a computing node of a cloud computing environment as set forth herein in connection withFIGS.6-7. Bus18represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system12typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory28can include computer system readable media in the form of volatile memory, such as random access memory (RAM)30and/or cache memory32. Computer system12may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system34can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus18by one or more data media interfaces. As will be further depicted and described below, memory28may include at least one program product having a set (e.g., at least one) of program processes that are configured to carry out the functions of embodiments of the invention. One or more program40, having a set (at least one) of program processes42, may be stored in memory28by way of example, and not limitation, as well as an operating system, one or more application programs, other program processes, and program data. One or more program40including program processes42can generally carry out the functions set forth herein. In one embodiment, orchestrator110can include one or more computing node10and can include one or more program40for performing functions described with reference to orchestrator110as set forth in the flowchart ofFIG.2. In one embodiment, respective manager nodes1410can be defined by a computing node10and can respectively include one or more program40for performing functions described with reference to respective manager nodes1410as set forth in the flowchart ofFIG.2. In one embodiment, compute nodes12A-12Z can each be defined by a computing node10and can include one or more program40for performing functions described with reference to a compute node12A-12Z as set forth in the flowchart ofFIG.2. In one embodiment, one or more client computer device120A-120Z can include one or more computing node10and can include one or more program40for performing functions described with reference to one or more client computer device120A-120Z as set forth in the flowchart ofFIG.2.
In one embodiment, the computing node based systems and devices depicted inFIG.1can include one or more program for performing functions described with reference to such computing node based systems and devices. Computer system12may also communicate with one or more external devices14such as a keyboard, a pointing device, a display24, etc.; one or more devices that enable a user to interact with computer system12; and/or any devices (e.g., network card, modem, etc.) that enable computer system12to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces22. Still yet, computer system12can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter20. As depicted, network adapter20communicates with the other components of computer system12via bus18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. In addition to or in place of having external devices14and display24, which can be configured to provide user interface functionality, computing node10in one embodiment can include display25connected to bus18. In one embodiment, display25can be configured as a touch screen display and can be configured to provide user interface functionality, e.g., can facilitate virtual keyboard functionality and input of total data. Computer system12in one embodiment can also include one or more sensor device27connected to bus18. One or more sensor device27can alternatively be connected through I/O interface(s)22. One or more sensor device27can include a Global Positioning Sensor (GPS) device in one embodiment and can be configured to provide a location of computing node10. In one embodiment, one or more sensor device27can alternatively or in addition include, e.g., one or more of a camera, a gyroscope, a temperature sensor, a humidity sensor, a pulse sensor, a blood pressure (bp) sensor or an audio input device. Computer system12can include one or more network adapter20. InFIG.6computing node10is described as being implemented in a cloud computing environment and accordingly is referred to as a cloud computing node in the context ofFIG.6. Referring now toFIG.6, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50comprises one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device.
It is understood that the types of computing devices54A-N shown inFIG.6are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.7, a set of functional abstraction layers provided by cloud computing environment50(FIG.6) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.7are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and processing components96for container orchestration as set forth herein. The processing components96can be implemented with use of one or more program40described inFIG.5. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”), and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a method or device that “comprises,” “has,” “includes,” or “contains” one or more steps or elements possesses those one or more steps or elements, but is not limited to possessing only those one or more steps or elements. Likewise, a step of a method or an element of a device that “comprises,” “has,” “includes,” or “contains” one or more features possesses those one or more features, but is not limited to possessing only those one or more features. Forms of the term “based on” herein encompass relationships where an element is partially based on as well as relationships where an element is entirely based on. Methods, products and systems described as having a certain number of elements can be practiced with less than or greater than the certain number of elements. Furthermore, a device or structure that is configured in a certain way is configured in at least that way, but may also be configured in ways that are not listed. It is contemplated that numerical values, as well as other values that are recited herein are modified by the term “about”, whether expressly stated or inherently derived by the discussion of the present disclosure. As used herein, the term “about” defines the numerical boundaries of the modified values so as to include, but not be limited to, tolerances and values up to, and including the numerical value so modified. That is, numerical values can include the actual value that is expressly stated, as well as other values that are, or can be, the decimal, fractional, or other multiple of the actual value indicated, and/or described in the disclosure. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below, if any, are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description set forth herein has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of one or more aspects set forth herein and the practical application, and to enable others of ordinary skill in the art to understand one or more aspects as described herein for various embodiments with various modifications as are suited to the particular use contemplated.
11861406
DETAILED DESCRIPTION OF THE DRAWINGS While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features. Referring now toFIG.1, a computing device100for secure I/O with an accelerator device includes a processor120and an accelerator device136, such as a field-programmable gate array (FPGA). In use, as described further below, a trusted execution environment (TEE) established by the processor120securely communicates data with the accelerator136. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions. For example, the TEE may perform an MMIO write transaction that includes encrypted data, and the accelerator136decrypts the data and performs the write. 
As another example, the TEE may perform an MMIO read request transaction, and the accelerator136may read the requested data, encrypt the data, and perform an MMIO read response transaction that includes the encrypted data. As yet another example, the TEE may configure the accelerator136to perform a DMA operation, and the accelerator136performs a memory transfer, performs a cryptographic operation (i.e., encryption or decryption), and forwards the result. As described further below, the TEE and the accelerator136generate authentication tags (ATs) for the transferred data and may use those ATs to validate the transactions. The computing device100may thus keep untrusted software of the computing device100, such as the operating system or virtual machine monitor, outside of the trusted code base (TCB) of the TEE and the accelerator136. Thus, the computing device100may secure data exchanged or otherwise processed by a TEE and an accelerator136from an owner of the computing device100(e.g., a cloud service provider) or other tenants of the computing device100. Accordingly, the computing device100may improve security and performance for multi-tenant environments by allowing secure use of accelerator devices. The computing device100may be embodied as any type of device capable of performing the functions described herein. For example, the computing device100may be embodied as, without limitation, a computer, a laptop computer, a tablet computer, a notebook computer, a mobile computing device, a smartphone, a wearable computing device, a multiprocessor system, a server, a workstation, and/or a consumer electronic device. As shown inFIG.1, the illustrative computing device100includes a processor120, an I/O subsystem124, a memory130, and a data storage device132. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory130, or portions thereof, may be incorporated in the processor120in some embodiments. The processor120may be embodied as any type of processor capable of performing the functions described herein. For example, the processor120may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. As shown, the processor120illustratively includes secure enclave support122, which allows the processor120to establish a trusted execution environment known as a secure enclave, in which executing code may be measured, verified, and/or otherwise determined to be authentic. Additionally, code and data included in the secure enclave may be encrypted or otherwise protected from being accessed by code executing outside of the secure enclave. For example, code and data included in the secure enclave may be protected by hardware protection mechanisms of the processor120while being executed or while being stored in certain protected cache memory of the processor120. The code and data included in the secure enclave may be encrypted when stored in a shared cache or the main memory130. The secure enclave support122may be embodied as a set of processor instruction extensions that allows the processor120to establish one or more secure enclaves in the memory130. For example, the secure enclave support122may be embodied as Intel® Software Guard Extensions (SGX) technology. 
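Referring back to the MMIO transactions and authentication tags (ATs) introduced above, the following is a minimal sketch of how an AT exchange for an MMIO write might look in software. The description does not fix a particular cipher, so the sketch assumes AES-GCM (via the Python cryptography package) purely as a stand-in authenticated-encryption scheme; the shared key, the nonce handling, and the tee_mmio_write/accelerator_mmio_write helper names are simplifications introduced here and are not part of the embodiments.

# Minimal sketch, assuming AES-GCM as the authenticated cipher and a key
# already shared between the TEE and the accelerator. The target address is
# bound to the transaction as associated data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

def tee_mmio_write(address, data):
    # TEE side: encrypt the data item and split off the authentication tag,
    # which would be written to the accelerator's AT register before the
    # MMIO write transaction carrying the ciphertext is dispatched.
    nonce = os.urandom(12)
    ct_and_tag = aesgcm.encrypt(nonce, data, address.to_bytes(8, "little"))
    return nonce, ct_and_tag[:-16], ct_and_tag[-16:]

def accelerator_mmio_write(address, nonce, ciphertext, at_register):
    # Accelerator side: authenticated decryption recomputes and checks the AT;
    # a mismatch raises InvalidTag, modeling the transaction being dropped.
    return aesgcm.decrypt(nonce, ciphertext + at_register, address.to_bytes(8, "little"))

nonce, ciphertext, at = tee_mmio_write(0x1000, b"secret payload")
print(accelerator_mmio_write(0x1000, nonce, ciphertext, at))  # b'secret payload'

In the embodiments described herein the comparison is performed in hardware by the accelerator rather than by a software decrypt call, but the pass/fail behavior is analogous: only a transaction whose AT matches is committed.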
The memory130may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory130may store various data and software used during operation of the computing device100such as operating systems, applications, programs, libraries, and drivers. As shown, the memory130may be communicatively coupled to the processor120via the I/O subsystem124, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor120, the memory130, and other components of the computing device100. For example, the I/O subsystem124may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, sensor hubs, host controllers, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the memory130may be directly coupled to the processor120, for example via an integrated memory controller hub. Additionally, in some embodiments, the I/O subsystem124may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor120, the memory130, the accelerator device136, and/or other components of the computing device100, on a single integrated circuit chip. Additionally, or alternatively, in some embodiments the processor120may include an integrated memory controller and a system agent, which may be embodied as a logic block in which data traffic from processor cores and I/O devices converges before being sent to the memory130. As shown, the I/O subsystem124includes a direct memory access (DMA) engine126and a memory-mapped I/O (MMIO) engine128. The processor120, including secure enclaves established with the secure enclave support122, may communicate with the accelerator device136with one or more DMA transactions using the DMA engine126and/or with one or more MMIO transactions using the MMIO engine128. The computing device100may include multiple DMA engines126and/or MMIO engines128for handling DMA and MMIO read/write transactions based on bandwidth between the processor120and the accelerator136. Although illustrated as being included in the I/O subsystem124, it should be understood that in some embodiments the DMA engine126and/or the MMIO engine128may be included in other components of the computing device100(e.g., the processor120, memory controller, or system agent), or in some embodiments may be embodied as separate components. The data storage device132may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, non-volatile flash memory, or other data storage devices. The computing device100may also include a communications subsystem134, which may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device100and other remote devices over a computer network (not shown). The communications subsystem134may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, 3G, 4G LTE, etc.) to effect such communication. 
The accelerator device136may be embodied as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a coprocessor, a graphics processing unit (GPU), or other digital logic device capable of performing accelerated functions (e.g., accelerated application functions, accelerated network functions, or other accelerated functions). Illustratively, the accelerator device136is an FPGA, which may be embodied as an integrated circuit including programmable digital logic resources that may be configured after manufacture. The FPGA may include, for example, a configurable array of logic blocks in communication over a configurable data interchange. The accelerator device136may be coupled to the processor120via a high-speed connection interface such as a peripheral bus (e.g., a PCI Express bus) or an inter-processor interconnect (e.g., an in-die interconnect (IDI) or QuickPath Interconnect (QPI)), or via any other appropriate interconnect. The accelerator device136may receive data and/or commands for processing from the processor120and return results data to the processor120via DMA, MMIO, or other data transfer transactions. As shown, the computing device100may further include one or more peripheral devices138. The peripheral devices138may include any number of additional input/output devices, interface devices, hardware accelerators, and/or other peripheral devices. For example, in some embodiments, the peripheral devices138may include a touch screen, graphics circuitry, a graphical processing unit (GPU) and/or processor graphics, an audio device, a microphone, a camera, a keyboard, a mouse, a network interface, and/or other input/output devices, interface devices, and/or peripheral devices. Referring now toFIG.2, an illustrative embodiment of a field-programmable gate array (FPGA)200is shown. As shown, the FPGA200is one potential embodiment of an accelerator device136. The illustrative FPGA200includes a secure MMIO engine202, a secure DMA engine204, one or more accelerator functional units (AFUs)206, and memory/registers208. As described further below, the secure MMIO engine202and the secure DMA engine204perform in-line authenticated cryptographic operations on data transferred between the processor120(e.g., a secure enclave established by the processor) and the FPGA200(e.g., one or more AFUs206). In some embodiments, the secure MMIO engine202and/or the secure DMA engine204may intercept, filter, or otherwise process data traffic on one or more cache-coherent interconnects, internal buses, or other interconnects of the FPGA200. Each AFU206may be embodied as logic resources of the FPGA200that are configured to perform an acceleration task. Each AFU206may be associated with an application executed by the computing device100in a secure enclave or other trusted execution environment. Each AFU206may be configured or otherwise supplied by a tenant or other user of the computing device100. For example, each AFU206may correspond to a bitstream image programmed to the FPGA200. As described further below, data processed by each AFU206, including data exchanged with the trusted execution environment, may be cryptographically protected from untrusted components of the computing device100(e.g., protected from software outside of the trusted code base of the tenant enclave). Each AFU206may access or otherwise process data stored in the memory/registers208, which may be embodied as internal registers, cache, SRAM, storage, or other memory of the FPGA200.
In some embodiments, the memory208may also include external DRAM or other dedicated memory coupled to the FPGA200. Referring now toFIG.3, in an illustrative embodiment, the computing device100establishes an environment300during operation. The illustrative environment300includes a trusted execution environment (TEE)302and the accelerator136. The TEE302further includes a host cryptographic engine304, a transaction dispatcher306, a host validator308, and a direct memory access (DMA) manager310. The accelerator136includes an accelerator cryptographic engine312, an accelerator validator314, a memory mapper316, an authentication tag (AT) controller318, and a DMA engine320. The various components of the environment300may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment300may be embodied as circuitry or a collection of electrical devices (e.g., host cryptographic engine circuitry304, transaction dispatcher circuitry306, host validator circuitry308, DMA manager circuitry310, accelerator cryptographic engine circuitry312, accelerator validator circuitry314, memory mapper circuitry316, AT controller circuitry318, and/or DMA engine circuitry320). It should be appreciated that, in such embodiments, one or more of the host cryptographic engine circuitry304, the transaction dispatcher circuitry306, the host validator circuitry308, the DMA manager circuitry310, the accelerator cryptographic engine circuitry312, the accelerator validator circuitry314, the memory mapper circuitry316, the AT controller circuitry318, and/or the DMA engine circuitry320may form a portion of the processor120, the I/O subsystem124, the accelerator136, and/or other components of the computing device100. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. The TEE302may be embodied as a trusted execution environment of the computing device100that is authenticated and protected from unauthorized access using hardware support of the computing device100, such as the secure enclave support122of the processor120. Illustratively, the TEE302may be embodied as one or more secure enclaves established using Intel SGX technology. The TEE302may also include or otherwise interface with one or more drivers, libraries, or other components of the computing device100to interface with the accelerator136. The host cryptographic engine304is configured to generate an authentication tag (AT) based on a memory-mapped I/O (MMIO) transaction and to write that AT to an AT register of the accelerator136. For an MMIO write request, the host cryptographic engine304is further configured to encrypt a data item to generate an encrypted data item, and the AT is generated in response to encrypting the data item. For an MMIO read request, the AT is generated based on an address associated with the MMIO read request. The transaction dispatcher306is configured to dispatch the memory-mapped I/O transaction (e.g., an MMIO write request or an MMIO read request) to the accelerator136after writing the calculated AT to the AT register. An MMIO write request may be dispatched with the encrypted data item. The host validator308may be configured to verify that an MMIO write request succeeded in response to dispatching the MMIO write request.
Verifying that the MMIO write request succeeded may include securely reading a status register of the accelerator136, securely reading a value at the address of the MMIO write from the accelerator136, or reading an AT register of the accelerator136that returns an AT value calculated by the accelerator136, as described below. For MMIO read requests, the host validator308may be further configured to generate an AT based on an encrypted data item included in a MMIO read response dispatched from the accelerator136; read a reported AT from a register of the accelerator136; and determine whether the AT generated by the TEE302matches the AT reported by the accelerator136. The host validator308may be further configured to indicate an error if those ATs do not match, which provides assurance that data was not modified on the way from the TEE302to the accelerator136. The accelerator cryptographic engine312is configured to perform a cryptographic operation associated with the MMIO transaction and to generate an AT based on the MMIO transaction in response to the MMIO transaction being dispatched. For an MMIO write request, the cryptographic operation includes decrypting an encrypted data item received from the TEE302to generate a data item, and the AT is generated based on the encrypted data item. For an MMIO read request, the cryptographic operation includes encrypting a data item from a memory of the accelerator136to generate an encrypted data item, and the AT is generated based on that encrypted data item. The accelerator validator314is configured to determine whether the AT written by the TEE302matches the AT determined by the accelerator136. The accelerator validator314is further configured to drop the MMIO transaction if those ATs do not match. For MMIO read requests, the accelerator validator314may be configured to generate a poisoned AT in response to dropping the MMIO read request, and may be further configured to dispatch a MMIO read response with a poisoned data item to the TEE302in response to dropping the MMIO read request. The memory mapper316is configured to commit the MMIO transaction in response to determining that the AT written by the TEE302matches the AT generated by the accelerator136. For an MMIO write request, committing the transaction may include storing the data item in a memory of the accelerator136. The memory mapper316may be further configured to set a status register to indicate success in response to storing the data item. For an MMIO read request, committing the transaction may include reading the data item at the address in the memory of the accelerator136and dispatching an MMIO read response with the encrypted data item to the TEE302. The DMA manager310is configured to securely write an initialization command to the accelerator136to initialize a secure DMA transfer. The DMA manager310is further configured to securely configure a descriptor indicative of a host memory buffer, an accelerator136buffer, and a transfer direction. The transfer direction may be host to accelerator136or accelerator136to host. The DMA manager310is further configured to securely write a finalization command to the accelerator136to finalize an authentication tag (AT) for the secure DMA transfer. The initialization command, the descriptor, and the finalization command may each be securely written and/or configured with an MMIO write request. 
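The initialization/descriptor/finalization command sequence just described can be summarized with the following sketch of how a running authentication tag might be accumulated over a multi-descriptor transfer. The actual in-line AT computation performed by the accelerator is not specified here; HMAC-SHA-256, the AtController class, and the shared key below are assumptions used only to show the initialize-update-finalize pattern and the final host-side comparison.

# Illustrative model only: HMAC-SHA-256 stands in for the unspecified in-line
# AT computation; 'key' is assumed to be shared between the TEE and accelerator.
import hmac, hashlib

class AtController:
    def __init__(self, key):
        self._key = key
        self._mac = None
    def initialize(self):                 # on the secure initialization command
        self._mac = hmac.new(self._key, digestmod=hashlib.sha256)
    def update(self, encrypted_chunk):    # per configured descriptor
        self._mac.update(encrypted_chunk)
    def finalize(self):                   # on the secure finalization command
        return self._mac.digest()

key = b"\x01" * 32
accelerator_at = AtController(key)
host_at = AtController(key)  # the host validator computes the expected AT the same way

accelerator_at.initialize(); host_at.initialize()
for chunk in (b"descriptor-0 ciphertext", b"descriptor-1 ciphertext"):
    accelerator_at.update(chunk); host_at.update(chunk)

# After the finalization command, the TEE reads the accelerator's AT and
# compares it against the expected AT; a mismatch indicates failure.
print(hmac.compare_digest(accelerator_at.finalize(), host_at.finalize()))  # True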
The DMA manager310may be further configured to determine whether to transfer additional data in response to securely configuring the descriptor; the finalization command may be securely written in response to determining that no additional data remains for transfer. The AT controller318is configured to initialize an AT in response to the initialization command from the TEE302. The AT controller318is further configured to finalize the AT in response to the finalization command from the TEE302. The DMA engine320is configured to transfer data between the host memory buffer and the accelerator136buffer in response to the descriptor from the TEE302. For a transfer from host to accelerator136, transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator136buffer in response to decrypting the encrypted data. For a transfer from accelerator136to host, transferring the data includes copying plaintext data from the accelerator136buffer and forwarding encrypted data to the host memory buffer in response to encrypting the plaintext data. The accelerator cryptographic engine312is configured to perform a cryptographic operation with the data in response to transferring the data and to update the AT in response to transferring the data. For a transfer from host to accelerator136, performing the cryptographic operation includes decrypting encrypted data to generate plaintext data. For a transfer from accelerator136to host, performing the cryptographic operation includes encrypting plaintext data to generate encrypted data. The host validator308is configured to determine an expected AT based on the secure DMA transfer, to read the AT from the accelerator136in response to securely writing the finalization command, and to determine whether the AT from the accelerator136matches the expected AT. The host validator308may be further configured to indicate success if the ATs match and to indicate failure if the ATs do not match. FIG.4illustrates one embodiment of a system400having a computing device420employing a container orchestration controller (or controller)410. In one embodiment, container orchestration enables automated deployment, configuration, coordination and management of multi-container workloads in a containerized architecture. As shown inFIG.4, computing device420includes a host server computer serving as a host machine for employing controller410to facilitate a provisioning of cluster life-cycles (e.g., public and private) accessible by customer organizations421via a platform as a service (PaaS) or infrastructure as a service (IaaS). Computing device420may include (without limitation) server computers (e.g., cloud server computers, etc.), desktop computers, cluster-based computers, set-top boxes (e.g., Internet-based cable television set-top boxes, etc.), etc. Computing device420includes an operating system ("OS")406serving as an interface between one or more hardware/physical resources of computing device420and one or more client devices430A-430N, etc. Computing device420further includes processor(s)402, memory404, input/output ("I/O") sources408, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, etc. In one embodiment, host organization101may further employ a production environment that is communicably interfaced with client devices430A-N through host organization101.
Client devices430A-N may include (without limitation) customer organization-based server computers, desktop computers, laptop computers, mobile computing devices, such as smartphones, tablet computers, personal digital assistants, e-readers, media Internet devices, smart televisions, television platforms, wearable devices (e.g., glasses, watches, bracelets, smartcards, jewelry, clothing items, etc.), media players, global positioning system-based navigation systems, cable setup boxes, etc. In one embodiment, the illustrated database(s)140store (without limitation) information and underlying database records having customer and user data therein on to process data on behalf of customer organizations421A-N. In some embodiments, host organization101receives input and other requests from a plurality of customer organizations421A-N over one or more networks435; for example, incoming data, or other inputs may be received from customer organizations421A-N to be processed using database system140. In one embodiment, each customer organization421A-N is an entity selected from a group consisting of a separate and distinct remote organization, an organizational group within host organization101, a business partner of host organization101, a customer organization421A-N that subscribes to cloud computing services provided by host organization101, etc. In one embodiment, requests are received at, or submitted to, a web server within host organization101. Host organization101may receive a variety of requests for processing by host organization101. For example, incoming requests received at the web server may specify services from host organization101are to be provided. Further, host organization101may implement a request interface via the web server or as a stand-alone interface to receive requests packets or other requests from the client devices430A-N. The request interface may further support the return of response packets or other replies and responses in an outgoing direction from host organization101to one or more client devices430A-N. In one embodiment, computing device420may include a server computer that may be further in communication with one or more databases or storage repositories, such as database(s)140, which may be located locally or remotely over one or more networks, such as network(s)435(e.g., cloud network, Internet, proximity network, intranet, Internet of Things (“IoT”), Cloud of Things (“CoT”), etc.). Computing device420is further shown to be in communication with any number and type of other computing devices, such as client computing devices430A-N, over one or more networks, such as network(s)435. In one embodiment, computing device420may serve as a service provider core for hosting and maintaining controller410as a SaaS or IaaS, and be in communication with one or more client computers430A-N, over one or more network(s)435, and any number and type of dedicated nodes. In such an embodiment, host organization101implements orchestration controller410to operate as a control plane during deployment and at runtime, to perform tasks such as carving out infrastructure resources needed for microservices to run and allocate the tasks to the different microservices based on their specific need or adapting to different load conditions. FIG.5illustrates one embodiment of a data center. As shown inFIG.5, the data center configuration includes traditional servers, racks of FPGAs, GPUs and storage devices, all of which are connected by infrastructure processing unit (IPUs). 
In one embodiment, IPUs comprise smart network interface cards (NICs) that not only perform traditional networking functions, but also have additional responsibilities in the control and management of infrastructure. Block 501 represents a single workload spanning disaggregated compute resources within the data center. As defined herein, a workload comprises services and resources (e.g., storage, network, compute, etc.) implemented to execute an application. Another major trend in computing has been the growth of microservices-based applications replacing monolithic applications. A microservice architecture defines loosely coupled services that collaborate to perform a larger function and are developed, deployed and managed independently. For ease of development, deployment and management of microservices, technologies such as Containers and Orchestrators (such as Kubernetes) are widely used. FIG. 6 illustrates one embodiment of a Kubernetes cluster. Kubernetes provides a cluster management platform implemented for automating deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes systems include various object types that define a set of primitives (e.g., containers, pods and clusters). Containers are packages that rely on virtual isolation to deploy and run applications that access a shared OS. Pods provide a higher level of abstraction that includes a group of containers that are guaranteed to be co-located on the same host machine to share resources. Containers within a pod can reference all other containers in the pod. A cluster includes two or more pods, in which each pod is assigned a unique pod identifier (ID). Although described herein with regard to a Kubernetes system, other embodiments may feature an implementation of different types of container orchestration architectures (e.g., Docker, Mesos, etc.). Currently, an orchestrator has prior knowledge of available hardware resources through initial static provisioning steps, and upon demand carves out requested resources from a static pool of resources for use by a given microservice. Additionally, the orchestrator maintains a static inventory of worker machines (e.g., that were provisioned) and allocates the worker machines from a static pool whenever a microservice requests resources. However, multiple problems exist in disaggregated computing in which the compute resources are distributed, and the availability is dynamic. One problem is that current orchestrators cannot dynamically compose a platform of disaggregated hardware resources per customer requirement, or be provisioned to have knowledge of an available pool of resources (e.g., CPUs, GPUs, FPGAs, storage, memory), where the resources are located, how to allocate the resources, how to set up communications amongst the resources, etc. Another problem is that the orchestrator is not currently enabled to dynamically create a worker machine that is composed of disaggregated hardware resources as requested by a microservice. FIG. 7A illustrates a conventional platform in which an orchestrator statically composes resources. According to one embodiment, an infrastructure manager is provided to enable dynamic platform composition for allocation to a microservices cluster. In such an embodiment, the infrastructure manager dynamically constructs the platform during a provisioning phase via IPUs attached to the disaggregated resources. 
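By way of non-limiting illustration, the following Python sketch contrasts the two orchestration models discussed above: allocating an entire pre-provisioned worker machine from a static pool, versus dynamically composing a worker node from individual disaggregated resources. The data shapes and names are hypothetical and are not drawn from any particular orchestrator.

```python
# Illustrative, non-limiting sketch: static-pool allocation vs. dynamic composition.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class DisaggregatedResource:
    resource_id: str
    kind: str            # e.g., "cpu", "gpu", "fpga", "storage"
    in_use: bool = False


def allocate_static_worker(static_pool: List[str]) -> Optional[str]:
    """Conventional model: hand out an entire pre-provisioned worker machine."""
    return static_pool.pop() if static_pool else None


def compose_dynamic_worker(request: Dict[str, int],
                           inventory: List[DisaggregatedResource]) -> Optional[List[str]]:
    """Dynamic model: combine individual disaggregated resources to satisfy a request."""
    chosen: List[DisaggregatedResource] = []
    for kind, count in request.items():
        free = [r for r in inventory if r.kind == kind and not r.in_use][:count]
        if len(free) < count:
            return None              # the request cannot be satisfied from the inventory
        chosen.extend(free)
    for r in chosen:
        r.in_use = True
    return [r.resource_id for r in chosen]


if __name__ == "__main__":
    inventory = [DisaggregatedResource("gpu-0", "gpu"),
                 DisaggregatedResource("gpu-1", "gpu"),
                 DisaggregatedResource("cpu-0", "cpu")]
    print(allocate_static_worker(["worker-7"]))                     # static pool model
    print(compose_dynamic_worker({"gpu": 2, "cpu": 1}, inventory))  # dynamic composition
```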
Dynamic composability enables a cloud service provider (CSP) to construct a platform on the fly based on available resources in a data center. FIG. 7B illustrates one embodiment of a dynamically composed platform. As shown in FIG. 7B, the platform includes a mix and match of resources, as opposed to the fixed resources shown in FIG. 7A. In a further embodiment, runtime orchestration by orchestration controller 110 enables dynamic composing/configuration of a worker node. In this embodiment, orchestration controller 110 schedules a microservice on a suitable worker node during deployment based on the worker node requirements provided by the microservice. In a further embodiment, a microservice includes a manifest file describing resource requirements (e.g., 4 GPUs, 2 CPU cores, 1 GB of storage, etc.). Thus, orchestration controller 110 may construct a worker node by combining network-connected resources in many different ways, which provides enhanced flexibility to use the resources most efficiently. A worker node is defined as an infrastructure resource on which a microservice is operating. FIG. 8 illustrates another embodiment of a platform 800 including orchestration controller 110, IPU 810 and a plurality of data center resources 850 (e.g., 850A-850C). According to one embodiment, platform 800 comprises a microservice control plane and data plane. As used herein, a control plane refers to a combined role of the orchestration controller 110 and IPU 810 in performing resource discovery, worker node configuration, composition of resources, establishing routing and communication, etc., while the data plane refers to the movement of data between various resources in a cluster during runtime. In one embodiment, IPU 810 enables discovery of resources, and performs management, scheduling and configuration functions. Additionally, IPU 810 reports information associated with a resource (or the resource's information), such as type, capabilities, security features, availability, etc., to a central infrastructure manager at orchestration controller 110. As shown in FIG. 8, IPU 810 includes coordination logic 812, resource manager 814, platform health logic 816, network 817, storage 818 and security engine 819. Coordination logic 812 provides coordination with orchestration controller 110. In one embodiment, coordination logic 812 coordinates resource discovery, allocation, scheduling, load balancing, performance management, etc. with orchestration controller 110. Resource manager 814 facilitates the management of resources at resources 850. Platform health logic 816 maintains platform health statistics (e.g., key performance indicators (KPIs), usage status, etc.) via monitoring and telemetry. Security engine 819 provides attestation for the platform (e.g., including IPU 810 and one or more resources 850). FIG. 9 illustrates another embodiment of IPU 810. As shown in FIG. 9, the security architecture of IPU 810 provides isolation of a customer's control and data plane, via tenant security 910, from being accessed by infrastructure management 920. Additionally, the infrastructure management 920 control and data are protected from networking components associated with a tenant. In a further embodiment, IPU 810 includes a root of trust 930 that protects infrastructure management 920 to secure startup and attest to the entire platform 800 environment. IPU 810 also includes microservices orchestration 940 that provides for orchestration of resources 850. As a result, orchestration occurs at IPU 810, rather than at a CPU. 
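By way of non-limiting illustration, the following Python sketch shows the kind of resource report that an IPU's coordination logic might forward to the central infrastructure manager at the orchestration controller, covering resource type, capabilities, security features, availability, and health statistics as described above. The field names and the JSON report format are assumptions made for illustration only.

```python
# Illustrative, non-limiting sketch: an IPU reporting attached resources to the controller.
from dataclasses import dataclass, asdict
import json


@dataclass
class ResourceReport:
    resource_id: str
    resource_type: str           # e.g., "gpu", "fpga", "storage"
    capabilities: list           # e.g., ["fp16", "int8"]
    security_features: list      # e.g., ["attestation", "isolation"]
    available: bool
    health_kpis: dict            # maintained by platform health logic


class SimIpu:
    def __init__(self, attached_resources):
        self._resources = attached_resources

    def discover_and_report(self) -> str:
        """Coordination logic: serialize a report for every attached resource."""
        reports = [asdict(r) for r in self._resources]
        return json.dumps({"reports": reports})


if __name__ == "__main__":
    ipu = SimIpu([ResourceReport("gpu-0", "gpu", ["fp16"], ["attestation"], True,
                                 {"utilization": 0.1})])
    print(ipu.discover_and_report())
```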
In yet a further embodiment, microservices orchestration 940 may logically partition each resource 850 into sub-accelerators. Referring back to FIG. 8, resources 850 provide acceleration resource services 856 (e.g., 856A-856C), such as GPUs, CPUs, FPGAs, storage, etc. In one embodiment, resources 850 each include a telemetry engine 854 (e.g., 854A-854C) to perform telemetry services to collect measurement data associated with the use of acceleration services 856. Resources 850 also provide a standard set of interfaces to enable running microservices securely at arbitrary granularity and with QoS assurance. Thus, each resource 850 includes a security engine 852 (e.g., 852A-852C) that provides for attestation to prove the authenticity and integrity of the resource 850. Additionally, security engine 852 creates a trusted isolation of arbitrary granularity to match the resources requested by a microservice, such as an acceleration service 856. Security engine 852 also facilitates trusted peer-to-peer communication to enable larger microservices that span resources 850. FIG. 10 is a flow diagram illustrating one embodiment of a microservices cluster setup process. At processing block 1010, a cluster administrator introduces and provisions new resources in one or more clusters. In one embodiment, this process comprises setting up one or more resources (e.g., GPU, FPGA, CPU, storage, etc.) within a rack and interfacing the resources with IPU 810. At processing block 1020, IPU 810 discovers and enumerates the resources. In one embodiment, IPU 810 also authenticates and attests the resources via security engine 819 and a security engine 825 at the resources. In a further embodiment, IPU 810 sets up a long-term secure communication session with a manager at each resource 850 and assigns unique internet protocol (IP) address endpoints. At processing block 1030, a report of the resource capabilities, long-term secure communication sessions and IP address endpoints is transmitted to orchestration controller 410. Subsequently, orchestration controller 410 updates its state to reflect the presence of the new resources within the cluster. In one embodiment, orchestration controller 410 may have a network (e.g., out-of-band or in-band management) through which it works together with various IPUs 810 to track how many resources are in use, as well as their health. At processing block 1040, identity and certificate provisioning of the resources 850 is performed by interacting with a secure processing element within a resource 850. FIG. 11 is a flow diagram illustrating one embodiment of a process for composing a node. At processing block 1110, a developer (e.g., a microservice developer) provides a worker node configuration to orchestration controller 410 in the form of a manifest. In one embodiment, the manifest lists the types of resources that are needed, attributes related to the resources, details regarding the workload that will execute on the resources, as well as other metadata. In current implementations, manifests include information regarding a containerized application image and where the image may be located (e.g., in a registry or a local store). According to one embodiment, registries are provided within each accelerator to store configuration information (e.g., bitstreams of FPGAs, compute kernels for GPUs, etc.). At processing block 1120, orchestration controller 410 finds available resources within the platform. 
In one embodiment, orchestration controller 410 examines the available resources based on a persistent cluster state, and schedules the corresponding resources by interacting with a node agent 813 within coordination logic 812 of IPU 810. IPU node agents 813 are control plane components that communicate with orchestration controller 410. In one embodiment, node agents 813 operate as endpoints with which orchestration controller 410 may communicate for management-related functions. In such an embodiment, a node agent 813 may listen for new requests from the orchestration controller 410 (e.g., via out-of-band or in-band management). In a further embodiment, orchestration controller 410 assigns an identifier (or composed platform ID) to a resource and creates a mapping to individual resource IDs. Further, orchestration controller 410 removes the resource IDs from an available resources pool. Accordingly, orchestration controller 410 returns a failure message in instances in which a resource requested by a manifest is not available. At processing block 1130, a node agent 813 having a corresponding platform ID and resource ID to be allocated receives a configuration file including configuration information from orchestration controller 410 during a scheduling process. In one embodiment, the configuration file provides details (e.g., on how to reach the other endpoint, such as an IP address and port number) regarding each IPU node agent 813 involved in configuring the composable platform. In a further embodiment, the IPU 810 managing CPU resources operates as a master, and establishes mutually authenticated secure channels with the IPUs having the other resources 850. In yet a further embodiment, this master IPU 810 requests virtualized resource 850 endpoint objects from the other IPUs 810. FIG. 12A illustrates one embodiment of the platform after receipt of the worker node configuration request at an IPU 810 from orchestration controller 410. At processing block 1140, the master IPU 810 exposes the virtualized resource 850 endpoint as a hot-pluggable PCIe device that is enumerated on a CPU platform. In one embodiment, the actual translation (e.g., from CPU platform ←PCIe→ custom protocol (such as accelerator over fabric) ←PCIe→ accelerator) is handled transparently by the IPUs. The translation is designed as a protocol, similar to NVMe over Fabric (e.g., XPU over Fabric), that encapsulates the underlying transfer mechanisms. FIG. 12B illustrates one embodiment of the platform after a virtualized accelerator endpoint has been exposed. At processing block 1150, an IPU 810 transmits a message to orchestration controller 410 indicating that the worker node has been successfully composed. At processing block 1160, an IPU 810 receives the specification for an execution environment for a microservice from orchestration controller 410. At processing block 1170, an IPU 810 communicates with a registry to retrieve one or more images associated with the configuration information included in the configuration file. In one embodiment, an image comprises container images, bitstreams, configuration information, etc. At processing block 1180, the IPU 810 verifies the image. In one embodiment, the IPU verifies the image by verifying the image signature, and decrypting and inspecting the image for potentially malicious code. FIG. 12C illustrates one embodiment of the platform after images have been pulled by each IPU 810. At processing block 1190, an IPU 810 transfers the respective images to the resource 850 management bitstream, where the resource 850 creates an execution environment based on the provided image. 
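By way of non-limiting illustration, the following Python sketch models the image-handling portion of the flow described above (processing blocks 1170-1190): an image is retrieved from a registry, its signature is verified, and only then is it handed to the resource to create an execution environment. The registry contents, the HMAC-based signature check, and the omission of the decryption and inspection steps are simplifications assumed for this sketch.

```python
# Illustrative, non-limiting sketch: retrieve, verify, and deploy an image.
import hashlib
import hmac


FAKE_REGISTRY = {
    "fpga-bitstream:v1": b"...bitstream bytes...",   # hypothetical image content
}


def fetch_image(name: str) -> bytes:
    return FAKE_REGISTRY[name]


def verify_image(image: bytes, signature: bytes, signing_key: bytes) -> bool:
    """Models signature verification; a real IPU would also decrypt and inspect
    the image for potentially malicious code before accepting it."""
    expected = hmac.new(signing_key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


def deploy(name: str, signature: bytes, signing_key: bytes, resource) -> bool:
    image = fetch_image(name)                              # processing block 1170
    if not verify_image(image, signature, signing_key):    # processing block 1180
        return False
    resource.create_execution_environment(image)           # processing block 1190
    return True


class SimResource:
    def create_execution_environment(self, image: bytes) -> None:
        print(f"execution environment created from {len(image)} byte image")


if __name__ == "__main__":
    key = b"shared-signing-key"
    sig = hmac.new(key, FAKE_REGISTRY["fpga-bitstream:v1"], hashlib.sha256).digest()
    assert deploy("fpga-bitstream:v1", sig, key, SimResource())
```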
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below. Example 1 includes an apparatus comprising a plurality of disaggregated data center resources and an infrastructure processing unit (IPU), communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation of microservices cluster. Example 2 includes the subject matter of Example 1, further comprising an orchestration controller, communicatively coupled to the IPU, to compose the platform via the IPU during a provisioning phase. Example 3 includes the subject matter of any of Examples 1-2, wherein the orchestration controller schedules a microservice at one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice. Example 4 includes the subject matter of any of Examples 1-3, wherein the IPU discovers and performs management of the plurality of disaggregated data center resources. Example 5 includes the subject matter of any of Examples 1-4, wherein the IPU reports information associated with each of the plurality of disaggregated data center resources to the orchestration controller. Example 6 includes the subject matter of any of Examples 1-5, wherein the IPU authenticates and attests the plurality of disaggregated data center resources. Example 7 includes the subject matter of any of Examples 1-6, wherein the IPU establishes a communication session with each of the plurality of disaggregated data center resources. Example 8 includes the subject matter of any of Examples 1-7, wherein the IPU receives a configuration file including configuration information from the orchestration controller during a scheduling process. Example 9 includes the subject matter of any of Examples 1-8, wherein the IPU exposes a virtualized resource endpoint at a disaggregated data center resource. Example 10 includes the subject matter of any of Examples 1-9, wherein the IPU transmits a message to the orchestration controller indicating that a disaggregated data center resource has been composed and receives a specification for an execution environment for a microservice from the orchestration controller. Example 11 includes the subject matter of any of Examples 1-10, wherein the IPU retrieves one or more images associated with the configuration information included in the configuration file from a registry and transfers the one or more images to a disaggregated data center resource. Example 12 includes a method comprising performing provisioning at an infrastructure processing unit (IPU) to compose a platform of the plurality of disaggregated data center resources for allocation of microservices cluster and performing orchestration to compose one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice Example 13 includes the subject matter of Example 12, wherein performing the provisioning comprises the IPU discovering and managing of the plurality of disaggregated data center resources. Example 14 includes the subject matter of any of Examples 12-13, wherein performing the provisioning further comprises the IPU reporting information associated with each of the plurality of disaggregated data center resources to the orchestration controller. 
Example 15 includes the subject matter of any of Examples 12-14, wherein performing the provisioning further comprises the IPU authenticating the plurality of disaggregated data center resources, the IPU attesting the plurality of disaggregated data center resources and the IPU establishing a communication session with each of the plurality of disaggregated data center resources. Example 16 includes the subject matter of any of Examples 12-15, wherein performing the orchestration comprises scheduling a microservice at one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice. Example 17 includes the subject matter of any of Examples 12-16, wherein performing the orchestration further comprises the IPU receiving a configuration file including configuration information from an orchestration controller, transmitting a message to the orchestration controller indicating that a disaggregated data center resource has been composed; and receiving a specification for an execution environment for a microservice from the orchestration controller. Example 18 includes the subject matter of any of Examples 12-17, wherein performing the orchestration further comprises the IPU retrieving one or more images associated with the configuration information included in the configuration file from a registry and transferring the one or more images to a disaggregated data center resource. Example 19 includes a method comprising wherein performing the orchestration further comprises the IPU retrieving one or more images associated with the configuration information included in the configuration file from a registry and transferring the one or more images to a disaggregated data center resource. Example 20 includes the subject matter of Example 19, wherein the resource management circuitry discovers and performs management of the plurality of disaggregated data center resources. Example 21 includes the subject matter of any of Examples 19-20, wherein the resource management circuitry reports information associated with each of the plurality of disaggregated data center resources to the orchestration controller. Example 22 includes the subject matter of any of Examples 19-21, wherein the resource management circuitry establishes a communication session with each of the plurality of disaggregated data center resources. Example 23 includes the subject matter of any of Examples 19-22, wherein the coordination circuitry receives a configuration file including configuration information from the orchestration controller during a scheduling process. Example 24 includes at least one computer readable medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform provisioning at an infrastructure processing unit (IPU) to compose a platform of the plurality of disaggregated data center resources for allocation of microservices cluster and perform orchestration to compose one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice. The above Detailed Description includes references to the accompanying drawings, which form a part of the Detailed Description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. 
However, also contemplated are examples that include the elements shown or described. Moreover, also contemplated are examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. Publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) are supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In addition, “a set of” includes one or more elements. In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” “third,” etc. are used merely as labels, and are not intended to suggest a numerical order for their objects. The terms “logic instructions” as referred to herein relates to expressions which may be understood by one or more machines for performing one or more logical operations. For example, logic instructions may comprise instructions which are interpretable by a processor compiler for executing one or more operations on one or more data objects. However, this is merely an example of machine-readable instructions and examples are not limited in this respect. The terms “computer readable medium” as referred to herein relates to media capable of maintaining expressions which are perceivable by one or more machines. For example, a computer readable medium may comprise one or more storage devices for storing computer readable instructions or data. Such storage devices may comprise storage media such as, for example, optical, magnetic or semiconductor storage media. However, this is merely an example of a computer readable medium and examples are not limited in this respect. The term “logic” as referred to herein relates to structure for performing one or more logical operations. For example, logic may comprise circuitry which provides one or more output signals based upon one or more input signals. Such circuitry may comprise a finite state machine which receives a digital input and provides a digital output, or circuitry which provides one or more analog output signals in response to one or more analog input signals. Such circuitry may be provided in an application specific integrated circuit (ASIC) or field programmable gate array (FPGA). 
Also, logic may comprise machine-readable instructions stored in a memory in combination with processing circuitry to execute such machine-readable instructions. However, these are merely examples of structures which may provide logic and examples are not limited in this respect. Some of the methods described herein may be embodied as logic instructions on a computer-readable medium. When executed on a processor, the logic instructions cause a processor to be programmed as a special-purpose machine that implements the described methods. The processor, when configured by the logic instructions to execute the methods described herein, constitutes structure for performing the described methods. Alternatively, the methods described herein may be reduced to logic on, e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC) or the like. In the description and claims, the terms coupled and connected, along with their derivatives, may be used. In particular examples, connected may be used to indicate that two or more elements are in direct physical or electrical contact with each other. Coupled may mean that two or more elements are in direct physical or electrical contact. However, coupled may also mean that two or more elements may not be in direct contact with each other, but yet may still cooperate or interact with each other. Reference in the specification to “one example” or “some examples” means that a particular feature, structure, or characteristic described in connection with the example is included in at least an implementation. The appearances of the phrase “in one example” in various places in the specification may or may not be all referring to the same example. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. Although examples have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
55,230
11861407
DETAILED DESCRIPTION The same technical features in the various figures are indicated by the same reference symbols, wherein a technical feature is normally described only once. In some examples, a mobile end device, such as a vehicle, may be configured for computation offloading from the mobile end device (e.g., from the vehicle) to at least one edge computer and/or at least one cloud computer, comprising the following steps: 1) obtaining resource information from the at least one edge computer (e.g. available resources, latencies, availabilities, power consumption, connection quality, costs, etc.), 2) obtaining resource information from the at least one cloud computer (e.g. available resources, latencies, availabilities, power consumption, connection quality, costs, etc.), 3) obtaining application information from at least one system application in the mobile end device, such as the vehicle (resource demand, location, application information, latency requirements, data quantities, data types (private, local, global, etc.)), and 4) assigning a computing capacity for the at least one system application in the mobile end device, such as the vehicle, to the at least one edge computer and/or the at least one cloud computer, preferably on the basis of the resource information and application information that have been obtained. In some examples, an edge computer may be configured as a computing node in an edge node network, which has a processor with computing power and a communication unit for exchanging data with other edge computers and/or end devices. An edge computer can provide computing power at the edge of the edge node network. The edge computer can receive data from the end device according to the invention, e.g., a vehicle, via the communication unit, which it can then process with its own computing power on its processor. The results of this processing can be sent back to the vehicle by the edge computer via the communication unit. The data can comprise sensor data, for example, which can be evaluated on the edge computer. It is also conceivable for the edge computer to assume different computing tasks for different system applications in the mobile end device, e.g., in the vehicle. The mobile end device, or some of the systems in the mobile end device, can send data to at least one edge computer and/or at least one cloud computer, where the data may be processed with external computing, if, for example, there is no computing power available for this in the mobile end device, or the computing power is insufficient, or is being used elsewhere. Of the potentially numerous edge and cloud computers, at least one near edge computer and/or at least one remote cloud computer may be selected, which is best suited to executing a desired system application in the mobile end device. In some examples, an edge computer as set forth in the present disclosure can be understood to be a stationary edge computer as well as a mobile edge computer. A stationary edge computer may be configured as a base station for a mobile service provider and/or a network provider. A mobile edge computer may be installed at different locations where computing power is needed in mobile end devices, in particular vehicles, e.g., at intersections, parking lots, etc. It is also conceivable for a mobile edge computer as set forth herein to be in motion while it provides the computing power. 
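By way of non-limiting illustration, the following Python sketch corresponds to the four steps recited above: steps 1) to 3) are assumed to have populated the resource information and application information, and step 4) assigns a computing capacity by selecting a candidate that satisfies the application's requirements. The data shapes, the latency/capacity filter, and the lowest-cost tie-breaker are assumptions made for illustration.

```python
# Illustrative, non-limiting sketch: assigning a computing capacity (step 4).
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResourceInfo:
    name: str
    kind: str                 # "edge" or "cloud"
    latency_ms: float
    available_capacity: int
    cost_per_unit: float


@dataclass
class ApplicationInfo:
    app: str
    required_capacity: int
    max_latency_ms: float
    priority: int


def assign_computing_capacity(app: ApplicationInfo,
                              candidates: list) -> Optional[ResourceInfo]:
    """Steps 1-3 are assumed to have populated `candidates` and `app`; step 4 picks
    a target that satisfies the application's requirements at the lowest cost."""
    feasible = [c for c in candidates
                if c.available_capacity >= app.required_capacity
                and c.latency_ms <= app.max_latency_ms]
    return min(feasible, key=lambda c: c.cost_per_unit) if feasible else None


if __name__ == "__main__":
    edge = ResourceInfo("edge-intersection-7", "edge", 8.0, 4, 0.02)
    cloud = ResourceInfo("cloud-region-a", "cloud", 45.0, 64, 0.01)
    fusion = ApplicationInfo("sensor data fusion", 2, 20.0, priority=1)
    print(assign_computing_capacity(fusion, [edge, cloud]))   # expected: the edge computer
```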
By way of example, a vehicle, e.g., a vehicle belonging to a fleet of vehicles, a drone, or a mobile base station can be used as a mobile edge computer. It is conceivable within the framework of the present disclosure that the computing power that is offloaded from the mobile end device for a system application, is entirely offloaded or only partially offloaded. Decentralized computing capacities as disclosed herein may be advantageous as resources for system applications and/or computing operations of system applications in mobile end devices, in particular vehicles. In some examples, edge computers can be located in close proximity, or within a specific distance (e.g. 100 to 500 meters) to a mobile end device, such as a vehicle (e.g. at intersections or on other vehicles). Cloud computers can be located in remote computing centers. As a result, it may be necessary to address the decentralized resources, (e.g., the edge computer and/or cloud computer) in accordance with the requirements for this. Specific system applications or computing operations for a specific system application may be assigned to an edge computer or a cloud computer based on existing, decision-relevant resource information and application information, as well as properties of transmission paths. Advantageously, decision-relevant variables in the external resources (e.g., data transfer rate, latency, power consumption, costs, etc.) may be taken into account within the framework of the present disclosure, both from the perspective of the system applications that are to be executed, and from the perspective of the network, such as the available edge computers and/or cloud computers in the network. Each system application in the mobile end device, in particular in the vehicle, can in turn be assigned different application information that describes the various requirements or properties of the respective system application, e.g., priority, resource requirements, acceptable waiting times, file size, data type, power requirements and/or costs. The evaluation of all of the information and the assignment of optimal computing capacities based on the given requirements may be carried out by a control unit in a mobile end device, e.g., a vehicle, which functions as a type of orchestrator for all of the possible system applications for computation offloading from the mobile end device, in particular a vehicle, to at least one near edge computer and/or at least one remote cloud computer, of the potentially numerous edge and cloud computers. In the case of a vehicle serving as a mobile end device as set forth in the present disclosure, the control unit can be part of the central control unit for the vehicle, or it can be a separate control unit, designed to monitor the network (edge computer and/or cloud computer) within the reception range of the vehicle; monitor the system applications within the vehicle; manage the computing power within the vehicle, and offload computing power from the vehicle to at least one near edge computer and/or at least one remote cloud computer (of the potentially numerous edge and cloud computers). With mobile end devices, such as vehicles that are in motion, the availability of edge computers and/or cloud computers may change. The present disclosure advantageously makes it possible to dynamically take available edge computers and/or cloud computers and their resource information into account. 
If the application information is known for system applications in the mobile end device, in particular in a vehicle, computational offloading may be configured to be dynamic (e.g., temporally and/or locally variable, and/or variable with regard to data management and/or data transfer rates) from the perspective of the mobile end device (e.g., vehicle). The requirements for time-critical system applications in the mobile end device, e.g. a vehicle, may also be taken into account for the applications in an optimal manner. In some examples, computation offloading from the mobile end device (e.g., vehicle) to at least one edge computer and/or at least one cloud computer may be configured such that the resource information from the at least one edge computer or the at least one cloud computer includes at least one of the following:
position, e.g. geographic location,
properties of the connection, e.g. connection quality,
services offered,
data transfer rate,
computing power and/or memory (volatile, non-volatile),
workload,
reception range (seen geographically, in relation to people, in relation to the computer, in relation to content, etc.),
temporal availability,
available computing capacity,
reliability,
response time,
power consumption,
costs.
As a result, both current and changing aspects can be taken into account that are specific to the available edge computer and/or cloud computer. These aspects may also be relevant to decisions regarding the suitability of the edge computer and/or cloud computer for providing computing power for specific system applications in the mobile end device, in particular a vehicle. In some examples, computation offloading from the mobile end device to at least one edge computer and/or at least one cloud computer may be configured such that the application information from at least one system application in the mobile end device includes at least one of the following:
priority,
resource requirements,
acceptable response time,
file size,
data type,
power requirements,
costs.
The priority makes it possible to check how important the respective system application is for the mobile end device at the moment, or will be while the mobile end device is in motion, e.g., while the vehicle is travelling. Advantageously, safety-relevant system applications can be assigned a higher priority than entertainment functions. "Resource requirements" are those resources needed for executing the respective system application. "Acceptable response time" refers to the maximum (reasonable) latency for the system application. "File size" indicates the size of the data file that is to be sent. "Data type" refers more precisely to the data that are to be sent. Possible properties can include data confidentiality. If private data are transferred, the data are from local users. If global data are transferred, these data can come from superordinate users, e.g., for other mobile end devices, e.g., vehicles. "Power consumption" relates to information regarding the energy needed to execute the system application. "Costs" relate to information regarding the costs involved in the use of the respective computing capacity. All of this application information may be stored in a control unit in some examples. The control unit can also use the possibilities externally available to the mobile end device to execute one or more system applications in a decentralized manner, and to have the data for the system application sent back to the mobile end device. 
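By way of non-limiting illustration, the following Python sketch shows how the application information listed above (priority, acceptable response time, file size, data type, power requirements, costs) might be stored in the control unit and consulted when deciding which targets may execute a system application. The rule that private data is only handled locally or on a near edge computer is purely an assumption made for this sketch; it is not mandated by the description above.

```python
# Illustrative, non-limiting sketch: storing and consulting application information.
from dataclasses import dataclass


@dataclass
class AppInfo:
    name: str
    priority: int                 # lower value = more important (assumption)
    acceptable_response_ms: float
    file_size_kb: int
    data_type: str                # "private", "local", or "global"
    power_budget_mw: int
    cost_limit: float


def allowed_targets(app: AppInfo) -> list:
    """Private or local data stays on the device or a near edge computer; global
    data may also be sent to a remote cloud computer (illustrative rule only)."""
    targets = ["local", "edge"]
    if app.data_type == "global":
        targets.append("cloud")
    return targets


if __name__ == "__main__":
    gesture = AppInfo("gesture recognition", priority=3, acceptable_response_ms=50,
                      file_size_kb=120, data_type="private", power_budget_mw=400,
                      cost_limit=0.05)
    print(allowed_targets(gesture))   # ['local', 'edge']
```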
The control unit may also know the relevant resource information for the network. Based on the requirements for the system application, as well as the knowledge regarding relevant resource information for the network, the control unit makes a decision regarding where the system application in question can be executed. This can then take place either locally, in the mobile edge computer, and/or in an external computing center, such as a cloud computer. In some examples, computation offloading from the mobile end device to at least one edge computer and/or at least one cloud computer may be configured for at least one of the following system applications:navigation,position determination,streaming,data processing,gesture recognition,sensor data evaluation,sensor data fusion,driving maneuver calculation,driver assistance functions,driving modes in accordance with one of the numerous possible degrees of automation when operating the vehicle, if the mobile end device is a vehicle,highly automated and/or autonomous driving, if the mobile end device is a vehicle. In such configurations, performance within the mobile end device, in particular in the form of a vehicle, can advantageously be improved and expanded. Customer convenience can be also increased in this manner. In a method for offloading from the mobile end device to at least one edge computer and/or at least one cloud computer, the steps 1) to 4) described above may be repeated dynamically, based on the speed, and/or route, and/or desired system applications in the mobile end device. This enables a dynamic offloading of the system applications while the mobile end device is in motion, if there is a change in the system applications that are needed, and if there is a change in networks in the (geographic) reception range of the mobile end device. Furthermore, when utilizing computation offloading from the mobile end device to at least one edge computer and/or at least one cloud computer, the present disclosure can also provide that the at least one edge computer, in the form of a stationary and/or mobile edge node for a network, may be configured as a base station for a mobile service provider, and/or a network provider, a mobile telephone, smartphone, tablet, vehicle, drone, or some other wireless connection end device. This results in a flexible network with extended functions and better coverage and connectivity. In some examples, a control unit is disclosed for a mobile end device, in particular a vehicle, for computation offloading from the mobile end device, in particular from the motor vehicle, to at least one edge computer and/or at least one cloud computer, that may include a communication unit for acquiring resource information from the at least one edge computer and/or resource information from the at least one cloud computer, wherein the communication unit is designed to obtain application information from at least one system application in the mobile end device, in particular a vehicle, and a computer for assigning a computing capacity for the at least one system application in the mobile end device, in particular in the vehicle, to at least one edge computer and/or at least one cloud computer, preferably based on the acquired resource information and application information. Advantageously, the control unit can be configured to execute a method that can run as described herein. 
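By way of non-limiting illustration, the following Python sketch captures the decision described above: based on the application's requirements and the known resource information for the network, the control unit determines whether a system application runs locally, on a near edge computer, or on a remote cloud computer. The ordering of the checks and the thresholds are illustrative assumptions.

```python
# Illustrative, non-limiting sketch: choosing where a system application executes.
def choose_execution_site(required_capacity: int,
                          max_latency_ms: float,
                          local_free_capacity: int,
                          edge_latency_ms: float,
                          cloud_latency_ms: float) -> str:
    if local_free_capacity >= required_capacity:
        return "local"                         # enough computing power in the vehicle
    if edge_latency_ms <= max_latency_ms:
        return "edge"                          # offload to a near edge computer
    if cloud_latency_ms <= max_latency_ms:
        return "cloud"                         # offload to a remote cloud computer
    return "local-degraded"                    # no suitable external resource found


if __name__ == "__main__":
    # Driving-maneuver calculation: tight latency budget, little free local capacity.
    print(choose_execution_site(required_capacity=4, max_latency_ms=15,
                                local_free_capacity=1, edge_latency_ms=8,
                                cloud_latency_ms=60))   # expected: "edge"
```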
According to another example, the control unit can be part of a central control unit for the mobile end device, in particular the vehicle, or it can be a separate control unit. A central control unit can be incorporated in modern mobile end units, in particular vehicles, by the manufacturer. With a separate control unit, the functions of existing mobile end devices, e.g. vehicles, can be expanded. Furthermore, with a control unit for a mobile end device, e.g., a vehicle, configured to offload computing power from the mobile end device to at least one edge computer and/or at least one cloud computer, the present disclosure can provide a memory in which a dynamic list is stored that includes at least one edge computer and/or at least one cloud computer within the reception range of the mobile end device. This list can indicate the available external edge computers and/or cloud computers that are currently and/or will be within the reception range of, and/or along the route taken by, the mobile end device, in particular the vehicle. As a result, a better selection of edge computers and/or cloud computers that are being passed can be obtained, which can reliably provide the computing power while the mobile end device is in motion. The list can also be used to determine the areas where there is poor coverage by the edge computer and/or cloud computer, in order to avoid these regions as desired. Furthermore, with a control unit for a mobile end device for computation offloading from the mobile end device to at least one edge computer and/or at least one cloud computer, the present disclosure can provide that the computer is configured to dynamically update the mobile end device based on the speed and/or route and/or desired or required system applications, e.g. for a degree of driving automation. It is possible to determine which edge computers and/or cloud computers are currently available in this manner. FIG.1illustrates a method according to the present disclosure for a mobile end device1, e.g. a vehicle, for offloading computing power CI5from the mobile end device1, e.g. from the vehicle, to at least one near edge computer ECC and/or at least one remote cloud computer CC (of potentially numerous edge and cloud computers), that may include the following steps:1) obtaining resource information CI from the at least one edge computer ECC,2) obtaining resource information CI from the at least one cloud computer CC,3) obtaining application information I from at least one system application APP in the mobile end device1, e.g. a vehicle,4) assigning a computing capacity for the at least one system application APP in the mobile end device1, e.g. in a vehicle, to at least one edge computer ECC and/or at least one cloud computer CC. An edge computer ECC can be located at the edge of a network and have a processor that has a computing power CI5and a communication unit101for exchanging data, for example, with the mobile end device1according to the invention, e.g., the vehicle. The edge computer ECC can provide computing power CI5at the edge of the edge node network. The edge computer ECC can receive data from the mobile end device1, e.g., a vehicle, via the communication unit101, which it can process with its own computing power CI5on its processor. The results of the processing can be sent back to the mobile end device1, e.g., a vehicle, from the edge computer ECC via the communication unit101. These data can be sensor data, which can be evaluated on the edge computer ECC. 
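Referring back to the dynamic list described above, the following non-limiting Python sketch shows one way the control unit's memory could track which edge and cloud computers are currently within reception range as the mobile end device moves, and flag regions with poor coverage. The planar position model, reception ranges, and names are assumptions made for illustration.

```python
# Illustrative, non-limiting sketch: maintaining the dynamic list of reachable computers.
from dataclasses import dataclass
import math


@dataclass
class ExternalComputer:
    name: str
    kind: str          # "edge" or "cloud"
    x_m: float         # simplified planar position of the access point
    y_m: float
    range_m: float


def update_reachable_list(vehicle_xy, known_computers) -> list:
    """Keep only computers whose access point is within reception range."""
    vx, vy = vehicle_xy
    return [c for c in known_computers
            if math.hypot(c.x_m - vx, c.y_m - vy) <= c.range_m]


if __name__ == "__main__":
    known = [ExternalComputer("edge-intersection-7", "edge", 120.0, 40.0, 300.0),
             ExternalComputer("edge-parking-2", "edge", 2500.0, 900.0, 300.0),
             ExternalComputer("cloud-region-a", "cloud", 0.0, 0.0, float("inf"))]
    reachable = update_reachable_list((100.0, 50.0), known)
    print([c.name for c in reachable])        # the nearby edge computer and the cloud
    if not any(c.kind == "edge" for c in reachable):
        print("coverage gap: no edge computer within reception range")
```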
It is also conceivable that the edge computer ECC can assume various computing tasks for different system applications APP in the mobile end device1, e.g., a vehicle. The mobile end device1, e.g., a vehicle, or some systems within the mobile end device1, e.g., the vehicle, can send data to at least one edge computer ECC and/or at least one cloud computer CC within the framework of the invention, which are to be processed externally there, e.g., if the computing power CI5in the mobile end device1is not available, not sufficient, or otherwise used. According to some examples, at least one edge computer ECC and/or at least one cloud computer CC (of potentially numerous edge and cloud computers) can be selected that is capable of executing a desired system application APP in the mobile end device1. An edge computer ECC can be a stationary edge computer ECC, e.g., in the form of a base station for a mobile service provider and/or network provider, or it can be a mobile edge computer ECC. A mobile edge computer ECC as set forth in the present disclosure can be placed at different locations where computing power CI5is needed in mobile end devices1, e.g., vehicles, for example, at intersections, parking lots, etc. The mobile edge computer ECC as set forth in the present disclosure can also be in motion while it is providing the computing power CI5. A vehicle1, e.g., a vehicle from a fleet, a drone, or a mobile base station can be used as a mobile edge computer ECC as set forth in the present disclosure. In some examples, decentralized computing capacities may be used as resources for system applications APP in mobile end devices, e.g., a vehicle. The edge computers ECC can be near (e.g., 100 to 500 meters) the mobile end device1, e.g. a vehicle (e.g. at intersections or in other vehicles). The cloud computers CC can be in remote computer centers. The edge computer ECC and/or cloud computer CC may be addressed in accordance with system requirements. The following decision-relevant resource information CI from the external resources may be taken into account from the perspective of the available edge computer ECC and/or cloud computer CC for this:position, e.g. geographical location CI1,properties of the connection, e.g. connection quality CI2,services CI3offered,data transfer rate CI4,computing power and/or memory (volatile/non-volatile) CI5,workload CI6,reception range (seen geographically, in relation to people, in relation to the computer, in relation to content, etc.) CI7,temporal availability CI8,available computing capacity CI9,reliability CI10,response time CI11,power consumption CI12,costs CI13. Each system application APP in the mobile end device1, e.g. a vehicle, can in turn be assigned different application information I, which describe different requirements or properties of the respective system application APP, e.g.:priority I1,resource requirements I2,acceptable response time I3,file size I4,data type I5,power requirements I6,costs I7. The evaluation of all of the resource information CI and application information I and the assignment of an optimal computing power CI5on the mobile end device, e.g., the vehicle1, and/or on the at least one edge computer ECC and/or at least one cloud computer CC take place on a control unit10. The assignment takes place on the basis of the given requirements for the system application PP and the given resource information CI for the network. 
The control unit10functions as an orchestrator for all of the possible system applications APP for offloading computing CI5from the mobile end device1, e.g., the vehicle1, to at least one near edge computer ECC and/or at least one remote cloud computer CC (of potentially numerous edge and cloud computers). The orchestrator decides whether and to where an application is offloaded. The control unit10can be in the central control unit10for the mobile end device1, e.g. the vehicle, or it can be a separate control unit10, designed tomonitor the network (edge computer ECC and/or cloud computer CC) within the reception range of the mobile end device1, e.g. the vehicle, (i.e. within the range of the respective communication units11,101of the communication partner),monitor the system applications APP within the mobile end device1, e.g., the vehicle,manage the computing power CI5within the mobile end device1, e.g., the vehicle, andoffload computing power CI5from the mobile end device1, e.g., the vehicle, to at least one near edge computer ECC and/or at least one remote cloud computer CC (of potentially numerous edge and cloud computers). Because there is a change in available edge computers ECC and/or cloud computers CC with mobile end devices1, e.g. vehicles, that are in motion, the present disclosure advantageously enables the available computing sources to be taken into account dynamically. If the application information I for system applications APP in the mobile end device1, e.g., the vehicle, is known, the method according to the present disclosure enables a dynamic offloading of computing power CI5from the mobile end device1, e.g. the vehicle, even for time-critical system applications APP. In some examples, system applications APP can be configured as one or more of the following applications:navigation,position determination,streaming,data processing,gesture recognition,sensor data evaluation,sensor data fusion,driving maneuver calculation,driver assistance functions,driving modes in accordance with one of the numerous possible degrees of automation when operating the vehicle1,highly automated and/or autonomous driving. The control unit10according to the invention is shown by way of example inFIG.2. The control unit10contains a communication unit11for obtaining resource information CI from the at least one edge computer ECC and/or resource information CI from the at least one cloud computer CC. The communication unit11is designed to obtain application information I from at least one system application APP in the mobile end device1. The control unit10also contains a computer12for processing information and assigning computing capacity for the at least one system application APP in the mobile end device1, e.g., the vehicle1, to the at least one edge computer ECC and/or the at least one cloud computer. The control unit10can also include a memory14in which a dynamic list is stored that contains at least one edge computer ECC and/or at least one cloud computer CC within the reception range of the mobile end device1, e.g. the vehicle. The reception range for the mobile end device1can be determined by the data communication range between the communication unit11in the mobile end device1and the respective communication unit101in the network. It is also conceivable for the control unit10to use a suitable communication unit in the mobile end device1for communicating with the network. A mobile end device1, e.g. a vehicle, that has a control unit10, likewise forms an aspect of the present disclosure. 
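By way of non-limiting illustration, the following Python sketch models a control unit 10 with the four responsibilities described above: monitoring the network, monitoring the system applications, managing the computing power within the mobile end device, and offloading computing power. The method bodies are deliberately minimal placeholders and do not represent an actual vehicle control unit implementation.

```python
# Illustrative, non-limiting sketch: a control unit acting as an orchestrator.
class ControlUnit:
    def __init__(self):
        self.reachable_computers = []     # memory 14: dynamic list of ECC/CC in range
        self.system_apps = {}             # APP -> application information I
        self.local_capacity = 8           # illustrative units of computing power CI5

    def monitor_network(self, discovered):
        """Track the edge/cloud computers within reception range (via communication unit 11)."""
        self.reachable_computers = list(discovered)

    def monitor_applications(self, app_name, app_info):
        """Track the application information I for each system application APP."""
        self.system_apps[app_name] = app_info

    def manage_local_computing(self, requested):
        """Reserve computing power within the mobile end device if available."""
        if requested <= self.local_capacity:
            self.local_capacity -= requested
            return True
        return False

    def offload(self, app_name):
        """Offload a system application to the first reachable external computer."""
        if not self.reachable_computers:
            return None
        target = self.reachable_computers[0]
        return f"{app_name} offloaded to {target}"


if __name__ == "__main__":
    cu = ControlUnit()
    cu.monitor_network(["edge-intersection-7", "cloud-region-a"])
    cu.monitor_applications("streaming", {"priority": 5, "acceptable_response_ms": 200})
    if not cu.manage_local_computing(requested=16):
        print(cu.offload("streaming"))
```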
A network 100, e.g., a vehicle network 100 that has numerous networked mobile end devices 1, e.g., vehicles, forms another aspect of the present disclosure, wherein at least one of the numerous networked mobile end devices 1 can contain the control unit 10. An improved network can be obtained with the network 100, in particular a vehicle network 100, in which optimal use is made of the computing resources in the mobile end device 1, e.g., the vehicle, and the network. With a mobile end device 1 in the form of a vehicle, it is conceivable that a vehicle in the vehicle network 100 has already evaluated certain street signs. Subsequent vehicles then do not have to make this evaluation, if the results of the evaluation are distributed among the networked vehicles. Sensor data fusion and/or distribution between the vehicles for certain sensor data is also conceivable. The above description of the figures describes the present invention exclusively in the framework of examples. Individual features of the embodiments can be freely combined with one another, if this is reasonable from a technological perspective, without abandoning the framework of the present disclosure. LIST OF REFERENCE SYMBOLS
1 mobile end device
10 control unit
11 communication unit
12 computer
14 memory
100 network, vehicle network
101 communication unit
APP system application
CC cloud computer
CI resource information
CI1 position
CI2 connection quality
CI3 services offered
CI4 data transfer rate
CI5 computing power
CI6 workload
CI7 reception range
CI8 temporal availability
CI9 available computing capacity
CI10 reliability
CI11 response time
CI12 power consumption
CI13 costs
ECC edge computer
I application information
I1 priority
I2 resource requirements
I3 acceptable response time
I4 file size
I5 data type
I6 power consumption
I7 costs
26,069
11861408
DETAILED DESCRIPTION The technology is directed to discovering hardware acceleration services provided by hardware (HW) accelerator cards connected to a computing device via communication interfaces. In this regard, a processor may communicate a request for a listing of the functions and capabilities of the HW accelerator card connected to the computing device via the communication interface. A listing of the functions and capabilities of the HW accelerator card, hereinafter referred to as “acceleration services,” may be stored in the memory of the HW accelerator card. In response to receiving the request from the processor, the HW accelerator card may retrieve the listing from memory and provide a response to the processor that includes a listing of the HW acceleration services provided by the HW accelerator card. To overcome the deficiencies of discovering acceleration services, the technology described herein uses a standardized listing of identifiers that correspond to acceleration services that can be provided by the accelerators on HW accelerator cards. In this regard, each HW accelerator card may store a listing of identifiers that correspond to the acceleration services provided by the accelerators on that card. As the identifiers can provide more granularity than the device classes and subclasses currently used, processors which retrieve the listings from the HW accelerator cards will be able to determine and leverage more accelerator services offered by the accelerators on the HW accelerator cards. As used herein, the term “acceleration services” refers to the capabilities and functionalities offered by accelerators of a HW accelerator card. References to “acceleration services” of a HW accelerator card refer to the acceleration services of the accelerators on that HW accelerator card. Acceleration services may include capabilities and functionalities that an accelerator can leverage to control the processing of data, referred to herein as control-plane acceleration services. Acceleration services may also include capabilities and functionalities that an accelerator can leverage to process the data, referred to herein as data-plane acceleration services. For example, an accelerator can support acceleration services that provide controls and/or policies for sharing memory between memory on the host (the computing device) and the accelerator. This control-plane acceleration service can be identified and communicated as an acceleration service. As each HW accelerator card may have many accelerators, each HW accelerator card may provide many acceleration services having the same and/or different capabilities and functionalities. Further, each accelerator may include more than one function and capability. Example Systems FIG.1depicts an example architecture of a computing device110in which the features described herein may be implemented. This example should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. Computing device110may be a server, personal computer, or other such system. The architecture of the computing device110includes a processor112, memory114, and a hardware accelerator card118. The processor112may include one or more general purpose processors, such as a Central Processing Unit (CPU), and/or one or more special purpose processors, such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), etc. 
The processor112may be of any type including but not limited to one or more microprocessor (uP), one or more microcontroller (uC), one or more digital signal processor (DSP), or any combination thereof. The processor may include one or more levels of caching, one or more processor cores, and one or more registers. Each processor core may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. The processor112may be configured to execute computer-readable program instructions that may be contained in a data storage, such as instruction117stored in memory114, and/or other instructions as described herein. The memory114can store information accessible by the processor112, including instructions117that can be executed by the processor112. Memory can also include data116that can be retrieved, manipulated, or stored by the processor112. The memory114may be a type of non-transitory computer readable medium capable of storing information accessible by the processor112, such as a hard-drive, solid state drive, tape drive, optical storage, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. The instructions117can be a set of instructions executed directly, such as machine code, or indirectly, such as scripts, by the processor112. In this regard, the terms “instructions,” “steps,” and “programs” can be used interchangeably herein. The instructions117can be stored in object code format for direct processing by the processor112, or other types of computer language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The data116can be retrieved, stored, or modified by the processor112in accordance with the instructions117or other such instructions. For instance, although the system and method are not limited by a particular data structure, the data116can be stored in computer registers, in a distributed storage system as a structure having a plurality of different fields and records, or documents, or buffers. The data116can also be formatted in a computer-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data116can include information sufficient to identify relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories, including other network locations, or information that is used by a function to calculate relevant data. AlthoughFIG.1functionally illustrates the processor112and memory114as being within the same block, the processor112and memory114may actually include multiple processors and memories that may or may not be stored within the same physical housing. For example, some of the instructions117and data116can be stored on a removable CD-ROM and others within a read-only DRAM chip. Some or all of the instructions and data can be stored in a location physically remote from, yet still accessible by, the processor112. AlthoughFIG.1illustrates computing device110as including only one processor112, memory,114, and HW accelerator card118, the computing device110may include any number of processors, memory, and HW accelerator cards. Similarly, the processor120can actually include a collection of processors, which may or may not operate in parallel. The computing device may further include a hardware (HW) accelerator card118. 
The hardware accelerator card118may be any device configured to efficiently process particular types of tasks. Some examples of HW accelerator cards include network accelerator cards, video transcoding accelerator cards, security function accelerator cards, cryptography accelerator cards, sound processing accelerator cards, artificial intelligence accelerator cards, etc. Each of these HW accelerator cards may be configured to provide particular acceleration services such as compression, encryption, transcoding, hash generation, graphic processing, simulation, etc. Some HW accelerator cards may be configured to provide multiple acceleration services such as compression and encryption, or any other combination of acceleration services. Referring toFIG.2, the HW accelerator card may include a compute complex212, memory214, and accelerators228a,228b, and228c. The compute complex may be comprised of one or more processors. The one or more processors may control the general operation of the other components of the hardware accelerator, such as by distributing processing tasks amongst the accelerators228a-228cand communicating with other devices in the computing device110, such as processor112. The one or more processors of the compute complex212may be comprised of one or more general purpose processors and/or special purpose processors. Typically, the compute complex of a hardware accelerator card is comprised of one or more special purpose processors, such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), etc., capable of executing ARM-based instruction sets, although other instruction sets may be used. The accelerators228a-228cmay each be comprised of one or more processors capable of providing particular acceleration services. For example, each accelerator may be configured to provide particular acceleration services such as compression, encryption, transcoding, hash generation, graphic processing, simulation, etc. Some HW accelerator cards may be configured to provide multiple acceleration services such as compression and encryption, or any other combination of acceleration services. The one or more processors of the accelerators may be one or more special purpose processors, such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), specialized processors, etc. Although three accelerators are illustrated inFIG.2, including accelerators228a-228c, HW accelerator cards may include any number of accelerators. As previously explained, each individual accelerator may be configured to provide more than one acceleration service (e.g., more than one function and/or capability). Referring again toFIG.2, the HW accelerator card includes memory214. The memory214may be compared to memory114in that it may be any type of non-transitory computer readable medium capable of storing information accessible by the processor120, such as a hard-drive, solid state drive, tape drive, optical storage, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. Memory214may store information accessible by the compute complex212and/or accelerators228a-228c, including instructions217that can be executed by the compute complex212and/or accelerators228a-228c. Although not shown, each accelerator228a-228cmay have its own memory and/or a pool of shared memory for storing data and instructions for execution of tasks assigned by the compute complex212. 
The data216within memory214can be retrieved, stored or modified by the compute complex212and/or accelerators228a-228cin accordance with the instructions217or other such instructions. As further illustrated inFIG.2, the data216may include one or more acceleration service listings218. The acceleration service listing218may contain a list of the acceleration services provided by each accelerator228a-228c. The listings of acceleration services may be in a standardized form. In this regard, each particular acceleration service may be assigned a particular, unique identifier. All accelerators that have a certain acceleration service would include the unique identifier associated with that certain acceleration service in the listing of acceleration services. FIG.3illustrates example listings328a-328cwhich correspond to accelerators228a-228c, respectively, as stored within memory214. In this regard, memory214includes a listing for each accelerator on the HW accelerator card118. As illustrated, listing328aidentifies the acceleration services provided by accelerator228a, as identified by unique identifiers including Function1, Capability1, Function3, and Function5. Similarly, accelerator228bis capable of providing three acceleration services, and each acceleration service is identified within listing328bby its unique identifier, including Function1, Function5, and Capability9. Accelerator228cis capable of providing two acceleration services. Each of these two acceleration services is identified in listing328cby unique identifiers including Capability9and Function5. As further illustrated inFIG.3, accelerators that provide a common acceleration service may be associated with the same unique identifier in their respective listings. For instance, Function1is the unique identifier associated with a particular function capable of being performed by accelerators228aand228b. Thus, listings328aand328bcontain the same unique identifier Function1. Similarly, Capability9is the unique identifier associated with a particular capability of accelerators228band228c. Thus, listings328band328ccontain the same unique identifier of Capability9. The unique identifiers inFIG.3are merely examples of possible identifiers. Identifiers may include any value or other such indicator, including numbers, letters, symbols, etc. The listings328a-328care examples of a possible format for listing unique identifiers associated with accelerators of the HW accelerator card118. In some examples, the listings of accelerators may be stored in a combined listing, such as a spreadsheet or database. For example, the combined listing may identify each accelerator and the unique identifiers associated with the acceleration services provided by that accelerator. Similarly, the listing may be grouped according to accelerators. For instance, a first listing may include a combined listing for a first set of accelerators and a second listing may include a combined listing for a second set of accelerators. Other data may also be included in the listings. AlthoughFIGS.2and3illustrate the listings as being stored in memory214, the listings may be stored on the memory of one or more accelerators. Although not illustrated, a manager may maintain a repository of acceleration services and associated unique identifiers for the acceleration services. The manager may be an individual(s), a company, a collection of companies, a standards organization(s), etc. 
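As a rough illustration only, a combined listing of the kind described above might be represented as a simple mapping from accelerators to standardized identifiers, mirroring listings328a-328c. The identifier constants, dictionary keys, and the aggregation helper below are assumptions made for this Python sketch, not a format defined by the disclosure.

    # Hypothetical standardized identifiers; a real registry would be maintained
    # by the manager (e.g., a standards organization) described above.
    FUNCTION_1 = "Function1"
    FUNCTION_3 = "Function3"
    FUNCTION_5 = "Function5"
    CAPABILITY_1 = "Capability1"
    CAPABILITY_9 = "Capability9"

    # Combined listing stored in HW accelerator card memory: one entry per accelerator,
    # each mapping to the unique identifiers of the acceleration services it provides.
    acceleration_service_listing = {
        "accelerator_228a": [FUNCTION_1, CAPABILITY_1, FUNCTION_3, FUNCTION_5],
        "accelerator_228b": [FUNCTION_1, FUNCTION_5, CAPABILITY_9],
        "accelerator_228c": [CAPABILITY_9, FUNCTION_5],
    }

    def card_services(listing: dict[str, list[str]]) -> set[str]:
        """Aggregate the per-accelerator listings into a card-level set of identifiers."""
        return {service for services in listing.values() for service in services}

    print(sorted(card_services(acceleration_service_listing)))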
In addition to maintaining the repository, the manager may also assign the unique identifiers to each acceleration service and add additional acceleration services and corresponding unique identifiers when developed, received, or otherwise requested. By providing a repository of acceleration services and associated unique identifiers, the identifiers used to indicate acceleration services may be consistent across HW accelerator cards, even when the HW accelerator cards are manufactured by different vendors. Referring toFIG.2, the processor112may communicate directly with the hardware accelerator card118using a communication interface and protocol. For example, the processor(s)112may communicate with the hardware accelerator card(s) using PCIe interface260. AlthoughFIG.2illustrates a PCIe interface260, other communication interfaces and protocols may be used. For example, the processor(s)112may communicate with the HW accelerator card(s)118using one or more of a CAN interface and protocol, an SPI interface and protocol, a USB interface and protocol, an eSPI interface and protocol, an Ethernet interface and protocol, an IDE interface and protocol, or any other such interface and protocol. Communication between devices over the communication interface, such as processor112and HW accelerator card118over PCIe interface260may be controlled via an operating system executing on the computing device110. In this regard, the operating system may set up a handle to provide a communication channel between devices attached to the PCIe interface260. In some instances, the operating system may also close communication channels between different devices connected to the PCIe interface260. Although not shown inFIGS.1and2, the computing device110may include other components normally found in a personal computer and/or server such as a display device, for example, a monitor having a screen, a projector, a touch-screen, a small LCD screen, a television, or another device such as an electrical device that can be operable to display information processed by processor112. Computing device110may also include speakers, network interface devices, such as one or more modems and/or network interface cards. Computing device110may also include one or more user input devices, such as a mouse, keyboard, touch screen, microphone, etc. The computing device110may also include hardware for connecting some or all of the aforementioned components together with one another. Example Methods FIG.4is a flow diagram illustrating the process of discovering acceleration services provided by a HW accelerator card, such as HW accelerator card118connected to a processor, such as processor112via a communication interface, such as PCIe bus260. The processor112may request to communicate with the HW accelerator card118(shown in dashed line) via the PCIe interface. The operating system executing on the computing device may provide a communication channel over the PCIe bus between the HW accelerator card and processor112. Using the communication channel, the processor112may transmit a request for a listing of acceleration services provided by the accelerators on the HW accelerator card118, as shown by line423. In response to receiving the request from the processor112, the compute complex212of the HW accelerator card118may query and receive a listing of acceleration services from memory214of the HW accelerator card (or memory of the accelerators), as illustrated by arrows425and427, respectively. 
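The discovery exchange can be pictured, very loosely, as a request for the listing followed by a card-side lookup. The Python sketch below is a toy stand-in for that exchange; the class and function names are invented for the example and are not an actual PCIe or driver API, and the channel setup normally handled by the operating system is omitted.

    class HWAcceleratorCard:
        """Toy stand-in for the card-side compute complex; names are illustrative only."""
        def __init__(self, listing: dict[str, list[str]]):
            self._listing = listing  # acceleration service listing kept in card memory

        def handle_listing_request(self) -> list[str]:
            # Aggregate the per-accelerator listings into one response for the host.
            return sorted({s for services in self._listing.values() for s in services})

    def discover_acceleration_services(card: HWAcceleratorCard) -> list[str]:
        """Host side: request the listing over the communication channel and return
        the unique identifiers the card reports."""
        return card.handle_listing_request()

    card = HWAcceleratorCard({"acc_a": ["Function1", "Function5"], "acc_b": ["Capability9"]})
    services = discover_acceleration_services(card)
    if "Function1" in services:  # e.g., Function1 = compression in this toy registry
        print("card offers compression; host can offload compression tasks")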
When assembling this response, the HW accelerator card may aggregate the acceleration services of all accelerators. In certain instances, the HW accelerator card118may query only some accelerators. In some instances, the HW accelerator card118may aggregate the acceleration services of the accelerators in a hierarchical manner. In this regard, acceleration services may be hierarchical, in that one acceleration service may rely on or be dependent on another acceleration service. This hierarchical relationship between acceleration services may be identified and stored in this listing. In some instances, each level in the hierarchical relationship may identify the capabilities and functionalities of the levels underneath. The compute complex212may provide the listing of acceleration services to the processor112via the PCIe bus260, as shown by line429. Once the processor receives the listing of acceleration services, the communication channel may be closed. In the event the processor can leverage one or more acceleration services, the processor112may request that the HW accelerator card complete one or more tasks using one of the provided acceleration services offered by the accelerators on the HW accelerator card118.FIG.5illustrates a processor112requesting information regarding the acceleration services of a HW accelerator card118connected via PCIe bus260. In this regard, steps521-529correspond to steps421-429described above. As illustrated by arrow529, the HW accelerator card indicates that it is capable of providing compression services. Upon receiving the acceleration services, the processor112may provide a workload instruction including an indication of a location storing data and an instruction to the HW accelerator card118to compress the data, as shown by arrow531. The compute complex212of the HW accelerator card may then confirm the instruction and provide an ID that the processor112may communicate with to get status updates on the compression by the HW accelerator card118as shown by arrow533. The processor112may then request and receive a status of the compression as shown by arrows535and537, respectively. Once a polling request indicates that compression is complete, communication between the processor112and HW accelerator card118may cease or further tasks may be sent from the processor112to the HW accelerator card. AlthoughFIG.5illustrates a compression service, the processing performed by the HW accelerator can be any type of operation or combination of operations. Unless otherwise stated, the foregoing alternative examples are not mutually exclusive but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as "such as," "including" and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
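For the workload portion of the exchange (submit, receive an ID, poll for status), a minimal sketch might look like the following. The job-interface class, method names, and data location string are assumptions made for illustration; they are not a real driver or card API, and the card here "completes" the job on the first status poll purely to keep the example self-contained.

    import itertools
    import time

    class CardJobInterface:
        """Illustrative card-side job handling; not an actual driver API."""
        _ids = itertools.count(1)

        def __init__(self):
            self._jobs = {}

        def submit(self, service: str, data_location: str) -> int:
            job_id = next(self._ids)
            self._jobs[job_id] = {"service": service, "data": data_location, "done": False}
            return job_id  # confirmation plus an ID the host can use for status queries

        def status(self, job_id: int) -> str:
            job = self._jobs[job_id]
            job["done"] = True  # toy card finishes instantly; a real card would report progress
            return "complete" if job["done"] else "in progress"

    card = CardJobInterface()
    job = card.submit("Function1", "/buffers/input-0")  # e.g., compress the data at this location
    while card.status(job) != "complete":               # host polls until the card reports completion
        time.sleep(0.01)
    print("compression complete; further tasks may be sent")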
20,784
11861409
DETAILED DESCRIPTION The present disclosure relates to methods, apparatus, systems, and non-transitory computer-readable storage media for efficiently distributing across multiple computing resources the processing of satisfiability modulo theories (SMT) queries expressed in first-order logic and including theory variables (e.g., variables associated with the theory of strings, the theory of integers, the theories of data structures, etc.). According to some embodiments, as part of the computing-related services provided by a cloud provider network, many cloud providers also offer identity and access management services, which generally help users to control access and permissions to the services and resources (e.g., compute instances, storage resources, etc.) obtained by users via a cloud provider network. By using identity-based and resource-based policies, for example, users can granularly control which identities are able to access specific resources associated with the users' accounts and how those identities can use the resources. The configuration of such policies, however, can often become quite complex and it can quickly become challenging for users to understand all the security-related implications of such policies and their interrelationships. To alleviate some of these concerns, a cloud provider network may provide various analysis tools that help users analyze the security-related characteristics of the resources and associated policies within their accounts. One example of such a security tool is an access analyzer, which can be used to help users understand which identities can access particular resources associated with their account and, for example, help users identify whether their current policy configurations potentially provide unintended access to users outside of their organization. In this example, an access analyzer service may perform such analyses in part by translating a user's or organization's stored policies into equivalent logical statements (e.g., statements expressed in first-order logic) and using a suite of general-purpose and specialized logical solvers (e.g., SMT solvers) to verify whether certain security-related behaviors are possible or not. In this context, the logical solvers may reason about propositional logic statements including various string variables, e.g., corresponding to aspects of policies such as account identifiers, resource identifiers, and the like. In addition to identity and access management services, some cloud provider networks also provide source code review and optimization services, program or computer network verification services, among other types of services that utilize automated reasoning to help analyze the correctness of various types of computing systems. The SMT solvers described above generally attempt to prove or disprove formulae expressed in first-order logic with combinations of theories such as Presburger arithmetic, uninterpreted functions, or strings. Existing SMT solvers are generally monolithic, single process applications and no successful method exists for efficiently distributing the search for proofs or disproofs of such formulas across multiple computing resources. As the size and complexity of users' and organizations' policies and computing-related resources increase, the resources needed to reason and provide information about these resources in a timely manner using existing solvers can quickly outgrow the resources available on individual computing resources. 
These challenges, among others, are addressed by techniques described herein for efficiently distributing the analysis of SMT queries expressed in first-order logic and including theory variables among any number of separate computing resources (e.g., among separate processes, compute instances, containers, etc.). According to embodiments described herein, for example, a service of a cloud provider network receives a request to determine whether a formula is satisfiable (e.g., to verify some expected behavior of a user's or organization's set of policies or other such automated reasoning-based analysis). The service identifies a set of predicates in the formula based on a type of theory associated with the formula, where each predicate is a binary-valued function of at least one theory variable contained in the formula. In some embodiments, a search space associated with the formula is then partitioned into a set of sub-formulas, where each sub-formula is defined by a union of the formula with an assumption that a respective predicate of the set of predicates is either true or false. In some embodiments, a respective sub-formula of the set of sub-formulas is sent to an SMT solver running on each of a plurality of separate computing resources. Once an indication is received from the SMT solver running on any of the computing resources that its respective sub-formula is satisfiable, the policy analysis service can cause display of information indicating that the formula is satisfiable; otherwise, the policy analysis service can cause display of or otherwise transmit information indicating the formula is unsatisfiable. Among other benefits, the described analysis techniques enable efficient computation of SMT queries expressed in first-order logic and including theory variables, thereby also helping to improve the security posture of organizations' computing resources provided by cloud provider networks and other operating environments. 
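To make the sub-formula construction concrete, the following minimal Python sketch splits a toy formula over one string (theory) variable on a single predicate p into the two sub-formulas formula-and-p and formula-and-not-p, and checks each independently. It assumes the open-source z3-solver package, which is not named by the disclosure; the variable names and the toy formula are invented for the example.

    from z3 import String, StringVal, PrefixOf, Or, And, Not, Solver, sat

    arn = String("arn")
    # Toy "policy" formula over one string theory variable.
    f = Or(arn == StringVal("arn:aws:sts::111:AAA"),
           PrefixOf(StringVal("arn:aws:sts::111:BBB/"), arn))

    # One predicate over the theory variable; the search space is split on it.
    p = PrefixOf(StringVal("arn:aws:sts::111:BBB/"), arn)
    sub_formulas = [And(f, p), And(f, Not(p))]

    # Each sub-formula could be handed to a solver on a separate computing resource;
    # the original formula is satisfiable iff at least one sub-formula is.
    results = []
    for g in sub_formulas:
        s = Solver()
        s.add(g)
        results.append(s.check() == sat)
    print("formula satisfiable:", any(results))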
A provider network100(or “cloud” provider network) provides users with the ability to utilize one or more of a variety of types of computing-related resources160such as compute resources (e.g., executing virtual machine (VM) instances and/or containers, executing batch jobs, executing code without provisioning servers), data/storage resources (e.g., object storage, block-level storage, data archival storage, databases and database tables, etc.), network-related resources (e.g., configuring virtual networks including groups of compute resources, content delivery networks (CDNs), Domain Name Service (DNS)), application resources (e.g., databases, application build/deployment services), access policies or roles, identity policies or roles, machine images, routers and other data processing resources, etc. These and other computing resources may be provided as services106, such as a hardware virtualization service118that can execute compute instances, a storage service110that can store data objects, etc. The users (or “customers”) of provider networks100may utilize one or more user accounts that are associated with a customer account, though these terms may be used somewhat interchangeably depending upon the context of use. Users may interact with a provider network100across one or more intermediate networks104(e.g., the internet) via one or more interface(s), such as through use of application programming interface (API) calls, via a console implemented as a website or application, etc. An API refers to an interface and/or communication protocol between a client and a server, such that if the client makes a request in a predefined format, the client should receive a response in a specific format or initiate a defined action. In the cloud provider network context, APIs provide a gateway for customers to access cloud infrastructure by allowing customers to obtain data from or cause actions within the cloud provider network, enabling the development of applications that interact with resources and services hosted in the cloud provider network. APIs can also enable different services of the cloud provider network to exchange data with one another. The interface(s) may be part of, or serve as a front-end to, a control plane of the provider network100that includes “backend” services supporting and enabling the services that may be more directly offered to customers. A cloud provider network100can be formed as a number of regions, where a region is a geographical area in which the cloud provider clusters data centers. Each region includes multiple (e.g., two or more) availability zones (AZs) connected to one another via a private high-speed network, for example a fiber communication connection. An AZ (also known as an availability domain, or simply a “zone”) provides an isolated failure domain including one or more data center facilities with separate power, separate networking, and separate cooling from those in another AZ. A data center refers to a physical building or enclosure that houses and provides power and cooling to servers of the cloud provider network. Preferably, AZs within a region are positioned far enough away from one another so that a natural disaster (or other failure-inducing event) should not affect or take more than one AZ offline at the same time. Customers can connect to AZ of the cloud provider network100via a publicly accessible network (e.g., the Internet, a cellular communication network), e.g., by way of a transit center (TC). 
TCs are the primary backbone locations linking customers to the cloud provider network and may be collocated at other network provider facilities (e.g., Internet service providers (ISPs), telecommunications providers) and securely connected (e.g., via a VPN or direct connection) to the AZs. Each region can operate two or more TCs for redundancy. Regions are connected to a global network which includes private networking infrastructure (e.g., fiber connections controlled by the cloud provider) connecting each region to at least one other region. The cloud provider network may deliver content from points of presence (or “POPs”) outside of, but networked with, these regions by way of edge locations and regional edge cache servers. This compartmentalization and geographic distribution of computing hardware enables the cloud provider network to provide low-latency resource access to customers on a global scale with a high degree of fault tolerance and stability. Generally, the traffic and operations of a provider network may broadly be subdivided into two categories: control plane operations carried over a logical control plane and data plane operations carried over a logical data plane. While the data plane represents the movement of user data through the distributed computing system, the control plane represents the movement of control signals through the distributed computing system. The control plane generally includes one or more control plane components distributed across and implemented by one or more control servers. Control plane traffic generally includes administrative operations, such as system configuration and management (e.g., resource placement, hardware capacity management, diagnostic monitoring, system state information). The data plane includes customer resources that are implemented on the provider network (e.g., computing instances, containers, block storage volumes, databases, file storage). Data plane traffic generally includes non-administrative operations such as transferring customer data to and from the customer resources. The control plane components are typically implemented on a separate set of servers from the data plane servers, and control plane traffic and data plane traffic may be sent over separate/distinct networks. To provide these and other computing resource services, provider networks100often rely upon virtualization techniques. For example, virtualization technologies may be used to provide users the ability to control or utilize compute resources (e.g., a “compute instance” such as a VM using a guest operating system (O/S) that operates using a hypervisor that may or may not further operate on top of an underlying host O/S, a container that may or may not operate in a VM, a compute instance that can execute on “bare metal” hardware without an underlying hypervisor), where one or multiple compute resources can be implemented using a single electronic device. Thus, a user may directly utilize a compute resource (e.g., provided by a hardware virtualization service) hosted by the provider network to perform a variety of computing tasks. Additionally, or alternatively, a user may indirectly utilize a compute resource by submitting code to be executed by the provider network (e.g., via an on-demand code execution service), which in turn utilizes one or more compute resources to execute the code—typically without the user having any control of or knowledge of the underlying compute instance(s) involved. 
For example, in various embodiments, a “serverless” function may include code provided by a user or other entity—such as the provider network itself—that can be executed on demand. Serverless functions may be maintained within provider network100by an on-demand code execution service and may be associated with a particular user or account or be generally accessible to multiple users/accounts. A serverless function may be associated with a Uniform Resource Locator (URL), Uniform Resource Identifier (URI), or other reference, which may be used to invoke the serverless function. A serverless function may be executed by a compute resource, such as a virtual machine, container, etc., when triggered or invoked. In some embodiments, a serverless function can be invoked through an application programming interface (API) call or a specially formatted HyperText Transport Protocol (HTTP) request message. Accordingly, users can define serverless functions that can be executed on demand, without requiring the user to maintain dedicated infrastructure to execute the serverless function. Instead, the serverless functions can be executed on demand using resources maintained by the provider network100. In some embodiments, these resources may be maintained in a “ready” state (e.g., having a pre-initialized runtime environment configured to execute the serverless functions), allowing the serverless functions to be executed in near real-time. The hardware virtualization service118(referred to in various implementations as an elastic compute service, a virtual machines service, a computing cloud service, a compute engine, or a cloud compute service) can enable users of the provider network100to provision and manage compute resources such as virtual machine instances. Virtual machine technology can use one physical server to run the equivalent of many servers (each of which is called a virtual machine), for example using a hypervisor, which may run at least on an offload card of the server (e.g., a card connected via PCI or PCIe to the physical CPUs and other components of the virtualization host may be used for some virtualization management components. Such an offload card of the host can include one or more CPUs that are not available to customer instances, but rather are dedicated to instance management tasks such as virtual machine management (e.g., a hypervisor), input/output virtualization to network-attached storage volumes, local migration management tasks, instance health monitoring, and the like). Virtual machines are commonly referred to as compute instances or simply “instances.” As used herein, provisioning a virtual compute instance generally includes reserving resources (e.g., computational and memory resources) of an underlying physical compute instance for the client (e.g., from a pool of available physical compute instances and other resources), installing or launching required software (e.g., an operating system), and making the virtual compute instance available to the client for performing tasks specified by the client. In some embodiments, the provider network100includes a container service. The container service can be a container orchestration and management service (referred to in various implementations as a container service, cloud container service, container engine, or container cloud service) that allows users of the cloud provider network to instantiate and manage containers. 
In some embodiments the container service may be a Kubernetes-based container orchestration and management service (referred to in various implementations as a container service for Kubernetes, Azure Kubernetes service, IBM cloud Kubernetes service, Kubernetes engine, or container engine for Kubernetes). A container, as referred to herein, packages up code and all its dependencies so an application (also referred to as a task, pod, or cluster in various container services) can run quickly and reliably from one computing environment to another. A container image is a standalone, executable package of software that includes everything needed to run an application process: code, runtime, system tools, system libraries and settings. Container images become containers at runtime. Containers are thus an abstraction of the application layer (meaning that each container simulates a different software application process). Though each container runs isolated processes, multiple containers can share a common operating system, for example by being launched within the same virtual machine. In contrast, virtual machines are an abstraction of the hardware layer (meaning that each virtual machine simulates a physical machine that can run software). While multiple virtual machines can run on one physical machine, each virtual machine typically has its own copy of an operating system, as well as the applications and their related files, libraries, and dependencies. Some containers can be run on instances that are running a container agent, and some containers can be run on bare-metal servers, or on an offload card of a server. In some embodiments, an identity and access management service102is a service that enables users to securely control access to cloud provider network resources (e.g., resources160associated with various provider network services106, such as storage objects108associated with a storage service110, databases112associated with a database service114, compute instances116associated with a hardware virtualization service118, and the like). The identity and access management service102is used to control who is permitted to authenticate (e.g., sign in) with the cloud provider network100and who is authorized (e.g., has permissions) to use resources provided by the cloud provider network. In general, a resource is a concept used to capture the domain of items that can be created, read, modified, or deleted by customers in a cloud provider network100. Examples of resources also include principals (e.g., principals120, including example users122A-122N and roles124A-124N) and policies126(e.g., including identity-based policies128, resource-based policies130, and other policies132).FIG.1further illustrates the concept of an organization134, which can include any number of associated accounts136A-136N, which in turn can include any number of users and roles (e.g., role(s)138associated with account136B and role(s)140associated with account136N). In some embodiments, when a person initially creates an account with the cloud provider network100, the person begins with a single sign-in identity that has complete access to all cloud provider network services and resources associated with the account (e.g., a root user of principals120). For example, the root user identity may be accessed by signing in with a username (e.g., an email address) and a password used to create the account. 
Cloud provider networks100often advise users not to use a root user for most tasks and instead to create additional user accounts with defined permissions (e.g., including one or more of user accounts122A-122N). In some embodiments, a user can grant different permissions to different user accounts for different resources. For example, a user account might be configured to allow some users complete access to a hardware virtualization service118, a storage service110, and other cloud provider network100resources. For other users, a user account might allow read-only access to some storage buckets, or permission to administer some instances116, etc. In some embodiments, an account includes identity-related objects stored as part of the identity and access management service102including, for example, users122A-122N, groups (not illustrated), roles124A-124N, policies126, and the like. These resources can be added, edited, and removed by users of the cloud provider network100with sufficient privileges, e.g., using a web-based console, API, CLI, or other interface provided by the identity and access management service102. In some embodiments, a principal120represents a person or application that can make a request for an action or operation on a resource of the cloud provider network100(e.g., a resource160or a resource of the identity and access management service102). The set of principals120associated with an account136A can include any number of users122A-122N and roles124A-124N. A cloud provider network request occurs when a principal (e.g., a user or a role) sends a request for an action or operation on a resource. A request can include some or all of the following information: the action or operations that the principal wants to perform, the resource object upon which the actions or operations are performed, the person or application that used an entity (e.g., a user or role) to send the request, environment data (e.g., information about the IP address, user agent, SSL enabled status, time of day, etc.), and resource data (e.g., data related to the resource that is being requested, such as a resource identifier, or a tag name). The identity and access management service102gathers the information contained in a request into a request context, where the request context is used to evaluate and authorize the request. In some embodiments, for a request to be completed, the identity and access management service102determines whether the requesting principal is authorized (e.g., permitted) to complete the request. During authorization, the identity and access management service102uses values included in the request context to check for policies that apply to the request (e.g., one or more of policies126). The identity and access management service102uses the policies126to determine whether to allow or deny the request. In some embodiments, the policies126are stored in the identity and access management service102as JavaScript Object Notation (JSON) documents (or using any other data format) and specify the permission statements applicable to principal entities, resources, or combinations thereof. In some embodiments, there are several types of policies126that can affect whether any given request is authorized including, e.g., identity-based policies128, resource-based policies130, among other possible types of policies132. 
For example, to provide users with permissions to access resources in their own account, identity-based policies can be configured, while resource-based policies may be used for granting cross-account access to resources. In some embodiments, the identity and access management service102checks each policy that applies to the context of a request. If a single permissions policy includes a denied action, the identity and access management service102may deny the entire request. In some embodiments, an identity and access management service102denies requests by default, such that a request is authorized only if every part of a request is allowed by applicable permissions policies. In some embodiments, once a request is authenticated and authorized, the identity and access management service102approves the actions or operations in the request. Operations are defined by a service and include actions that can be performed on or relative to a resource, such as viewing, creating, editing, and deleting that resource. For example, the identity and access management service102may support actions such as CreateUser, DeleteUser, CreateRole, and AssumeRole, among many other possible actions. To allow a principal to perform an operation, the action is included in a policy that applies to the principal or the affected resource. In some embodiments, identity-based policies128are permissions policies that are attached to an identity, such as a user, group, or role in an account. In some embodiments, resource-based policies are permissions policies that are attached to a resource such as a storage object108or a role trust policy. A resource-based policy controls what actions a specified principal can perform on that resource and under what conditions. In some embodiments, the identity and access management service102further supports trust policies, which can be attached to a role (e.g., one or more of roles124A-124N). Because a role is both an identity and a resource that supports resource-based policies, in some embodiments, both a trust policy and an identity-based policy is attached to a role. Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role. In some embodiments, a role is an identity that a user creates in an account that has specific permissions. A role is similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role may not have standard long-term credentials such as a password or access keys associated with it. Instead, when an entity assumes a role, it is provided with temporary security credentials for a role session. Roles can be used to delegate access to users, applications, or services that do not normally have access to the resource. For example, a person might want to grant users in an account access to resources those users do not usually have access to or grant users in one account access to resources in another account. As indicated above, users may often desire to obtain assurance that their configured policies are configured in a way that helps protect their data and resources. In some embodiments, a policy analysis service146uses various types of automated reasoning to perform such analyses and to present policy findings to users based on the analyses. 
At a high level, automated reasoning is a method of formal verification that automatically generates and checks mathematical proofs which help to prove the correctness of systems (e.g., to analyze policies and the future consequences of policies). As indicated above, policies dictate who can (or cannot) perform particular actions relative to particular resources, and a policy analysis service146can use automated reasoning to check properties of the policies. Although some of the examples herein relate to the analysis of policies managed by an identity and access management service102, similar automated reasoning techniques can be used to analyze the correctness of source code, analyze network configurations, or generally perform any type of analysis related to various types of computing resources or computing systems. In some embodiments, to perform such analyses, a policy analysis service146translates policies into equivalent logical statements and runs a suite of general-purpose and specialized logical solvers (e.g., SMT solvers) against the problem. In general, an SMT solver uses a mix of numbers, strings, regular expressions, dates, and IP addresses, etc., to prove and disprove logical formulas. A policy analysis service146may not examine, for example, access logs to determine whether an external entity accessed a resource within your zone of trust. Rather, it can generate a finding when a resource-based policy allows access to a resource, even if the resource has not yet been accessed by any external entity. Furthermore, to perform such analyses, the service may not consider the state of any external accounts when making its determination. InFIG.1, the numbered circles labeled “1”-“6” illustrate a process of one or more users configuring accounts, principals, policies, etc., via an identity and access management service102and a policy analysis service146performing an analysis of one or more of the users' policies, as described above. In particular, the illustrated process involves distributing the processing of one or more SMT queries expressed in propositional logic with string variables across multiple computing resources (e.g., across computing device(s)150), as described in more detail herein after. In some embodiments, at circle “1” inFIG.1, one or more users associated with an organization134use electronic device(s)144to generate account and policy configuration request(s)142to configure a set of accounts136A-136N, principals120associated with an organization (e.g., an organization represented by organization134), etc., and to further configure policies126associated with some or all of those principals and resources160associated with an organization. These principals, for example, may be created to provide authentication for users and processes within accounts (e.g., account136A-136N) of the cloud provider network100. As indicated above, identities represent a user and can be authenticated and then authorized to perform actions in the cloud provider network100and each identity can be associated with one or more policies126to determine what actions a user or role can do with which cloud provider network resources and under what conditions. The collection of accounts, principals, and policies may be created, for example, by an organization that intends to use various services106of the cloud provider network100for various purposes. Furthermore, the collection of accounts, principals, and policies comprising an organization may be modified over time as desired by the organization. 
In some embodiments, at circle “2,” responsive to the account and policy configuration request(s)142, the identity and access management service102creates and stores data representing the accounts, principals, and policies. As further indicated above, these principals and policies can be added, edited, and removed by external users of the cloud provider network100with sufficient privileges, e.g., using a web-based console, API, CLI, or other interface provided by the identity and access management service102, and data representing the principals and policies can be stored using various types of storage resources managed by the identity and access management service102. Once a user or organization has created one or more policies, the user may desire to analyze the policies to obtain assurance that the configured policies are configured in a way that helps protect their data and resources (e.g., to help ensure that resources are not accessible to undesirable entities, to help ensure that users are not inadvertently permitted to perform undesirable actions, etc.). In some embodiments, at circle “3,” a user optionally requests158to perform one or more analyses on their policies, e.g., using a web-based console or other interface. In other embodiments, a policy analysis service146automatically performs one or more analyses, e.g., in response to requests to view more general information about various types of resources160associated with one or more user accounts. At circle “4,” the policy analysis service146obtains one or more policies148relevant to the requested analysis. For example, if the requested analysis involves determining whether any external entity is permitted to access one or more resources associated with a user account, the policy analysis service146may obtain one or more resource-based policies associated with the applicable resources160. In other examples, identity-based policies or other policies132may be obtained depending on the type of analysis to be performed. At circle “5,” the policy analysis service146generates an encoded version of the one or more policies148and uses one or more SMT solvers152to check one or more properties of the policies. In some embodiments, the encoded policy154is generated by translating the permission statements contained in one or more policies126(e.g., expressed in a JSON-based format or other syntax) into constraints expressed using first-order logic (e.g., expressed using the SMT-LIB format or other formal syntax). At a high level, the encoded policy includes a set of constraints that, when analyzed by a SMT solver152, generates an output indicating that the associated formula (e.g., the formulation of the properties to be checked) is satisfiable if there is an assignment of values to the variables of the constraints for which the formula is satisfied; otherwise, if no such assignment of values to the variables exists, then the formula is unsatisfiable. FIG.2is a diagram illustrating a process for encoding permissions defined by a policy into propositional logic statements including string variables according to some embodiments. InFIG.2, a resource policy (e.g., including a resource policy fragment200) is provided as input to a first-order language encoder202to generate an encoded policy (e.g., including an example encoded policy fragment204). 
In some embodiments, the first-order language encoder202implements the Satisfiability-Modulo-Theory Library and Standard (SMT-LIB) or any other syntax for formally specifying formulas related to policies of a cloud provider network100. As shown, the encoded policy includes a number of variables (e.g., including a string variable206), Boolean connectives (e.g., the Boolean connective208corresponding to the logical AND connective), which together can form various constraints (e.g., including a constraint210). In some embodiments, a collection of one or more constraints define a formula that can be passed to a solver, where the solver checks the satisfiability of the formula by determining whether a satisfying assignment for the variables exists. Although in this example, the formula includes string variables, in other examples, other types of theory variables can be included in the encoded representation depending on a type of information to be analyzed. For example, in other embodiments, an encoded representation of information to be analyzed can include variables associated with a theory of integers, a theory of real arithmetic, a theory of bit vectors, a theory of arrays, a theory of list structures, etc. In some embodiments, to distribute the processing of SMT queries expressed in first-order logic, as described above, the policy analysis service146partitions the formula defined by an encoded policy into a plurality of sub-formulas. Each of the sub-formulas can be processed by a SMT solver running on an independent computing resource (e.g., as a separate process, an independently executable thread, VM instance, container, on-demand executable function, etc.). In this manner, the plurality of sub-formulas divides the total search space for a satisfying assignment for the formula among the set of sub-formulas, the processing of which can be parallelized as described above, thereby significantly reducing the time needed to solve the formula in most cases. Some of the examples provided herein illustrate the partitioning of a formula according to string variables identified in the formula; in general, the described techniques can be used to partition formulas in a theory-based manner, e.g., depending on a type or types of theory variables contained in the SMT query. FIG.3is a diagram illustrating a process for partitioning a search space associated with a formula expressed in first-order logic and including string variables into a set of sub-formulas, the execution of which can be distributed across multiple computing resources314A-314N, according to some embodiments. InFIG.3, a coordinator300process of a policy analysis service146takes as input one or more encoded policies (e.g., illustrated by encoded policy fragments204) and causes a formula partitioner302to identify a set of predicates306used to partition the formula represented by the encoded policies into a set of sub-formulas308(e.g., including sub-formulas310A-310N). One illustrative algorithm for partitioning a formula into a set of sub-formulas308is as follows:

    def pulp(f: Formula) → Set(Formula):
        pulped = Ø
        for p ∈ predicates(f):
            pulped = pulped ∪ {f ∧ p} ∪ {f ∧ ¬p}
        return pulped

In the example above, the pulp routine takes as input a formula f (e.g., defined by the encoded policy) and returns a set of sub-formulas, represented by the set pulped. The predicates (f) routine generates predicates over strings in formula f which mention the most mentioned string variable. 
For example, in the encoded policy fragment204, the string variable csp:crn is the most mentioned string variable (e.g., it is mentioned more times than the csp:crn_prefix and csp:crn_region string variables). In this example, the predicates(f) routine splits the formula containing the string variable into a set of predicates that contain an instance of the most mentioned string variable csp:crn (e.g., “(=“arn:aws:sts::111:AAA”|aws:arn|)”, “(str.prefixof “arn:aws:sts::111:BBB/”|aws:arn|)”, “(=“arn:aws:sts::111:CCC”|aws:arn|)”, “(=“arn:aws:sts::111:DDD”|aws:arn|)”, etc.). In some embodiments, each predicate is thus a binary-valued function of at least one string variable contained in the formula. In other embodiments, other predicate generating techniques can be used, for example, by splitting the formula into predicates containing the top N most frequently occurring string variables, into predicates of approximately equal computational complexity, etc. For example, in some embodiments, predicates can be categorized based on an estimated computational complexity of reasoning about the theory variable or variables contained in each predicate. A formula may then be partitioned by grouping predicates into a plurality of predicate groups based on their estimated computational complexity (e.g., where multiple lower complexity predicates may be grouped in a single partition while higher complexity predicates may be separately partitioned, etc.). The routine further iterates through the obtained list of predicates(f) and, for each predicate, adds a sub-formula to the pulped set, where the sub-formula includes the formula with an assumption that the predicate is either True or False. In this manner, each sub-formula restricts the search space associated with the original formula to a partition of the search space where one of the predicates in predicates(f) is either True or False. Each of these sub-formulas represents an independent instance that can be analyzed by a SMT solver152, which can be distributed amongst a set of independent computing resources (e.g., separate threads of a multi-threaded execution environment, separate compute instances, containers, on-demand executable functions, etc.) each executing a SMT solver. In some embodiments, if any of the SMT solvers returns an indication that its sub-formula is satisfiable, then it can be determined that the formula is satisfiable. Otherwise, if all SMT solvers return an indication that the respective sub-formulas are unsatisfiable, then it can be determined that the formula is unsatisfiable. FIG.4is a diagram illustrating the use of a SMT solver executing across multiple computing resources to analyze a set of sub-formulas derived from a formula expressed in first-order logic and including string variables according to some embodiments. As shown, each of a plurality of computing resources314A-314N executes a SMT solver152and is assigned by the coordinator300a respective sub-formula of a set of sub-formulas308(e.g., computing resource314A is assigned sub-formula310A, computing resource314B is assigned sub-formula310B, and computing resource314N is assigned sub-formula310N). In general, the sub-formulas of the set of sub-formulas308can be distributed across the multiple computing resources in any manner, for example, depending on a number of sub-formulas and a number of available computing resources.
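A minimal sketch of this predicate-based partitioning, again using the z3 Python bindings, is shown below. The formula, the predicate list, and the ARN values are hypothetical stand-ins for the output of the predicates(f) routine, not the service's actual partitioner.

    from z3 import String, StringVal, PrefixOf, Or, And, Not

    def pulp(formula, predicates):
        # For each predicate p, keep the formula under the assumption that p is True and
        # under the assumption that p is False, partitioning the original search space.
        pulped = []
        for p in predicates:
            pulped.append(And(formula, p))
            pulped.append(And(formula, Not(p)))
        return pulped

    arn = String("aws_arn")  # assumed to be the most frequently mentioned string variable
    formula = Or(
        arn == StringVal("arn:aws:sts::111:AAA"),
        PrefixOf(StringVal("arn:aws:sts::111:BBB/"), arn),
    )
    predicates = [
        arn == StringVal("arn:aws:sts::111:AAA"),
        PrefixOf(StringVal("arn:aws:sts::111:BBB/"), arn),
    ]
    sub_formulas = pulp(formula, predicates)  # 2 * len(predicates) independent SMT instances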
In some embodiments, an example algorithm executed by a coordinator300process to determine the satisfiability of a formula f using a set of computing resources is illustrated below:

    def worker(s: Solver, g: Formula) → ():
        match s(g) with
        | SAT => sat = true
        | UNSAT => completed = completed ∪ {g}

    def distribute_solver(f: Formula, s: Solver) → {SAT, UNSAT}:
        pulped = pulp(f)
        sat = false
        completed = Ø
        totalCubes = |pulped|
        for p ∈ pulped:
            run worker(s, p)
        while |completed| < totalCubes:
            if sat == true:
                return SAT
        return UNSAT

In the example above, a worker routine is defined that takes as input a Solver s and a Formula g and records whether the sub-formula g is satisfiable (e.g., if a satisfying assignment of values to the variables of the sub-formula exists) or unsatisfiable (e.g., if it is determined that no satisfying assignment of values to the variables of the sub-formula exists). In some embodiments, each of the computing resources314A-314N includes a process that implements a routine similar to the worker routine illustrated above using a SMT solver152. In some embodiments, the distribute_solver routine takes as input a formula f and solver s and returns an indication of whether the formula is satisfiable or unsatisfiable. In particular, the distribute_solver routine generates the pulped set of sub-formulas, described above with respect toFIG.3, and distributes the sub-formulas to a plurality of separate workers (e.g., a plurality of separate computing resources314A-314N, which can include separate processes, threads, compute instances, containers, on-demand executable functions, etc.). In some embodiments, once any of the workers returns an indication that a respective sub-formula is satisfiable, the distribute_solver routine returns an indication that the formula is satisfiable. Otherwise, if none of the workers returns an indication that the formula is satisfiable (e.g., all workers return an indication that their respective sub-formulas are unsatisfiable), the routine returns an indication that the formula is unsatisfiable. In some embodiments, if an indication is received from respective workers that a sub-formula is unsatisfiable under both the assumption its predicate is true and the assumption its predicate is false, then the routine can return an indication that the formula is unsatisfiable (e.g., even before all workers return an indication that their respective sub-formulas are unsatisfiable). InFIG.4, for example, the computing resources314A-314N each process a respective sub-formula from sub-formulas310A-310N. At time406A along the total execution time410of the workers, the computing resource314N returns an indication that its sub-formula310N is unsatisfiable as unsatisfiable result400. At time406B, the computing resource314A returns a satisfiable result402indicating that its sub-formula310A is satisfiable. As indicated above, because the sub-formula310A is satisfiable, the coordinator300process can determine that the overall formula f is satisfiable. In some embodiments, the coordinator300process optionally can terminate the processing of other sub-formulas by other worker computing resources (e.g., as illustrated by the analysis termination request404sent to computing resource314B). Thus, whereas solving the formula as a whole might take at least until time406N to determine whether the formula is satisfiable, the distributed processing of the formula only takes until time406B when one of the sub-formulas is determined to be satisfiable.
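The coordinator/worker pattern above can be sketched as follows, assuming the z3 Python bindings, sub-formulas serialized as SMT-LIB text, and local worker processes standing in for the independent computing resources; in the described system each worker could instead run on a separate compute instance, container, or on-demand function.

    from concurrent.futures import ProcessPoolExecutor, as_completed
    from z3 import Solver, parse_smt2_string, sat

    def solve_one(smt2_text: str) -> bool:
        # Each worker builds its own solver from the SMT-LIB text of one sub-formula.
        solver = Solver()
        solver.add(parse_smt2_string(smt2_text))
        return solver.check() == sat

    def distribute_solver(sub_formula_texts, max_workers=8) -> str:
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            futures = [pool.submit(solve_one, text) for text in sub_formula_texts]
            for done in as_completed(futures):
                if done.result():
                    # One satisfying sub-formula suffices; cancel queued work (Python 3.9+),
                    # analogous to the analysis termination request sent to remaining workers.
                    pool.shutdown(wait=False, cancel_futures=True)
                    return "SAT"
        return "UNSAT"  # every partition of the search space was refuted

    if __name__ == "__main__":
        # The SMT-LIB texts would come from the partitioning (pulp) step; these are trivial examples.
        print(distribute_solver(["(assert (= 1 1))", "(assert (= 1 2))"]))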
Returning toFIG.1, in some embodiments, based on the analysis performed by the SMT solvers152, at circle “5,” the policy analysis service146generates policy findings156. The policy findings156can generally include any information that is obtained based on the reasoning performed relative to the one or more policies148. For example, the policy findings can include an indication that one or more resources160are accessible to one or more entities outside of a defined zone of trust, that a policy permits one or more unintended operations to be performed relative to one or more resources160, that a user can assume a role that the user is not intended to be able to assume, and the like. In some embodiments, more generally, based on the analysis performed by the SMT solvers152, the coordinator300can transmit a message indicating whether the formula is satisfiable or unsatisfiable. The transmitted message can result in the display of information associated with the result, can be sent to one or more downstream SMT solvers or other automated reasoning tools for further analysis, or can be used by any other processes. FIG.5is a diagram illustrating a graphical user interface (GUI) displaying policy findings derived from an analysis of one or more policies according to some embodiments. The GUI500, for example, illustrates a console interface displaying a list of storage resources (e.g., “example-1-resource”, “example-ab-resource”, etc.). In some embodiments, the interface further includes at least one policy finding502indicating information about a resource derived from an automated reasoning-based analysis as described above. In this example, the finding502indicates that one of the storage resources is accessible to users outside of a defined zone of trust, which may prompt a user to further analyze and modify policies associated with the resource to mitigate the unintended access to the resource. In general, such policy findings can be presented in other types of interfaces (e.g., CLIs, standalone application interfaces, etc.) and relate to other types of policy analyses, as described herein. FIG.6is a flow diagram illustrating operations600of a method for using reasoning techniques to analyze formulas expressed in a propositional logic and including string variables according to some embodiments. Some or all of the operations600(or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations600are performed by a policy analysis service146of the other figures. The operations600include, at block602, receiving, by a policy analysis service of a cloud provider network, a request to determine whether a formula is satisfiable, wherein the formula relates to an analysis of policies attached to one or more computing resources associated with a user of the cloud provider network.
The operations600further include, at block604, identifying a set of predicates in the formula based on a type of theory associated with the set of predicates, wherein each predicate of the set of predicates is a binary-valued function of at least one theory variable contained in the formula. For example, the type of theory may be at least one of: a theory of strings, a theory of integers, a theory of real arithmetic, a theory of bit vectors, a theory of arrays, a theory of list structures, etc. In some embodiments, the type of theory is a theory of strings and the theory variable is a string variable, and the operations further include identifying a most frequently occurring theory variable in the formula, and where each predicate of the set of predicates includes an instance of the most frequently occurring theory variable. The operations600further include, at block606, partitioning a search space associated with the formula into a plurality of sub-formulas, wherein each sub-formula includes the formula with an assumption that a respective predicate of the set of predicates is either true or false. The operations600further include, at block608, sending, to a SMT solver running on each of a plurality of computing resources, a respective sub-formula of the plurality of sub-formulas. In some embodiments, each computing resource of the plurality of computing resources is an independently executable thread of a plurality of threads, and where the plurality of threads executes on one or more computing devices. In some embodiments, each computing resource of the plurality of computing resources is one of: a compute instance provided by a hardware virtualization service of a cloud provider network, a container provided by a container service of the cloud provider network, or an on-demand executable function provided by an on-demand executable code service of the cloud provider network. The operations600further include, at block610, receiving, from the SMT solver running on a computing resource of the plurality of computing resources, an indication that the respective sub-formula analyzed by the SMT solver running on the computing resource is satisfiable. The operations600further include, at block612, transmitting a message indicating that the formula is satisfiable. For example, the message may be used to cause display of information indicating that the formula is satisfiable, can be sent to one or more downstream SMT solvers or other automated reasoning tools for further analysis, or can be used by any other processes. In some embodiments, the formula relates to at least one of: an analysis of policies applicable to one or more computing resources associated with a user of a cloud provider network, an analysis of correctness of a computer program, or an analysis of correctness of a computer network configuration. In some embodiments, satisfiability of the formula determines whether a computing resource associated with an account or organization defined by a cloud provider network is accessible to an entity external to the account or organization, and wherein the information indicating that the formula is satisfiable indicates that the computing resource is accessible to an entity external to the account or organization. In some embodiments, the request identifies a policy managed by an identity and access management service of a cloud provider network, and wherein the method further comprises generating the formula by encoding the policy into a first-order logic format.
In some embodiments, the computing resource of the plurality of computing resources is a first computing resource and the indication is a first indication, and wherein the first indication that the formula is satisfiable is received from the SMT solver running on the first computing resource before a second indication is received from a second computing resource of the plurality of computing resources. In some embodiments, the operations further include receiving a second request to determine whether a second formula expressed in first-order logic is satisfiable; generating a second set of predicates based on the second formula; partitioning a search space associated with the second formula into a second plurality of sub-formulas; sending, to the SMT solver running on each of a respective second plurality of computing resources, a respective sub-formula of the second plurality of sub-formulas; receiving, from each SMT solver running on a computing resource of the plurality of computing resources, an indication that the respective sub-formula is not satisfiable; and transmitting a message indicating that the second formula is unsatisfiable. In some embodiments, the operations further include receiving a second request to determine whether a second formula expressed in first-order logic is satisfiable; generating a second set of predicates based on the second formula; partitioning a search space associated with the second formula into a second plurality of sub-formulas; sending, to the SMT solver running on each of a respective second plurality of computing resources, a respective sub-formula of the second plurality of sub-formulas; receiving a first indication that a particular sub-formula of the second plurality of sub-formulas is unsatisfiable under an assumption that its respective predicate is true and a second indication that the particular sub-formula of the second plurality of sub-formulas is unsatisfiable under an assumption that its respective predicate is false; and transmitting a message indicating that the second formula is unsatisfiable. In some embodiments, the operations further include grouping the set of predicates into a plurality of predicate groups based on an estimated computational complexity associated with each predicate of the set of predicates, wherein the search space is partitioned based on the plurality of predicate groups. In some embodiments, the operations further include causing display of a graphical user interface (GUI) including the information indicating that the formula is satisfiable, wherein the information indicates a value for the at least one string variable that causes the formula to be satisfiable. In some embodiments, the message indicating that the formula is satisfiable is used as input to another SMT solver. FIG.7illustrates an example provider network (or “service provider system”) environment according to some embodiments. A provider network700may provide resource virtualization to customers via one or more virtualization services710that allow customers to purchase, rent, or otherwise obtain instances712of virtualized resources, including but not limited to computation and storage resources, implemented on devices within the provider network or networks in one or more data centers. Local Internet Protocol (IP) addresses716may be associated with the resource instances712; the local IP addresses are the internal network addresses of the resource instances712on the provider network700.
In some embodiments, the provider network700may also provide public IP addresses714and/or public IP address ranges (e.g., Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses) that customers may obtain from the provider700. Conventionally, the provider network700, via the virtualization services710, may allow a customer of the service provider (e.g., a customer that operates one or more client networks750A-750C including one or more customer device(s)752) to dynamically associate at least some public IP addresses714assigned or allocated to the customer with particular resource instances712assigned to the customer. The provider network700may also allow the customer to remap a public IP address714, previously mapped to one virtualized computing resource instance712allocated to the customer, to another virtualized computing resource instance712that is also allocated to the customer. Using the virtualized computing resource instances712and public IP addresses714provided by the service provider, a customer of the service provider such as the operator of customer network(s)750A-750C may, for example, implement customer-specific applications and present the customer's applications on an intermediate network740, such as the Internet. Other network entities720on the intermediate network740may then generate traffic to a destination public IP address714published by the customer network(s)750A-750C; the traffic is routed to the service provider data center, and at the data center is routed, via a network substrate, to the local IP address716of the virtualized computing resource instance712currently mapped to the destination public IP address714. Similarly, response traffic from the virtualized computing resource instance712may be routed via the network substrate back onto the intermediate network740to the source entity720. Local IP addresses, as used herein, refer to the internal or “private” network addresses, for example, of resource instances in a provider network. Local IP addresses can be within address blocks reserved by Internet Engineering Task Force (IETF) Request for Comments (RFC) 1918 and/or of an address format specified by IETF RFC 4193 and may be mutable within the provider network. Network traffic originating outside the provider network is not directly routed to local IP addresses; instead, the traffic uses public IP addresses that are mapped to the local IP addresses of the resource instances. The provider network may include networking devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to local IP addresses and vice versa. Public IP addresses are Internet mutable network addresses that are assigned to resource instances, either by the service provider or by the customer. Traffic routed to a public IP address is translated, for example via 1:1 NAT, and forwarded to the respective local IP address of a resource instance. Some public IP addresses may be assigned by the provider network infrastructure to particular resource instances; these public IP addresses may be referred to as standard public IP addresses, or simply standard IP addresses. In some embodiments, the mapping of a standard IP address to a local IP address of a resource instance is the default launch configuration for all resource instance types. 
At least some public IP addresses may be allocated to or obtained by customers of the provider network700; a customer may then assign their allocated public IP addresses to particular resource instances allocated to the customer. These public IP addresses may be referred to as customer public IP addresses, or simply customer IP addresses. Instead of being assigned by the provider network700to resource instances as in the case of standard IP addresses, customer IP addresses may be assigned to resource instances by the customers, for example via an API provided by the service provider. Unlike standard IP addresses, customer IP addresses are allocated to customer accounts and can be remapped to other resource instances by the respective customers as necessary or desired. A customer IP address is associated with a customer's account, not a particular resource instance, and the customer controls that IP address until the customer chooses to release it. Unlike conventional static IP addresses, customer IP addresses allow the customer to mask resource instance or availability zone failures by remapping the customer's public IP addresses to any resource instance associated with the customer's account. The customer IP addresses, for example, enable a customer to engineer around problems with the customer's resource instances or software by remapping customer IP addresses to replacement resource instances. FIG.8is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers, according to some embodiments. Hardware virtualization service820provides multiple compute resources824(e.g., compute instances825such as VMs) to customers. The compute resources824may, for example, be rented or leased to customers of the provider network800(e.g., to a customer that implements customer network850). Each computation resource824may be provided with one or more local IP addresses. Provider network800may be configured to route packets from the local IP addresses of the compute resources824to public Internet destinations, and from public Internet sources to the local IP addresses of compute resources824. Provider network800may provide a customer network850, for example coupled to intermediate network840via local network856, the ability to implement virtual computing systems892via hardware virtualization service820coupled to intermediate network840and to provider network800. In some embodiments, hardware virtualization service820may provide one or more APIs802, for example a web services interface, via which a customer network850may access functionality provided by the hardware virtualization service820, for example via a console894(e.g., a web-based application, standalone application, mobile application, etc.). In some embodiments, at the provider network800, each virtual computing system892at customer network850may correspond to a computation resource824that is leased, rented, or otherwise provided to customer network850. From an instance of a virtual computing system892and/or another customer device890(e.g., via console894), the customer may access the functionality of storage service810, for example via one or more APIs802, to access data from and store data to storage resources818A-818N of a virtual data store816(e.g., a folder or “bucket”, a virtualized volume, a database, etc.) provided by the provider network800. 
In some embodiments, a virtualized data store gateway (not shown) may be provided at the customer network850that may locally cache at least some data, for example frequently-accessed or critical data, and that may communicate with storage service810via one or more communications channels to upload new or modified data from a local cache so that the primary store of data (virtualized data store816) is maintained. In some embodiments, a user, via a virtual computing system892and/or on another customer device890, may mount and access virtual data store816volumes via storage service810acting as a storage virtualization service, and these volumes may appear to the user as local (virtualized) storage898. While not shown inFIG.8, the virtualization service(s) may also be accessed from resource instances within the provider network800via API(s)802. For example, a customer, appliance service provider, or other entity may access a virtualization service from within a respective virtual network on the provider network800via an API802to request allocation of one or more resource instances within the virtual network or within another virtual network. In some embodiments, a system that implements a portion or all of the techniques described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media, such as computer system900illustrated inFIG.9. In the illustrated embodiment, computer system900includes one or more processors910coupled to a system memory920via an input/output (I/O) interface930. Computer system900further includes a network interface940coupled to I/O interface930. WhileFIG.9shows computer system900as a single computing device, in various embodiments a computer system900may include one computing device or any number of computing devices configured to work together as a single computer system900. In various embodiments, computer system900may be a uniprocessor system including one processor910, or a multiprocessor system including several processors910(e.g., two, four, eight, or another suitable number). Processors910may be any suitable processors capable of executing instructions. For example, in various embodiments, processors910may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, ARM, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors910may commonly, but not necessarily, implement the same ISA. System memory920may store instructions and data accessible by processor(s)910. In various embodiments, system memory920may be implemented using any suitable memory technology, such as random-access memory (RAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above are shown stored within system memory920as policy analysis service code925(e.g., executable to implement, in whole or in part, the policy analysis service146) and data926. In one embodiment, I/O interface930may be configured to coordinate I/O traffic between processor910, system memory920, and any peripheral devices in the device, including network interface940or other peripheral interfaces. 
In some embodiments, I/O interface930may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory920) into a format suitable for use by another component (e.g., processor910). In some embodiments, I/O interface930may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface930may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface930, such as an interface to system memory920, may be incorporated directly into processor910. Network interface940may be configured to allow data to be exchanged between computer system900and other devices960attached to a network or networks950, such as other computer systems or devices as illustrated inFIG.1, for example. In various embodiments, network interface940may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface940may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks (SANs) such as Fibre Channel SANs, or via I/O any other suitable type of network and/or protocol. In some embodiments, a computer system900includes one or more offload cards970A or970B (including one or more processors975, and possibly including the one or more network interfaces940) that are connected using an I/O interface930(e.g., a bus implementing a version of the Peripheral Component Interconnect-Express (PCI-E) standard, or another interconnect such as a QuickPath interconnect (QPI) or UltraPath interconnect (UPI)). For example, in some embodiments the computer system900may act as a host electronic device (e.g., operating as part of a hardware virtualization service) that hosts compute resources such as compute instances, and the one or more offload cards970A or970B execute a virtualization manager that can manage compute instances that execute on the host electronic device. As an example, in some embodiments the offload card(s)970A or970B can perform compute instance management operations such as pausing and/or un-pausing compute instances, launching and/or terminating compute instances, performing memory transfer/copying operations, etc. These management operations may, in some embodiments, be performed by the offload card(s)970A or970B in coordination with a hypervisor (e.g., upon a request from a hypervisor) that is executed by the other processors910A-910N of the computer system900. However, in some embodiments the virtualization manager implemented by the offload card(s)970A or970B can accommodate requests from other entities (e.g., from compute instances themselves), and may not coordinate with (or service) any separate hypervisor. In some embodiments, system memory920may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. 
Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system900via I/O interface930. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, double data rate (DDR) SDRAM, SRAM, etc.), read only memory (ROM), etc., that may be included in some embodiments of computer system900as system memory920or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface940. Various embodiments discussed or suggested herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general-purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and/or other devices capable of communicating via a network. Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of widely-available protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), File Transfer Protocol (FTP), Universal Plug and Play (UPnP), Network File System (NFS), Common Internet File System (CIFS), Extensible Messaging and Presence Protocol (XMPP), AppleTalk, etc. The network(s) can include, for example, a local area network (LAN), a wide-area network (WAN), a virtual private network (VPN), the Internet, an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network, and any combination thereof. In embodiments utilizing a web server, the web server can run any of a variety of server or mid-tier applications, including HTTP servers, File Transfer Protocol (FTP) servers, Common Gateway Interface (CGI) servers, data servers, Java servers, business application servers, etc. The server(s) also may be capable of executing programs or scripts in response requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, PHP, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, IBM®, etc. The database servers may be relational or non-relational (e.g., “NoSQL”), distributed or non-distributed, etc. Environments disclosed herein can include a variety of data stores and other memory and storage media as discussed above. 
These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and/or at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random-access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc. Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed. Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc-Read Only Memory (CD-ROM), Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. In the preceding description, various embodiments are described. 
For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) are used herein to illustrate optional operations that add additional features to some embodiments. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments. Reference numerals with suffix letters (e.g.,818A-818N) may be used to indicate that there can be one or multiple instances of the referenced entity in various embodiments, and when there are multiple instances, each does not need to be identical but may instead share some general traits or act in common ways. Further, the particular suffixes used are not meant to imply that a particular amount of the entity exists unless specifically indicated to the contrary. Thus, two entities using the same or different suffix letters may or may not have the same number of instances in various embodiments. References to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Moreover, in the various embodiments described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B, and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims.
79,472
11861410
In the drawings, reference numbers may be reused to identify similar and/or identical elements.
DETAILED DESCRIPTION
Burst Instance Management System
FIG.1illustrates an exemplary system100, in which one or more aspects of the present disclosure may be implemented. Although the system100is presented in one arrangement inFIG.1, other embodiments may include systems arranged otherwise depending, for example, on a manner in which components of the network communicate with one another, locations and numbers of the components of the network, etc. In the illustrated embodiment, the system100generally includes a first cloud computing resource102configured to perform at least one cloud computing task108(e.g., via one or more processors and memories of the first cloud computing resource102, etc.). The system100also includes a second cloud computing resource104, and one or more data networks106connecting the first cloud computing resource102and the second cloud computing resource104. The first cloud computing resource102monitors one or more leading indicator parameters associated with operation of the first cloud computing resource102while performing the at least one cloud computing task108. In response to the one or more leading indicator parameters satisfying a first burst criteria, the first cloud computing resource102is configured to provision a task instance110on the second cloud computing resource104for performing at least one portion112of the cloud computing task108. Example pseudo code for determining when the burst criteria are met is provided below. In particular, below are example loops for checking all resources, and an example resource check function.

    // Description: checks all resources and updates allocations; loops through all resources.
    void monitorResources() {
        int resourceStatus;
        for (int i = 0; i < resourceCount; i++) {          // iterate through all the resources
            resourceStatus = resources[i].checkResource(); // check whether this resource needs a change in allocation
            if (resourceStatus == 0) {                     // need to allocate resources
                newResource = resources[i].allocResource();
                if (needToShard()) {                       // check if we need to split the task
                    shard();                               // split the task
                } else {
                    updateTask();                          // update the task with the new resources
                }
            } else if (resourceStatus == -1) {             // can deallocate resources
                resources[i].deallocResource();
            }
        }
    } // end of void monitorResources() function

    // Description: checkResource checks the status of its resource and returns whether the
    // resource's allocation needs to change.
    // Return:  1 - all is good; no allocation or deallocation is needed
    //          0 - allocation of an additional resource is needed
    //         -1 - deallocation is needed; allocation is much higher than needed
    int checkResource() {
        float currentTime;     // current time
        float changeUtil;      // change in utilization since the last check
        float changeTime;      // elapsed time since the last check
        float curResourceUtil; // current resource utilization
        float predicted;       // predicted utilization at the next check
        curResourceUtil = getResourceUtil(rType); // get the current utilization of resource type rType
                                                  // (must be implemented or connected to the system)
        currentTime = getCurrentTime();           // get the current system time
        // change in utilization since the last check (0 if no previous sample exists)
        changeUtil = (history > 0) ? curResourceUtil - utilHist[history - 1] : 0;
        changeTime = currentTime - lastTime;      // elapsed time since the last check
        utilHist[history] = curResourceUtil;      // store the current utilization for the next predictive calculation
        history++;                                // increment the number of history entries to analyze
        lastTime = currentTime;                   // remember the check time for the next prediction
        if (curResourceUtil > thresholdMax) {     // current utilization exceeds the maximum threshold
            return 0;                             // allocation of an additional resource needs to be performed
        } else if (curResourceUtil < thresholdMin) { // utilization fell below the minimum threshold
            return -1;                            // resource can be at least partially deallocated
        }
        // Check whether the maximum utilization is likely to be exceeded before the next check.
        // This simple check uses the previous and current utilization; AI, historical, or other
        // predictive algorithms could be used instead to improve performance.
        if (changeUtil <= 0) {                    // utilization is the same as or lower than the last check
            return 1;                             // no change in allocation is needed
        }
        // Estimate whether the threshold will be reached before a resource could be allocated
        // (i.e., within the time to the next check plus the allocation lead time).
        predicted = (changeUtil / changeTime * (changeTime + allocTime)) + curResourceUtil;
        if (predicted > thresholdMax) {           // predicted utilization exceeds the maximum threshold
            return 0;                             // allocate now; the resource is predicted to exceed the maximum
        }
        return 1;                                 // no allocation or deallocation is needed
    } // end of int checkResource() function

For example, the cloud computing task108may be any suitable process performed by the first cloud computing resource102, such as a graphics processing unit (GPU) task, a rendering task, a cryptographic task, a homomorphic encryption task, a blockchain task, a notarized ledger task, a distributed ledger task, a gaming task, an entertainment task, a storage task, an application specific task, an operating system task, a virtual machine task, a container task, a machine learning task, a simulation task, a data transmission task, a genomic analysis task, a data storage task, a data processing task, a communication task, a client service task, a server task, a load balancing task, any other type of computing task, etc. In various implementations, each type of task may correspond to appropriate hardware resources. For example, storage tasks may correspond to available storage devices, such as a hard disk drive (HDD), a solid-state drive (SSD), a redundant array of independent disks (RAID), memory, flash, etc. Additional examples of cloud computing resources are virtual machines and virtual environments such as VMware's ESXi, Oracle's VirtualBox, Parallels, Microsoft's Hyper-V, XenSource's XenServer, KVM (Kernel-based Virtual Machine), Nutanix, etc. The additional cloud resources may take the form of adding an additional virtual server, or resources being assigned to specific virtual servers such as memory, virtual disks, Ethernet, CPU cores, etc. This may be accomplished, for example, by allocating or deallocating resources to virtual servers in real time, or by allocating a new server, transferring the old server's work to the newly allocated server, and deallocating the old server.
The first cloud computing resource102may transfer (as shown by114inFIG.1) the at least one portion112of the cloud computing task108to the provisioned task instance110on the second cloud computing resource104, in response to the one or more leading indicator parameters satisfying a second burst criteria. Example pseudo code for transferring a task to a second resource when burst criteria are met is provided below.

    int resourceStatus;
    for (int i = 0; i < resourceCount; i++) {          // iterate through all the resources
        resourceStatus = resources[i].checkResource(); // check whether this resource needs a change in allocation
        if (resourceStatus == 0) {                     // need to allocate resources
            newResource = resources[i].allocResource();
            if (needToShard()) {                       // check if we need to split the task
                shard();                               // split the task
            } else {
                updateTask();                          // update the task with the new resources
            }
        }
    }

The second cloud computing resource104may perform the at least one portion112of the cloud computing task108(e.g., using the provisioned task instance110). AlthoughFIG.1illustrates a portion112of the cloud computing task108being transferred to the second cloud computing resource104, in other embodiments the entire cloud computing task108may be transferred to the second cloud computing resource104(e.g., with the provisioned task instance110performing the entire cloud computing task108). The first burst criteria and/or the second burst criteria may include any suitable individual criterion depending on one or more factors that may be indicative of the cloud computing task108approaching, trending toward, exceeding, etc., an operating capacity or a defined operating limit of the first cloud computing resource102. For example, if the first cloud computing resource is experiencing a graphics processing unit (GPU) processing overload, a just-in-time or near-in-time offload may be performed to deal with the capacity issues. Each criterion may include a specified threshold, range, metric, parameter, etc., which may be compared with measured internal features of the system100, measured external features of the system100, etc. In various implementations, the defined operating limit may be defined by a contract that must be fulfilled. For example, an entitlement may be applied such as an amount of data per unit time that must be maintained. The first burst criteria and/or the second burst criteria may include any suitable criterion. The leading indicator parameter(s) may include any suitable metrics, such as a bandwidth metric, a latency, a GPU processing metric, a processor metric, a storage metric, etc. The leading indicator parameter(s) may include a multi-dimensional set of leading indicator parameters, where the set of parameters is indicative of an expected subsequent burst within a specified time period when the set of parameters satisfies the first and/or second burst criteria. The leading indicator values may be compared against the burst criteria, where the indicator values could include attribute values, parameter values, the existence of a parameter, etc. For example, a leading indicator CPU usage percentage may be compared against criteria of, e.g., a 95% utilization to trigger provisioning of a resource. A latency of greater than a specified number of milliseconds may trigger provisioning of a resource, etc. Preferably, although not necessarily required, leading indicators may include parameters or metrics indicating that resources will be needed at a future time, but the need has not yet arisen.
For example, if a bandwidth drops below a threshold value (where the bandwidth threshold value is one possible criterion of the burst criteria), if a latency increases above a threshold value, if a central processing unit (CPU) or GPU utilization increases above a specified threshold (or any combination of these or other suitable leading indicator parameters), there may be an indication that an expected burst is about to occur in a predicted amount of time (e.g., within the next seconds, minutes, etc.). For example, there may be an indication that the first cloud computing resource102will likely experience a burst of task demand that exceeds, is predicted to exceed, etc., an operating capacity of the first cloud computing resource102or impairs performance of the first cloud computing resource102. As mentioned above, any suitable metrics of the system may be measured, such as a bandwidth metric, a latency, a GPU processing metric, a processor metric, a storage metric, etc., and these metrics may be measured using any suitable computer components, processes, etc. In some implementations, a multi-dimensional set of metrics may be measured, where the set of metrics is indicative of an expected subsequent burst within a specified time period when the set of parameters satisfies the first and/or second burst criteria. In various implementations, the system100may also monitor for leading indicators of downtime, in order to release resources as appropriate and avoid additional costs. For example, the system100may monitor leading indicators that suggest a burst is about to end, or has ended, and then release the provisioned task instance110. The leading indicators for determining when a burst is ending may be the same as or different from the leading indicators used to determine when a burst is about to occur. The type of leading indicators used to predict a burst (or the end of a burst) may depend on a workload definition. The workload definition may specify important performance characteristics that correspond to the cloud computing task instance108. For example, different cloud computing task instances may use CPUs, GPUs, TPUs, ASIC circuits, FPGAs, etc., and the workload definition may specify operating parameters of the components specifically used by the cloud computing task instance108that are included as leading indicators. As explained further below, in some implementations the system100may use historical data, such as vector-based anomaly detection, in order to determine which parameters are the most useful leading indicators based on past performance. The type of GPU used by a task instance108may be important where one GPU is less expensive (although less powerful) than another for performing the same work. In that case, the system100may specify the workload definition to include the type of GPU needed by the task instance108, and look for that type of GPU when searching additional cloud computing resources for provisioning additional task instances in the event of a burst. Different types of GPUs may be needed for different types of tasks, such as rendering tasks, ray tracing tasks, machine learning tasks, modeling tasks, simulation tasks, hashing tasks, blockchain or distributed ledger tasks, etc.
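A minimal sketch of evaluating leading indicator parameters against first burst criteria is shown below; the metric names, thresholds, and linear extrapolation are illustrative assumptions (mirroring the simple prediction in the checkResource() example above), not values prescribed by the system.

    from dataclasses import dataclass

    @dataclass
    class LeadingIndicators:
        cpu_util_pct: float
        latency_ms: float
        bandwidth_mbps: float

    def first_burst_criteria_met(prev: LeadingIndicators, curr: LeadingIndicators,
                                 elapsed_s: float, provision_lead_s: float) -> bool:
        # Hard thresholds: a metric already out of bounds indicates an imminent burst.
        if curr.cpu_util_pct > 95.0 or curr.latency_ms > 200.0 or curr.bandwidth_mbps < 50.0:
            return True
        # Trend check: will CPU utilization cross the threshold before a new task instance
        # could be provisioned? (Linear extrapolation of the change since the last sample.)
        rate = (curr.cpu_util_pct - prev.cpu_util_pct) / max(elapsed_s, 1e-6)
        return curr.cpu_util_pct + rate * provision_lead_s > 95.0

    # Example: 70% -> 85% CPU over 60 s with a 120 s provisioning lead time predicts ~115%,
    # so the first burst criteria are satisfied and pre-provisioning can begin.
    print(first_burst_criteria_met(LeadingIndicators(70, 40, 900),
                                   LeadingIndicators(85, 45, 870), 60.0, 120.0))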
Provisioning the task instance110on the second cloud computing resource104may include allocating one or more components of the second cloud computing resource104, without actually performing the at least one portion112of the cloud computing task108with the provisioned task instance110on the second cloud computing resource104until the one or more leading indicator parameters satisfy the second burst criteria. For example, when the first burst criteria are satisfied, resources may be pre-provisioned at a remote facility (e.g., the second cloud computing resource104). Pre-provisioning may not incur costs, or may incur reduced costs, while the first cloud computing resource102continues monitoring the leading indicator(s) to determine whether a transfer to the second cloud computing resource104is needed or desired. Once the second burst criteria are satisfied, the provisioned task instance110may be fully utilized. For example, the second burst criteria may be indicative that a cost of performing the at least one portion112of the cloud computing task108using the provisioned task instance110on the second cloud computing resource104is lower than a cost associated with performing the at least one portion112of the cloud computing task108using the first cloud computing resource102. The cost differences may be determined by comparing a cost of losing or dropping performance of the portion112of the cloud computing task108, versus a cost to provision the new task instance110. For example, if the cost of losing performance would be ten dollars and the cost of provisioning the new task instance is nine dollars, it would be cost effective to provision the new task instance110and the second burst criteria may be satisfied. The first burst criteria and the second burst criteria may be modifiable according to a peak demand schedule of the second cloud computing resource104. It should be appreciated that techniques disclosed herein may give rise to supporting a for-fee service for managing cloud computing resources based on “bursting”. Thus, subject matter described herein may include Bursting as a Service (BaaS), where bursting functionality, capabilities, etc., can be purchased according to a fee schedule. For example, when there are multiple cloud instances from different paying clients running on a hardware system, the clients can purchase, bid on, etc., bursting priorities. Such an approach is advantageous because it permits the cloud vendor to prioritize which clients are able to burst into the limited hardware resources. In such embodiments, the concept of a “burst” can be quantified according to various attributes or parameters, including but not limited to hardware allocations, software provisioning, memory utilization, etc. Such attributes can be used to quantify a cost of a burst, insurance, etc., that will be guaranteed to a client. More specifically, a burst can be considered as a manageable object in the cloud computing space, where the manageable object is an object in a computer science sense. Burst objects can be instantiated, constructed, deconstructed, indexed, monitored, logged, inventoried, or otherwise managed, etc. As shown inFIG.1, the system100may include a BaaS user interface (UI) as part of a user device124for displaying one or more values associated with burst instance management in the system100.
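The cost comparison and pre-provisioning behavior described above can be sketched as follows; the provisioning client, cost figures, and method names are hypothetical and are not APIs defined by the described system.

    def second_burst_criteria_met(cost_of_lost_performance: float,
                                  cost_of_provisioned_instance: float) -> bool:
        # e.g., losing performance would cost $10 while the remote instance costs $9,
        # so transferring the task portion is cost effective.
        return cost_of_lost_performance > cost_of_provisioned_instance

    class BurstManager:
        def __init__(self, remote_resource):
            self.remote = remote_resource   # client for the second cloud computing resource
            self.instance = None            # pre-provisioned task instance, if any

        def step(self, first_met: bool, second_met: bool, task_portion) -> None:
            if first_met and self.instance is None:
                # Allocate components remotely without running the task yet (pre-provisioning),
                # which may incur no cost or a reduced cost.
                self.instance = self.remote.provision()
            if second_met and self.instance is not None:
                # The cost of dropped or degraded work now exceeds the instance cost: transfer.
                self.instance.run(task_portion)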
For example, the user interface may display one or more expected performance values for performing the at least one portion112of the cloud computing task108using the first cloud computing resource102, one or more expected performance values for performing the at least one portion112of the cloud computing task108using the provisioned task instance110on the second cloud computing resource104, a comparison of performing the at least one portion112of the cloud computing task108using the first cloud computing resource102and performing the at least one portion112of the cloud computing task108using the second cloud computing resource104, etc. In various implementations, application programming interfaces (APIs) may be used to obtain cost estimates from cloud computing resources, such as a snapshot of current instance costs, or an expected cost to reserve an instance for a specified future time period. The system100may purchase reserve instances that give rise to secondary markets for licensing the reserve instances to others. For example, the user device124may use APIs to obtain expected cost values from the first cloud computing resource102, the second cloud computing resource104, and the third cloud computing resource116, for future time periods. If the expected cost values satisfy cost reservation criteria (such as the cost being less than a specified cost threshold or less than an average cost by a specified percentage), the user device124may be used to reserve instances on the first cloud computing resource102, the second cloud computing resource104, and the third cloud computing resource116. After the instances have been reserved, the system100may allow for auctioning off of reserve instances to other systems or users. For example, the system100may operate an auction service where other systems or users can bid on available instances that have been reserved by the system100on the first cloud computing resource102, the second cloud computing resource104, and the third cloud computing resource116. The reserve instances may be licensed to the other system or users at a higher price than the initial reservation cost by the system100. In various implementations, the system100may selectively determine which reserved instances to provide for auction to other users, and which reserved instances to hold for use by the system100. The determination may be made based on specified reservation usage criteria, which may include tradeoffs between the purchase price for the reserved instances, the latency for the reserved instances, other definitions of workloads for cloud computing tasks, etc. The user device124may provide a monitoring tool for observing the leading indicator parameters (e.g., in real-time). One example monitoring tool is OpenNMS (See URL www.opennms.com), which may provide an event-driven architecture that allows flexible workflow integration in monitoring and management stacks. The monitoring tool may normalize device-specific messages, vendor-specific messages, protocol-specific performance measurements, etc. Returning to the concept of managing bursts as manageable objects, each type of burst object can be assigned a type indicator (e.g., GUID, UUID, MIB value, etc.), which can then be integrated into the monitoring tool. The monitoring tools, such as OpenNMS, can then treat the burst object in a similar manner as any other device or object in a network system. 
Thus, the monitoring tool can provide management services to the user (e.g., notification, alarm forwarding, ticketing information, billing, etc.). Data from the OpenNMS network management solution may be fed into bursting features, for use as leading indicator parameters. For example, some or all of the network parameters that are monitored by the OpenNMS system may be automatically supplied to the system100to enhance the ability to detect burst objects, by providing a more efficient and robust leading indicator set. The user device124may provide one or more dashboards for displaying relevant leading indicator values, for selecting which monitored parameters of the OpenNMS system will be used as leading indicators, etc. The parameters, metrics, or other information related to management of burst behaviors may be managed via a management information base (MIB). Examples of techniques that may be adapted for mapping burst behaviors to MIBs or other network-like management structures are described in U.S. Pat. No. 10,594,549 to Soon-Shiong et al., titled “Fine Grained Network Management to Edge Devices Features”, issued Mar. 17, 2020. This patent is hereby incorporated by reference in its entirety. The user device124may send alerts to on-call system engineers using a variety of notification strategies, may be used to determine when to transfer performance of the cloud computing task108to the second cloud computing resource104, may be used to modify the first and second burst criteria based on the leading indicator parameters, etc. The monitoring tool of the user device124may extend the platform by using a native Java API or running scripts on an underlying operating system. For example, e-mails may be generated with an SMTP/s protocol, events may be monitored through SNMP (See U.S. Pat. No. 10,594,549 referenced above), etc. The cloud computing task108may be any suitable process performed by the first cloud computing resource102, such as a graphics processing unit (GPU) task, a rendering task, a cryptographic task, a blockchain task, a gaming task, an entertainment task, a storage task, an application specific task, an operating system task, a virtual machine task, a machine learning task, a simulation task, a data transmission task, a genomic analysis task, a data storage task, a data processing task, a communication task, a client service task, a server task, a load balancing task, another type of computing task, etc. Provisioning of resources may depend on the type of resources that correspond to the type of the cloud computing task108. For example, different cloud computing tasks108may require different resources, such as tensor processing units (TPUs), hard disk drives (HDDs), graphics processing units (GPUs), central processing units (CPUs), various types of memories, specific software applications or agents, etc. When additional task instances are provisioned, the system100may first identify a type of the task instance, and then search for the best cloud computing resources that offer the specific resources needed by the identified type of task instance. As mentioned above, any suitable type of cloud computing task108may be implemented by the system100. For example, in a gaming application the user may desire to have a lowest latency connection to the game. In that case, the system100may search for cloud computing resources that offer low latency for the user as a highest priority. This may be particularly important where the cloud computing task108includes a competition, such as eSports provisioning.
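As a non-limiting illustration of matching a task instance to cloud computing resources that offer the specific components it needs, the following Python sketch scores candidate resources against a simple workload definition and prefers the lowest-latency eligible resource (e.g., for a gaming or eSports task); the resource catalog, the workload fields, and the selection rule are illustrative assumptions only.

# Minimal sketch: selecting a cloud computing resource that offers the components a task needs,
# optionally prioritizing low latency.
CANDIDATE_RESOURCES = {
    "resource_a": {"components": {"gpu", "cpu"}, "latency_ms": 80.0},
    "resource_b": {"components": {"gpu", "cpu", "tpu"}, "latency_ms": 25.0},
    "resource_c": {"components": {"cpu"}, "latency_ms": 10.0},
}

def select_resource(workload: dict):
    """Pick the lowest-latency resource that offers every component type the workload requires."""
    required = set(workload["required_components"])
    eligible = {name: info for name, info in CANDIDATE_RESOURCES.items()
                if required <= info["components"]}
    if not eligible:
        return None
    return min(eligible, key=lambda name: eligible[name]["latency_ms"])

# A latency-sensitive gaming workload that needs a GPU.
print(select_resource({"required_components": ["gpu"], "latency_sensitive": True}))  # resource_b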
In various implementations, a user may desire to use Kubernetes for machine learning applications. The system100may instantiate Kubernetes for developers to work on machine learning applications, and provision additional Kubernetes containers on other cloud computing resources when burst instances occur. In addition to determining when to instantiate a new instance on another cloud computing resource, the system100may also monitor the additional cloud computing resource to determine when it is no longer in use. For example, if the provisioned resource usage declines or stops altogether, the system100may release or deallocate the additional resource to reduce costs. The system100may allow for monitoring of activities by the cloud computing task instance108and/or the provisioned task instance110, to determine whether the cloud computing task instance108and/or the provisioned task instance110are causing conflicts with other system applications. For example, the system100may have various teams or applications that work simultaneously, and the system100may continuously monitor the provisioning of instances across cloud computing resources for the various teams or applications to make sure they do not conflict with one another. Teams or applications may create new leading indicators for other cloud computing task instances that belong to the same system100, and the system100may balance between the instances of the different teams or applications to avoid incurring extra costs for unnecessary bursting. For example, the system100may manage a quality of service (QoS) among multiple different workloads that belong to the system100, in order to avoid burst situations. The system100may determine which instances have priority over other instances, according to the type of processes being performed by each instance. For example, urgent tasks such as live-streaming may take priority over less time-sensitive tasks performed by other applications of the same system100(such as post-production processing or machine learning processing). In some embodiments, the first cloud computing resource102may be located geographically remote from the second cloud computing resource104. Each cloud computing resource may include any suitable devices, services, implementations, etc. for performing cloud computing tasks. For example, each cloud computing resource may include an AMAZON WEB SERVICES (AWS) resource, a GOOGLE CLOUD resource, a proprietary cloud resource, a MICROSOFT AZURE resource, etc. Thus, in some embodiments, the provisioning capability can be performed based on differences in operating parameters among a heterogeneous mix of cloud computing resources. For example, if a burst provisioning must take place from AWS to a different provider, the burst provisioning can depend on factors of the target cloud system, such as cost, incurred latency, available resources, bandwidth, geo-location, power costs, other factors, etc. In various implementations, the system100may manage a chain of bursts between multiple cloud computing resources. For example, the first cloud computing resource102may transfer the cloud computing task portion112to the provisioned task instance110of the second cloud computing resource104as a burst occurs at the first cloud computing resource102.
If resources run low at the second cloud computing resource104(e.g., an incoming burst is determined at the second cloud computing resource104), the provisioned task instance110may be transferred to the additional provisioned task instance118at the third cloud computing resource116. The transfer of the cloud computing task portion from the second cloud computing resource104to the third cloud computing resource116may be controlled by the first cloud computing resource102(or a chain management module of the system100). In this manner, the first cloud computing resource102may manage a chain of bursts from one cloud computing resource to the next, and so on. AlthoughFIG.1illustrates three cloud computing resources, other embodiments may include chain management of more than three cloud computing resources. In some embodiments, the provisioned task instance110may be monitored, and may replicate a state of the first cloud computing resource102, the cloud computing task108, etc. The provisioned instance110may be a stateful instance or a stateless instance. The provisioned task instance110may be monitored and controlled to avoid bottlenecks, to implement an input buffer, for parity checking, etc. For example, a genomic analysis workflow may require provisioning of a stateful instance, in order to maintain information across multiple states of the genomic analysis. A scheduled gaming event may use a stateful instance to obtain information about players involved in the scheduled gaming event. A stateless instance example may include rendering a frame, where the rendering process does not require information about any other frames, etc. The provisioned task instance110may be selected according to whether the task108needs to be able to access other state information. The system100may include additional cloud computing resources, such as the third cloud computing resource116illustrated inFIG.1. The first cloud computing resource102may be configured to provision another task instance118on the third cloud computing resource116for performing at least one other portion120of the cloud computing task108, in response to the one or more leading indicator parameters satisfying the first burst criteria. The first cloud computing resource102may be configured to provision different portions (e.g., portions112and120) of the cloud computing task108to the second cloud computing resource104and to the third cloud computing resource116, according to defined performance capabilities of the second cloud computing resource104and to the third cloud computing resource116that correspond to a defined property of each portion (e.g., portions112and120) of the cloud computing task108. For example, a portion of the cloud computing task108that is primarily directed to large data storage sizes may be provisioned to a cloud computing resource that offers cheaper data storage costs or greater data storage availability. Similarly, a portion of the cloud computing task108that is primarily directed to GPU processing may be provisioned to a cloud computing resource that offers cheaper GPU processing costs or faster GPU processing capabilities. In various implementations, a time to store or retrieve data may be used as a parameter. For example, tape storage may have a large volume while providing slow read and write times, especially in scenarios where tapes are retrieved via robots, then placed in a tape reader. 
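By way of a non-limiting illustration of provisioning different portions of a task according to defined performance capabilities of each resource, the following Python sketch places each portion on the resource that is cheapest (or fastest) for that portion's dominant need, including a storage read/write-speed dimension; the capability table and portion profiles are illustrative assumptions only.

# Minimal sketch: matching task portions to resources by a defined property of each portion.
RESOURCE_CAPABILITIES = {
    "second_resource": {"storage_cost_per_gb": 0.02, "gpu_cost_per_hour": 2.50, "write_mb_per_s": 80},
    "third_resource":  {"storage_cost_per_gb": 0.05, "gpu_cost_per_hour": 1.10, "write_mb_per_s": 400},
}

def place_portion(profile: str) -> str:
    """Choose a resource for a portion based on its dominant requirement."""
    if profile == "storage_heavy":      # large, non-urgent data: cheapest storage wins
        key, best = "storage_cost_per_gb", min
    elif profile == "gpu_heavy":        # GPU processing: cheapest GPU time wins
        key, best = "gpu_cost_per_hour", min
    else:                               # time-sensitive reads/writes: fastest storage wins
        key, best = "write_mb_per_s", max
    return best(RESOURCE_CAPABILITIES, key=lambda name: RESOURCE_CAPABILITIES[name][key])

print(place_portion("storage_heavy"))    # second_resource (cheaper per GB)
print(place_portion("gpu_heavy"))        # third_resource (cheaper GPU time)
print(place_portion("write_sensitive"))  # third_resource (faster writes)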
Cloud computing tasks may be provisioned according to the importance of how quickly data for the task should be stored or retrieved, where time-sensitive read and write tasks are provisioned to resources having faster memory read and write times. The system100may be used to implement blockchain related tasks. For example, the first cloud computing resource102may determine other cloud computing resources that are optimized for energy requirements to perform blockchain tasks, distributed ledger related tasks, etc. Such an approach may provide multiple technical advantages. From the perspective of monitoring the system, bursting activities (e.g., defined bursts, predicted bursts, actual bursts, provisional bursts, trending bursts, etc.) can be recorded to a ledger for tracking purposes. Such a ledger provides an authoritative source of data that can be leveraged for billing practices as well as for data analytics. This can be achieved by storing the events in “notarized” fashion on a ledger (e.g., blockchain, hash graph, distributed ledger, notarized ledger, etc.). For example, events can be compiled into a data block. The block can be linked to other previously recorded blocks by taking a hash (e.g., SHA256, scrypt, etc.) of the current block together with the hash value of the previous block, as well as any other desirable metadata. The burst data can be recorded in a private ledger, public ledger (e.g., IOTA, TRON, Ethereum, BitCoin, Dogecoin, etc.), semi-private ledger, etc., which may be formed among for-fee participants. In various implementations, each burst may be quantified as a block of data, and then archived in a notarized ledger based on a previous burst. This enables creation of an immutable log of burst activity which can be accessed later, such as for forensic purposes. In various implementations, the system100may experience different types of bursts. These different types of bursts may be managed as a group, or individually. A burst blockchain block may include various information that facilitates management of individual or group bursts, such as a time of the burst, resources required by the type of burst, an owner of the computing task instance, actual data associated with the computing task instance, etc. The system100may provision tasks for each type of burst according to the data associated with the burst blockchain block. Different available cloud computing devices may be monitored according to radio frequency identification (RFID). For example, the user interface may generate a virtual representation visible to a user of the available cloud computing resources. The user interface may include a user control to enable the user to interact with the cloud computing resources. Radio frequency identification may be used to locate and track the cloud computing resources, and select a subset of the cloud computing resources to display and control based on filter parameters (e.g., leading indicator parameters, status parameters of the different cloud computing resources, etc.). One should appreciate that the cloud resources may include any suitable types of real physical resources, virtual resources, etc. For example, physical cloud resources may include ports, cables, racks, servers, processors, memory (e.g., RAM, Flash, etc.), storage (e.g., disks, HDD, SSD, etc.), other types of physical resources, etc.
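Returning to the ledger-based recording of burst events described above, the following Python sketch compiles burst events into hash-linked blocks; the block fields are illustrative assumptions, and a production system might instead commit the blocks to a public, private, or semi-private distributed ledger.

# Minimal sketch: recording burst events as hash-linked ("notarized") blocks.
import hashlib
import json
import time

ledger = []

def record_burst_block(events: list, previous_hash: str) -> dict:
    block = {
        "timestamp": time.time(),
        "events": events,                 # e.g., predicted, actual, or trending bursts
        "previous_hash": previous_hash,   # links this block to the previously recorded block
    }
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(block)
    return block

genesis = record_burst_block(
    [{"type": "predicted_burst", "owner": "task_instance_108", "resources": "gpu"}],
    previous_hash="0" * 64,
)
record_burst_block([{"type": "actual_burst", "resources": "gpu"}], previous_hash=genesis["hash"])
print(len(ledger), "blocks recorded; an immutable audit trail of burst activity.")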
In addition to the physical resources noted above, virtual resources may include resources that represent abstractions and can include computing power, bandwidth, latency, virtual machines, object instances, other types of virtual items, etc. Example techniques that can be adapted for mapping between physical resources and virtual resources are described in U.S. Pat. No. 10,346,654 to Hochhalter et al., titled “RFID-Based Rack Inventory Management System”, issued on Jul. 9, 2019, which is herein incorporated by reference in its entirety. For example, when a burst is about to occur, the burst can be digitally linked, bound, etc., to the physical resources via RFID tags. The approach provides an advantage that administrators can monitor where, how, etc., the burst impacts real-world systems, down to a fine level of granularity. Vector-based anomaly detection may be used to monitor the status of data transmission speeds for different cloud computing resources, to monitor access speed of different cloud computing resources, etc. For example, a network fabric (e.g., the network(s)106) can include many fungible networking nodes (e.g., cloud computing resources). Example vector-based anomaly detection techniques include those disclosed in U.S. Pat. No. 10,218,732 to Wittenschlaeger, titled “Vector-Based Anomaly Detection”, issued on Feb. 26, 2019, which is herein incorporated by reference. A nominal behavior can be established for the fabric and represented by a baseline vector of behavior metrics. Anomaly detection criteria can be derived as a function of a variation from the baseline vector based on measured vectors of behavior metrics. Nodes in the fabric can provide a status for one or more anomaly criterion, which can be aggregated to determine if an anomalous behavior has occurred, is occurring, or is about to occur. Further to this end, rather than regarding the behaviors as anomalous, observed behaviors may be considered as leading indicators. An adaptation from U.S. Pat. No. 10,218,732 includes defining one or more leading indicator thresholds around defined behavior metrics, vectors, etc. For example, a specific vector might include a first threshold indicating that a corresponding behavior exceeds nominal behavior by 10%, a second threshold indicating that the corresponding behavior is between 10% and 50% above nominal behavior, any other type of threshold, etc. Thus, satisfaction criteria can also include conditions that depend on such vector-based leading indicator thresholds. In various implementations, network resources may be provisioned according to the leading indicators and burst criteria. Another example of a leading indicator that can be coupled with a vector-based leading indicator includes leading indicators that represent a rate of change over time (e.g., first order derivative, second order derivative, third order derivative, fourth order derivative, etc.). Such rate-based leading indicators may provide for predicting if, when, etc., a burst might occur. In some embodiments, the rate-based indicators may be related to a multi-dimensional vector as described above. Even though a current vector might not satisfy an “anomaly” criteria, its rate of change in one or more dimensions might indicate that it could satisfy a burst criteria in a certain amount of time. Thus, a new instance of the computing task can be created in anticipation of the new instance being needed.
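As a non-limiting illustration of the vector-based leading indicator thresholds and rate-based prediction described above, the following Python sketch measures the deviation of a measured vector of behavior metrics from a baseline vector, and extrapolates a metric's rate of change to estimate when a burst threshold might be crossed; the metric names, baseline values, threshold bands, and sample data are illustrative assumptions only.

# Minimal sketch: vector-based leading indicator thresholds and a rate-based prediction.
BASELINE = {"packets_per_s": 10_000.0, "latency_ms": 20.0, "cpu_util": 0.50}

def classify_deviation(measured: dict) -> str:
    """Classify the largest relative deviation of the measured vector from the baseline vector."""
    deviation = max(abs(measured[k] - BASELINE[k]) / BASELINE[k] for k in BASELINE)
    if deviation <= 0.10:
        return "nominal"
    if deviation <= 0.50:
        return "leading_indicator"        # between 10% and 50% above nominal behavior
    return "burst_criteria_candidate"

def seconds_until_threshold(samples, threshold):
    """Extrapolate a first-order rate of change from the last two (time_s, value) samples."""
    (t0, v0), (t1, v1) = samples[-2], samples[-1]
    rate = (v1 - v0) / (t1 - t0)
    if v1 >= threshold:
        return 0.0
    if rate <= 0:
        return None                        # metric is flat or falling; no burst predicted
    return (threshold - v1) / rate

print(classify_deviation({"packets_per_s": 11_500.0, "latency_ms": 22.0, "cpu_util": 0.52}))
print(seconds_until_threshold([(0.0, 0.60), (60.0, 0.70)], threshold=0.90))  # roughly 120 seconds

A predicted time-to-threshold such as the one above could be compared against the provisioning lead time to decide whether to create a new task instance in anticipation of the burst.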
While rates of change with respect to time represent one possible set of higher order derivatives, other non-time-based derivatives may be used in various implementations, where a first parameter varies with a second parameter. For example, storage costs per unit of storage may vary as a function of a geo-location or a facility. Looking at historical data may allow for determining which parameters of a vector of metrics are most useful in predicting burst occurrences. For example, historical vectors of metrics may be used to determine characteristics of instantiation of a provisioned task instance. The system100may determine parameters that were needed for a cloud computing task108(e.g., according to a vector of metrics for the historical cloud computing task), and then assign the parameters to the provisioned task instance110. The first cloud computing resource102may be configured to transfer (as shown by122inFIG.1) the additional portion120of the cloud computing task108to the additional provisioned task instance118on the third cloud computing resource116, in response to the one or more leading indicator parameters satisfying the second burst criteria. AlthoughFIG.1illustrates three cloud computing resources102,104and116, other embodiments may include more or fewer cloud computing resources, more or fewer provisioned task instances, etc. In some embodiments, multiple cloud computing resources may be used to implement application striping for the cloud computing task108. For example, interconnected networking nodes (e.g., cloud computing resources) may offer available computing resources from a network fabric (e.g., via the one or more networks106). The computing resources can be allocated from the networking nodes, including available processing cores or memory elements located on the networking nodes. A software application can be stored in a system memory including memory elements allocated from the nodes. The software application (e.g., cloud computing task108) can be disaggregated into multiple executable portions (e.g., the portions112and120) that are striped across the allocated processing cores by assigning each core a portion to execute. When the cores are authenticated with respect to their portions, the cores are allowed to execute the portions by accessing the system memory over the fabric. While executing the software application, the networking nodes having the allocated cores concurrently forward packets through the fabric. Example techniques for application striping can be found in U.S. Pat. No. 8,364,744 to Wittenschlaeger, titled “Software Application Striping”, issued on Jan. 29, 2013, which is herein incorporated by reference. The concepts disclosed in U.S. Pat. No. 8,364,744 can be adapted to the disclosed approach by allocating resources when the burst criteria are satisfied. The system100may include network infrastructure and/or a peering relationship between the first cloud computing resource102and the second cloud computing resource104that reduces or eliminates the cost of bandwidth for transferring data from the first cloud computing resource102to the second cloud computing resource104. This may allow bursts to be transferred to MICROSOFT AZURE cloud resources, GOOGLE cloud resources, AMAZON WEB SERVICES (AWS) cloud resources, etc., during peak demand periods on a proprietary cloud network, while permitting use of the proprietary cloud network servers during off-peak times.
Some cloud resources may have very cheap off-peak spot instances, which can be resold and brokered along with the proprietary cloud network. This may allow a client to implement a cloud computing task108while being agnostic as to which cloud computing resource is processing the cloud computing task108. Cloud Computing Resource As mentioned above,FIG.2illustrates a block diagram of an example cloud computing resource102of the system100ofFIG.1. The resource102includes a processor216(which may be referred to as a central processing unit or CPU) that is in communication with memory214including optional read only memory (ROM)218and optional random access memory (RAM)220, and optional secondary storage222(such as disk drives, solid state drives, etc.). The processor216may be implemented as one or more CPU chips. The resource102further includes optional input/output (I/O) devices224, and network connectivity devices (e.g., a communication interface)212. The secondary storage222typically includes one or more disk drives or tape drives. The secondary storage222may be used for non-volatile storage of data and as an over-flow data storage device if RAM220is not large enough to hold all working data. The secondary storage222may be used to store programs which are loaded into RAM220when such programs are selected for execution. In this embodiment, the secondary storage222has a processing component222aincluding non-transitory instructions operative by the processor216to perform various operations of the methods of the present disclosure. The ROM218is used to store instructions and perhaps data which are read during program execution. The secondary storage222, the memory214, the RAM220, and/or the ROM218may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media. The optional I/O devices224may include printers, video monitors, liquid crystal displays (LCDs), plasma displays, touch screen displays, keyboards, keypads, switches, dials, mice, track balls, voice recognizers, card readers, paper tape readers, or other suitable input/output devices. The network connectivity devices212may take the form of modems, modem banks, Ethernet cards, universal serial bus (USB) interface cards, serial interfaces, token ring cards, fiber distributed data interface (FDDI) cards, wireless local area network (WLAN) cards, radio transceiver cards, etc. The devices212may promote radio communications using protocols, such as code division multiple access (CDMA), global system for mobile communications (GSM), long-term evolution (LTE), worldwide interoperability for microwave access (WiMAX), near field communications (NFC), radio frequency identification (RFID), and/or other air interface protocol radio transceiver cards, and other suitable network devices. These network connectivity devices212may enable the processor216to communicate with the Internet and/or one or more intranets. With such a network connection, it is contemplated that the processor216might receive information from the network, might output information to the network in the course of performing the above-described method operations, etc. Such information, which is often represented as a sequence of instructions to be executed using processor216, may be received from and outputted to the network, for example, in the form of a computer data signal embodied in a carrier wave.
The processor216executes instructions, codes, computer programs, scripts which it accesses from hard disk, floppy disk, optical disk (these various disk based systems may all be considered secondary storage222), flash drive, memory214, ROM218, RAM220, the network connectivity devices212, etc. While only one processor216is shown, multiple processors may be present. Thus, while instructions may be discussed as executed by a processor, the instructions may be executed simultaneously, serially, or otherwise executed by one or multiple processors. Although the resource102is described with reference to a computer, it should be appreciated that the system may be formed by two or more computers in communication with each other that collaborate to perform a task. For example, but not by way of limitation, an application may be partitioned in such a way as to permit concurrent and/or parallel processing of the instructions of the application. Alternatively, the data processed by the application may be partitioned in such a way as to permit concurrent and/or parallel processing of different portions of a data set by the two or more computers. For example, in some embodiments, the computers may include a mobile device, a smart phone, a personal computer, etc. When the smart phone's capabilities are about to be exceeded, an administrative application can provision resources on the personal computer for use by the smart phone. In an embodiment, virtualization software may be employed by the resource102to provide the functionality of a number of servers that is not directly bound to the number of computers in the system resource102. In an embodiment, the functionality disclosed above may be provided by executing an application and/or applications in a cloud computing environment. Cloud computing may include providing computing services via a network connection using dynamically scalable computing resources. A cloud computing environment may be established by an enterprise and/or may be hired on an as-needed basis from a third party provider. It is understood that by programming and/or loading executable instructions onto the resource102, at least one of the CPU216, the memory214, the ROM218, and the RAM220are changed, transforming the resource102in part into a specific purpose machine, apparatus, etc., having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Burst Instance Management Process An example computer-implemented method300of managing burst instances in a cloud computing system is illustrated inFIG.3, according to another example embodiment of the present disclosure. The system may include a first cloud computing resource having a first memory and a first processor for performing at least one cloud computing task, a second cloud computing resource having a second processor and a second memory, and one or more data networks connecting the first cloud computing resource and the second cloud computing resource, such as the system100illustrated inFIG.1. As shown inFIG.3, the method300includes, at block301, monitoring, by the first cloud computing resource, one or more leading indicator parameters associated with operation of the first cloud computing resource while performing the at least one cloud computing task. 
In response to the one or more leading indicator parameters satisfying a first burst criteria, the method300includes, at block303, provisioning, by the first cloud computing resource, a task instance on the second cloud computing resource for performing at least one portion of the cloud computing task. At304, the method includes determining whether component allocation is enabled. For example, the system may be configured to allocate components of a second cloud computing device without immediately performing the cloud computing task on the second cloud computing device. If component allocation is enabled, the method proceeds to309to provision the task instance on the second cloud computing resource by allocating one or more components of the second cloud computing resource, and not performing the at least one portion of the cloud computing task using the provisioned task instance on the second cloud computing resource until the one or more leading indicator parameters satisfy the second burst criteria. The method300includes, at block305, transferring, by the first cloud computing resource, said at least one portion of the cloud computing task to the provisioned task instance on the second cloud computing resource in response to the one or more leading indicator parameters satisfying a second burst criteria. At block307, the method300includes performing, by the second cloud computing resource, the at least one portion of the cloud computing task. In some embodiments, the first burst criteria and/or the second burst criteria is indicative of the cloud computing task exceeding, about to exceed, predicted to exceed, etc., an operating capacity of the first cloud computing resource. The second burst criteria is indicative that a cost of performing the at least one portion of the cloud computing task using the provisioned task instance on the second cloud computing resource is lower than a cost associated with performing the at least one portion of the cloud computing task using the first cloud computing resource. The cloud computing task may include any suitable task, such as a graphics processing unit (GPU) task, a rendering task, a cryptographic task, a homomorphic encryption task, a blockchain task, a notarized ledger task, a distributed ledger task, a gaming task, an entertainment task, a storage task, an application specific task, an operating system task, a virtual machine task, a container task, a machine learning task, a simulation task, a data transmission task, a genomic analysis task, a data storage task, a data processing task, a communication task, a client service task, a server task, a load balancing task, any other type of computing task, etc. In some embodiments, the one or more leading indicator parameters may include a bandwidth metric, a latency, a GPU processing metric, a processor metric, a storage metric, a rate-based metric, a prediction metric, etc. The one or more leading indicator parameters may comprise a multi-dimensional set of leading indicator parameters. The one or more leading indicator parameters may be indicative of a subsequent burst within a specified time period. At block311, the method300optionally includes displaying one or more values on a user interface.
For example, the user interface may display one or more expected performance values for performing the at least one portion of the cloud computing task using the first cloud computing resource, one or more expected performance values for performing the at least one portion of the cloud computing task using the second cloud computing resource, a comparison of performing the at least one portion of the cloud computing task using the first cloud computing resource and performing the at least one portion of the cloud computing task using the second cloud computing resource, etc. In some embodiments, the system may include a third cloud computing resource, and the method300may include provisioning, by the first cloud computing resource, another task instance on the third cloud computing resource for performing at least one other portion of the cloud computing task in response to the one or more leading indicator parameters satisfying the first burst criteria. The first and second burst criteria may be modified according to a peak demand schedule of the second cloud computing resource. Another example computer-implemented method400of managing burst instances in a cloud computing system including at least three cloud computing resources is illustrated inFIG.4, according to another example embodiment of the present disclosure. As shown inFIG.4, the method400includes, at block401, monitoring, by the first cloud computing resource, one or more leading indicator parameters associated with operation of the first cloud computing resource while performing the at least one cloud computing task. In response to the one or more leading indicator parameters satisfying a first burst criteria, the method400includes, at block403, provisioning, by the first cloud computing resource, a task instance on the second cloud computing resource for performing at least one portion of the cloud computing task. The method400includes, at block405, transferring, by the first cloud computing resource, said at least one portion of the cloud computing task to the provisioned task instance on the second cloud computing resource in response to the one or more leading indicator parameters satisfying a second burst criteria. At block407, the method400includes performing, by the second cloud computing resource, the at least one portion of the cloud computing task. At block409, the method400monitors leading indicator parameter(s) associated with operation of the second cloud computing resource while performing the cloud computing task. At411, the method determines whether the first burst criteria are satisfied on the second cloud computing resource. If not, the method deallocates the second cloud computing resource at413, which may include transferring the cloud computing task back to the first cloud computing device (or ending performance of the cloud computing task). The process then returns to401to monitor the leading indicators on the first cloud computing resource. If the first burst criteria are satisfied at411, the method proceeds to415to determine whether the second burst criteria are satisfied on the second cloud computing resource. If not, the method returns to409to continue monitoring the leading indicator parameters on the second cloud computing resource. If the second burst criteria are satisfied at415, the method provisions a task instance on a third cloud computing resource at417, to perform the at least one portion of the cloud computing task. 
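As a non-limiting illustration of the decisions at blocks 409 through 417 of method400, the following Python sketch uses plain callables as stand-ins for the monitoring, criteria, deallocation, and provisioning steps; the helper names and the sample metric values are illustrative assumptions, not a prescribed implementation.

# Minimal sketch of the method 400 chaining decisions (blocks 409-417).
def manage_chain(monitor_second, first_criteria, second_criteria,
                 deallocate_second, provision_third, max_cycles=10):
    for _ in range(max_cycles):
        metrics = monitor_second()                    # block 409: monitor the second resource
        if not first_criteria(metrics):               # block 411: no burst indicated
            deallocate_second()                       # block 413: transfer back and/or release
            return "released_second_resource"
        if second_criteria(metrics):                  # block 415: second resource overwhelmed
            provision_third()                         # block 417: chain the burst to a third resource
            return "chained_to_third_resource"
    return "still_monitoring"

print(manage_chain(
    monitor_second=lambda: {"cpu_util": 0.97},
    first_criteria=lambda m: m["cpu_util"] > 0.80,
    second_criteria=lambda m: m["cpu_util"] > 0.95,
    deallocate_second=lambda: None,
    provision_third=lambda: None,
))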
In this manner, the method400may facilitate management of transferring the cloud computing task amongst different cloud computing resources as the cloud computing resources become overwhelmed with burst instances, while allowing for deallocation when additional cloud computing resources are no longer needed. AlthoughFIG.4illustrates use of the same first and second burst criteria at411and415, as the burst criteria at403and405, in other embodiments the decision to deallocate at411and the decision to provision a third cloud computing resource at417may use other suitable burst criteria that are different from the criteria of403and405. According to another example embodiment of the present disclosure, a non-transitory computer readable medium includes computer-executable instructions that are executable by one or more processors. The instructions include performing, by a first cloud computing resource having a first memory and a first processor, at least one cloud computing task. The instructions further include monitoring, by the first cloud computing resource, one or more leading indicator parameters associated with operation of the first cloud computing resource while performing the at least one cloud computing task, and in response to the one or more leading indicator parameters satisfying a first burst criteria, provisioning, by the first cloud computing resource, a task instance on a second cloud computing resource for performing at least one portion of the cloud computing task. In some embodiments, the instructions may include transferring, by the first cloud computing resource, said at least one portion of the cloud computing task to the provisioned task instance on the second cloud computing resource in response to the one or more leading indicator parameters satisfying a second burst criteria. The first burst criteria and/or the second burst criteria may be indicative of the cloud computing task exceeding an operating capacity of the first cloud computing resource. Additionally, or alternatively, the second burst criteria may be indicative that a cost of performing the at least one portion of the cloud computing task using the provisioned instance on the second cloud computing resource is lower than a cost associated with performing the at least one portion of the cloud computing task using the first cloud computing resource. The cloud computing task may include any suitable task, such as a graphics processing unit (GPU) task, a rendering task, a cryptographic task, a homomorphic encryption task, a blockchain task, a notarized ledger task, a distributed ledger task, a gaming task, an entertainment task, a storage task, an application specific task, an operating system task, a virtual machine task, a container task, a machine learning task, a simulation task, a data transmission task, a genomic analysis task, a data storage task, a data processing task, a communication task, a client service task, a server task, a load balancing task, any other type of computing task, etc. The indicator parameter(s) may include any suitable metrics, such as a bandwidth metric, a latency, a GPU processing metric, a processor metric, a storage metric, etc. The leading indicator parameter(s) may comprise a multi-dimensional set of leading indicator parameters, may be indicative of a subsequent burst, a predicted burst, etc., within a specified time period, etc.
In some embodiments, the instructions may include provisioning, by the first cloud computing resource, another task instance on a third cloud computing resource for performing at least one other portion of the cloud computing task in response to the one or more leading indicator parameters satisfying the first burst criteria. The examples above for the cloud computing burst management systems described herein are for purposes of illustration only, and are not intended to limit the scope of the present disclosure. The example systems and methods described herein could be used in any application where cloud computing tasks are performed, where computing burst instances occur, etc. As described herein, the example systems, cloud computing resources, networks, etc. may include a microprocessor, microcontroller, integrated circuit, digital signal processor, etc., which may include memory. The example systems, cloud computing resources, networks, etc. may be configured to perform (e.g., operable to perform, etc.) any of the example processes described herein using any suitable hardware and/or software implementation. For example, the systems, cloud computing resources, networks, etc. may execute computer-executable instructions stored in memory, may include one or more logic gates, control circuitry, etc. Leading Indicator Example Use Cases In various implementations, parameters that are external to the network and processing performance of the system100itself may be used as leading indicators to predict a burst. For example, traffic sensors may be used to detect a count of cars, people, planes, etc., in order to predict incoming burst events. If an increase of people entering an office is detected, it is likely that additional storage and CPU processing will be required by the workers. The system100may measure the traffic, and allocate or provision resources just before the predicted burst event occurs. For example, the system100may trigger allocation of additional resources when an employee badge is detected as entering a parking garage, while cloud resources may be deallocated if an employee is detected as leaving the building or having an idle workstation. The system100may trigger additional automated baggage scans at an airport when an increased flow of traffic to the airport is sensed, or allocate additional computer resources for airport tasks. As described further below, gaming applications may use external parameters in order to determine when to allocate additional resources. For example, the system100may monitor video game reviews to identify cases of players complaining about latency for a game, and then allocate additional resources for the game to address the latency issues. In the streaming video context, the system100may look for parameters such as users increasing their amount of rewinding activity, in order to determine that the streaming service is slowing down and additional resources should be allocated. For example, the system100may trigger allocation of additional resources when sales of the game spike at a location (e.g. apple app store, google play store, steam or other digital stores) or a combination of locations. The spike does not have to be of the complete game, and may include an in-game purchase such as a single level/scene, a game pass, a gameplay type (e.g. battle royale, capture the flag, king of the hill, etc.) 
Another trigger for the system may include content that is allowed in the game, such as whether vehicles are allowed, the types of vehicles allowed, the weapons available during gameplay, etc., which may make a big difference in the resources needed (e.g., if all players are only allowed to use machetes, the resources needed will be much smaller than when using rifles, drones, guided missiles, etc.). In various implementations, genomic workflows may allocate resources according to a time that a flow cell is loaded, where the system is expected to be ready to receive data at a future specified time period (such as 48 hours). A pathology analysis of slides may include a workflow that combines CPU and GPU processing. In that case, the system100may identify cloud computing resources that include CPU and GPU processing for provisioning task instances of the pathology analysis. For digital histopathology, the system100may create batch jobs. For example, a CPU bound job of cell counting may take a specified time period to complete, such as about one hour. At a larger scale, the system100may focus on CPU loads, although a GPU may only require a few minutes. The system100may look at a ratio of types of different tasks in order to determine what instances on what cloud computing resources should be provisioned. Various models may be used to identify tumor infiltrating lymphocytes (TILs), cell counts, mutations, tissue types, mark-ups (e.g., metadata), etc. Other example requirements for genomic workflows may include BAM file requirements for storage (such as storage volume and read/write speeds), streaming speeds and latency, etc. In some cases, the system100may be configured according to workday hours, where additional resources are allocated during normal working hours and the resources are deallocated outside of normal working hours. In various implementations, machine learning tasks may include allocation of resources according to parameters such as times that scientists create notebooks (such as computer-implemented notebooks like Jupyter notebooks). The system100may detect which packages are imported into a notebook, in order to prepare resources that are specifically used by the detected packages. The system100may review source code of an application to determine the type of resources that will be needed (such as resources specific to functions called by the source code), and provision cloud computing resources that match the identified type of resources used by the source code. As another example, the system100may review historical data to determine resources that have previously been used by an application, in order to provision cloud computing resources that include those historically used resources. In various implementations, the system100may identify a number of sensors that are used for an application, in order to determine what cloud computing resources should be provisioned (such as a number of sensors that will be used to create an augmented reality (AR) model). The number of sensors used for an application, or fees for sensors, may determine an amount of resources necessary to properly render an image or other data. The system100may identify an amount of instances or cloud computing resources to provision based on the identified number of sensors. Media applications may use various inputs as leading indicators to determine an amount of resources that need to be provisioned for a burst case.
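As a non-limiting illustration of detecting which packages are imported into a notebook in order to prepare matching resources, the following Python sketch scans notebook source for import statements and maps them to resource types; the package-to-resource mapping is an illustrative assumption only.

# Minimal sketch: inferring resource needs from the packages imported in a notebook.
import re

PACKAGE_HINTS = {
    "torch": "gpu_instance",
    "tensorflow": "gpu_instance",
    "dask": "multi_node_cpu",
    "pandas": "high_memory_cpu",
}

def infer_resources(notebook_source: str) -> set:
    """Collect resource types suggested by top-level import statements."""
    imports = re.findall(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", notebook_source, re.MULTILINE)
    return {PACKAGE_HINTS[pkg] for pkg in imports if pkg in PACKAGE_HINTS}

cell = "import torch\nimport pandas as pd\nfrom torch import nn\n"
print(infer_resources(cell))   # e.g., {'gpu_instance', 'high_memory_cpu'}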
For example, media application tasks may include broadcasting, transcoding, feeds, rendering, etc., and the system100may use various inputs to determine which cloud computing resources should be allocated to handle a burst according to the type of task. The needed resource allocation may depend on a shooting schedule for the application, use of LED walls, prioritizing latency, prioritizing bandwidth for a design studio, etc. In some cases, the system100may attempt to achieve live or real-time processing during shooting, thus eliminating the need to carry hard drives. The system100may facilitate realization of a digital or virtual studio environment based on resource allocation. In various implementations, influencer users may trigger use of resources by their followers, and the system100may monitor activity of the influencer to determine when to allocate resources for the followers. The system100may provide real-time bottleneck detection, and provision resources when the bottleneck is detected. The system100may log events and parameters associated with the events, for future planning purposes. For example, a generic utility may be used to monitor bottlenecks such as a CPU bound condition, a storage bound condition, a bandwidth bound condition, etc. As mentioned above, the system100may facilitate cloud computing resource allocation in a variety of gaming application contexts. For example, the system100may provision instances on cloud computing resources based on a geographic location of players or a gaming event (such as a live competition, eSporting event, etc.). The system100may allocate resources according to the number of players belonging to specific groups or guilds within a game, or a schedule of streaming events for a game (such as increasing resources for a game that is experiencing a high amount of viewership on a streaming platform such as Twitch). Simulations that require large resources for rendering, such as simulated marble runs, may have cloud computing resources provisioned before the start of the simulation, or when a burst is predicted during the simulation. In various implementations, the system100may determine the type of resource that is needed to handle a burst and the cost of the resource, before determining what kind of cloud service should be requested. For example, the system100may determine whether it would be cheaper to run the task on a CPU or a GPU, or which key resources are most important for the task (e.g., a CPU-heavy task versus a RAM-heavy task versus a GPU-heavy task, etc.). In some cases, tasks may be split between multiple services (e.g., AWS, Azure, etc.) depending on the cost of each task. For example, geo-location cloud services may be cheapest from Microsoft's Azure at a specified time, while GPU rendering services are cheapest from AWS. The system100may provision tasks related to geo-location on an Azure resource instance, while provisioning tasks of the same application that are related to rendering to the AWS resource instance. In various implementations, the system100may allocate resources based on a location (or locations) of users of an application, based on responsiveness or latency required by the users, etc. For example, a video game that requires low latency may have cloud computing resources allocated that provide the lowest latency available for the players. Distance from the server may be key to latency for the player. Therefore, the system100may select cloud computing resources that have the closest proximity to the player.
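As a non-limiting illustration of splitting the tasks of one application across providers by cost, as in the geo-location and GPU rendering example above, the following Python sketch picks the cheapest provider for each task type; the hourly prices are illustrative assumptions only.

# Minimal sketch: per-task-type provider selection based on hypothetical hourly costs.
HOURLY_COST = {
    "azure": {"geo_location": 0.10, "gpu_rendering": 2.80},
    "aws":   {"geo_location": 0.14, "gpu_rendering": 2.10},
}

def cheapest_provider(task_type: str) -> str:
    return min(HOURLY_COST, key=lambda provider: HOURLY_COST[provider][task_type])

placement = {task: cheapest_provider(task) for task in ("geo_location", "gpu_rendering")}
print(placement)   # {'geo_location': 'azure', 'gpu_rendering': 'aws'}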
In contrast to such latency-sensitive workloads, processing game statistics, pre-rendering, shading, etc. for a new level in a game may not have any latency requirements. In that case, the system100may select a cloud computing resource that is located much further away for use during off-peak hours, to reduce the processing cost. Similarly, when processing genomic sequences, latency may not be important. The system100may allocate cloud services located in a very remote geographic location (e.g., where most users are sleeping and cloud service costs are much lower), in order to perform the genomic sequence processing. In various implementations, time-based criteria may be used as leading indicators for burst triggers, such as the time of day and a likely location where a service is expected to be needed. For example, after five P.M. there is often a spike in game activity as people get home from work and begin playing games. During holidays there are often more people playing games, or more people playing games later in the evening before a holiday. In various implementations, application-specific burst criteria may be used as leading indicators to determine when to provision additional cloud computing resources. For example, as users log in to a game, the system100may anticipate a burst of gameplay cloud services being needed. The system100may then start to provision additional resources specific to processing gameplay tasks, in response to a large number of users logging into a game. The burst criteria may vary for different genres of games. For example, farming type games may often have users periodically logging in throughout the day to collect resources, and the system100may anticipate bursts of activity around the morning, lunch, and right after work, for those types of gaming applications. The system100may provision additional resources ahead of those time periods. In various implementations, cloud services may be selected based on costs of the resources at a current time, in addition to costs of the cloud service at an expected future time that the cloud service will be needed, in order to reduce the overall cost. For example, Cloud Service A may have the current lowest price of $10 per hour for the first hour, which remains constant for the next four hours. Cloud Service B may charge $14 for the current hour, then drop to $5 per hour after. If the system100expects to provision a cloud resource for only one hour or less, the system100may select Cloud Service A. If the system100expects to provision a cloud service for two hours or more, the system100may select Cloud Service B. The system100may provide the option to reserve or suspend a cloud service, depending on the task needs of an application (which may occur when a cloud service is reaching maximum capacity). For example, if there are only a few servers left that are currently available for a cloud computing resource, and a video game is attempting to host additional users, the high cost for additional cloud computing resources may cause the game to selectively stop or delay certain services of the game in order to avoid the cost for additional cloud computing resources. Burst prediction may be computed when launching an application, or as part of an application or service in or for the application.
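As a non-limiting illustration of the Cloud Service A and Cloud Service B example above, the following Python sketch compares total cost over the expected provisioning duration; the pricing functions simply encode the example rates.

# Minimal sketch: selecting a cloud service based on total cost over the expected duration.
def cost_a(hours: float) -> float:
    return 10.0 * hours                       # $10 per hour, constant

def cost_b(hours: float) -> float:
    return 14.0 + 5.0 * max(0.0, hours - 1)   # $14 for the first hour, then $5 per hour

def select_service(expected_hours: float) -> str:
    return "Cloud Service A" if cost_a(expected_hours) <= cost_b(expected_hours) else "Cloud Service B"

print(select_service(1))   # Cloud Service A ($10 versus $14)
print(select_service(2))   # Cloud Service B ($20 versus $19)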
Such burst prediction computations may be based on the state of the application, user settings, user preferences, player subscriptions, device settings, device limitations, GPS fencing, regional restrictions, type of game (e.g., fun, eSport competition, ranking game, tournament, etc.), provider settings, connection limitations, etc. For example, in a game where a player is getting ready to play and is about to launch a level, the game may have a lobby, and once all the players are in the lobby, the system performs resource requirement calculations when the players are ready to start, their preferences and settings are locked in, and their start locations are determined. Once the calculation is complete, the resources can be provisioned for the game, and then the game is launched. CONCLUSION The foregoing description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. The broad teachings of the disclosure can be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure. Further, although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure can be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of one or more embodiments with one another remain within the scope of this disclosure. Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the above disclosure, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. The phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.” In the figures, the direction of an arrow, as indicated by the arrowhead, generally demonstrates the flow of information (such as data or instructions) that is of interest to the illustration. For example, when element A and element B exchange a variety of information but information transmitted from element A to element B is relevant to the illustration, the arrow may point from element A to element B. This unidirectional arrow does not imply that no other information is transmitted from element B to element A. Further, for information sent from element A to element B, element B may send requests for, or receipt acknowledgements of, the information to element A. The term subset does not necessarily require a proper subset.
In other words, a first subset of a first set may be coextensive with (equal to) the first set. In this application, including the definitions below, the term "module" or the term "controller" may be replaced with the term "circuit." The term "module" may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware. The module may include one or more interface circuits. In some examples, the interface circuit(s) may implement wired or wireless interfaces that connect to a local area network (LAN) or a wireless personal area network (WPAN). Examples of a LAN are Institute of Electrical and Electronics Engineers (IEEE) Standard 802.11-2016 (also known as the WIFI wireless networking standard) and IEEE Standard 802.3-2015 (also known as the ETHERNET wired networking standard). Examples of a WPAN are IEEE Standard 802.15.4 (including the ZIGBEE standard from the ZigBee Alliance) and, from the Bluetooth Special Interest Group (SIG), the BLUETOOTH wireless networking standard (including Core Specification versions 3.0, 4.0, 4.1, 4.2, 5.0, and 5.1 from the Bluetooth SIG). The module may communicate with other modules using the interface circuit(s). Although the module may be depicted in the present disclosure as logically communicating directly with other modules, in various implementations the module may actually communicate via a communications system. The communications system includes physical and/or virtual networking equipment such as hubs, switches, routers, and gateways. In some implementations, the communications system connects to or traverses a wide area network (WAN) such as the Internet. For example, the communications system may include multiple LANs connected to each other over the Internet or point-to-point leased lines using technologies including Multiprotocol Label Switching (MPLS) and virtual private networks (VPNs). In various implementations, the functionality of the module may be distributed among multiple modules that are connected via the communications system. For example, multiple modules may implement the same functionality distributed by a load balancing system. In a further example, the functionality of the module may be split between a server (also known as remote, or cloud) module and a client (or, user) module. The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. Shared processor hardware encompasses a single microprocessor that executes some or all code from multiple modules. Group processor hardware encompasses a microprocessor that, in combination with additional microprocessors, executes some or all code from one or more modules. References to multiple microprocessors encompass multiple microprocessors on discrete dies, multiple microprocessors on a single die, multiple cores of a single microprocessor, multiple threads of a single microprocessor, or a combination of the above. Shared memory hardware encompasses a single memory device that stores some or all code from multiple modules. Group memory hardware encompasses a memory device that, in combination with other memory devices, stores some or all code from one or more modules. The term memory hardware is a subset of the term computer-readable medium. 
The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium is therefore considered tangible and non-transitory. Non-limiting examples of a non-transitory computer-readable medium are nonvolatile memory devices (such as a flash memory device, an erasable programmable read-only memory device, or a mask read-only memory device), volatile memory devices (such as a static random access memory device or a dynamic random access memory device), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc). The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks and flowchart elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer. The computer programs include processor-executable instructions that are stored on at least one non-transitory computer-readable medium. The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc. The computer programs may include: (i) descriptive text to be parsed, such as HTML (hypertext markup language), XML (extensible markup language), or JSON (JavaScript Object Notation), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, C#, Objective-C, Swift, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, JavaScript®, HTML5 (Hypertext Markup Language 5th revision), Ada, ASP (Active Server Pages), PHP (PHP: Hypertext Preprocessor), Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, MATLAB, SIMULINK, and Python®.
81,347
11861411
DETAILED DESCRIPTION OF EMBODIMENTS Cloud computing refers to the practice of using a network of remote servers hosted on a network (e.g., the Internet) to deliver information computing services (e.g., cloud services). The network architecture (e.g., including hardware and software) through which the cloud services are provided to service consumers (e.g., clients) is referred to as the cloud. Cloud computing provides access to a wide range of services, such as data processing, media processing, server, storage, network, applications, online services and the like. In some examples, media processing becomes compute intensive, and thus a media processing cloud is preferred to offload significant workloads to remote servers. Generally, a cloud computing system includes a network, one or more servers, and one or more client devices. The network facilitates communications between the servers and client devices. A client device may be, for example, a smartphone, a tablet computer, a laptop, a personal computer, a wearable device, a head-mounted display (HMD), or the like. A server can include any suitable computing or processing device that can provide computing services for one or more client devices. For example, each server can include one or more processing devices, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network. In some embodiments, a server includes a workflow manager that can select functions and build a workflow pipeline to perform a processing task. According to some embodiments of the disclosure, when computation is offloaded to remote servers, the client device, or a service provider that is responsible for providing the service, may need information regarding the processing status of the service, such as real-time information of the service, and the like. In some examples, actions can be taken in the cloud computing system to provide status information to entities involved in the cloud service. In an example, reporting is an action that is regularly performed (e.g., frequently performed based on a time interval) to provide information about events and variables to a destination during performing of a service. In another example, notifying is an action of providing information to a destination when an event occurs. In another example, monitoring is an action of an entity requesting information and receiving the requested information. For ease of description, in the following description, the status information generated in response to the actions (e.g., reporting, notifying and monitoring) is referred to as status report; the status information generated in response to the reporting action is referred to as regular report; the status information generated in response to the notifying action is referred to as notification report; and the status information generated in response to the monitoring action is referred to as monitoring report. Aspects of the disclosure provide a framework for reporting, monitoring and notifying in a cloud computing system. The framework can simplify setting up and processing of reporting, monitoring and notifying actions. Specifically, some aspects of the disclosure provide self-contained reporting, monitoring and notifying framework for cloud computing services. 
A self-contained framework can provide to a recipient a container (e.g., a message) that includes all the information necessary for the recipient to understand it; thus, the recipient does not need to cross-reference information outside the container to understand the received information. It is noted that, in the present disclosure, a media processing system (also referred to as a media processing cloud in some examples) that performs network-based media processing (NBMP) is used as an example to illustrate the reporting, monitoring and notifying frameworks, and the reporting, monitoring and notifying frameworks can be used in other suitable cloud computing systems. In a media processing system, a NBMP source describes the requested media processing and provides information about the nature and format of the media data. Accordingly, an NBMP workflow manager can establish a media processing workflow and inform the NBMP source that the workflow is ready, and then media processing can start. For example, media source(s) can then start transmitting media to the network for processing. In some embodiments, an NBMP workflow includes media processing tasks that are connected based on input/output relationships among the media processing tasks. Each of the media processing tasks performs a media processing operation, such as video decoding, video stitching, video encoding, and/or the like. In an example, a first media processing task performs a media processing operation based on inputs and generates outputs. The outputs of the first media processing task can be used as inputs to a second media processing task that is connected with the first media processing task. In other words, an NBMP workflow can be considered as a connected graph of media processing tasks. The workflow manager can ensure the correct operation of the workflow by configuring and monitoring each task as well as the workflow output. In some examples, the workflow manager is configured to select the media processing functions and instantiate the media processing functions as tasks based on the workflow description that is received from the NBMP source. In a media processing system, suitable interactions can be performed to establish, load, instantiate and monitor media processing entities that will run the media processing tasks. In some examples, application programming interfaces (APIs) can be defined between an NBMP source and the workflow manager, and between the workflow manager and task(s); and an API is defined to discover appropriate function(s). In some examples, a media processing system is configured to be media format and protocol agnostic. The media processing system can identify and signal the media, metadata and auxiliary information formats for data exchanged between the media source, the workflow manager and tasks. In some examples, interfaces including both data formats and application programming interfaces (APIs) among the entities connected through digital networks for media processing can be defined. Users can access and configure user operations remotely for efficient, intelligent processing. The workflows to be applied to media data can be described and managed. The media data can be uploaded to the network, and the media processing tasks can be instantiated and configured. In some embodiments, dynamic creation of media processing pipelines, as well as access to processed media data and metadata in real-time or in a deferred way, are enabled. 
The media and metadata formats used between the media source, workflow manager and media processing entities in a media processing pipeline are also specified. In an example, clients (e.g., creators, service providers, and consumers of digital media) can describe media processing operations to be performed by media processing entities in a network. A workflow can be described by composing a set of media processing functions that are accessible through interfaces (e.g., NBMP APIs). A media processing entity (MPE) can run processing tasks applied on the media and the related metadata received from media source(s) or other tasks. The MPE can provide capabilities for configuring, managing, and monitoring processing tasks. A media processing task can be a process applied to media and metadata input(s), producing data and related metadata output(s) to be consumed by a media sink or other media processing tasks. The media processing system can support various delivery methods such as streaming, file delivery, push-based progressive download, hybrid delivery, multipath, and heterogeneous network environments. FIG. 1 shows an exemplary media processing system (e.g., NBMP system, a NBMP reference architecture, a NBMP architecture) (100) according to an embodiment of the disclosure. The media processing system (100) can include a plurality of entities, such as a NBMP source (101), a workflow manager (e.g., a NBMP workflow manager) (103), a function repository (105), a media source (111), a media processing entity (MPE) (113), a media sink (115), a third party entity, and/or the like. The media processing system (100) can include additional media source(s), media sink(s), and/or media processing entities. The media processing system (100) can process media data across one or more processing entities in a network. Information, such as various media and control information (or control data) for the media, can be communicated among the plurality of entities in the media processing system (100). To provide a context for discussion purposes, the media processing system (100) is described as the NBMP system (100) below. The descriptions can be suitably adapted to any media processing system. The NBMP source (101) can describe, or otherwise indicate, media processing in the network. The function repository (105) can include NBMP function descriptions of various NBMP functions. The NBMP source (101) and the workflow manager (103) can retrieve the NBMP function descriptions or functions from the function repository (105). An NBMP function can refer to implementation of a standalone and self-contained media processing operation and/or the corresponding description of the operation. A processing task or a task can refer to a runtime instance of a NBMP function that is executed by the MPE (113). An NBMP workflow or a workflow can be represented by a graph (e.g., a directed acyclic graph (DAG)) of one or more connected task(s) that achieve the requested media processing. The workflow manager (103) can provision task(s) and connect the task(s) to create, control, manage and monitor a workflow, for example, based on a workflow description document (WDD). The media source (111) can provide media content (e.g., media data, supplementary information) to be processed by a workflow. The supplementary information can include metadata or auxiliary information related to the media data. The media source (111) can provide an input to the workflow. The media sink (115) can consume an output of the workflow. 
The MPE (113) can run one or more media processing task(s) to process the media content. Different entities (e.g., the NBMP Source (101), the workflow manager (103) and the MPE (113)) in the NBMP system (100) can use APIs to invoke and respond to media service requests. The APIs can include a NBMP workflow API or a workflow API, a function discovery API, and a task API. The workflow API can provide an interface between the NBMP Source (101) and the workflow manager (103). The task API can provide an interface between the workflow manager (103) and media processing tasks. The function discovery API can provide an interface between the workflow manager (103)/the NBMP Source (101) and the Function Repository (105). NBMP interfaces described above can be used to create and control media processing workflows in the network. The NBMP system (100) can be split into a control plane and a media plane (or media data plane). The control plane can include the workflow API, the function discovery API, and the task API. The workflow API can be used by the NBMP source (101) to create and control a media processing workflow. The NBMP Source (101) can use the workflow API to communicate with the workflow manager (103) for configuring and controlling media processing in the network. When the NBMP Source (101) sends a request to the workflow manager (103) by including a workflow resource (WR) in an operation of the workflow API, the workflow manager (103) can parse the WR, the included WDD and corresponding descriptors, and take the appropriate actions according to the requested operation. Then, the workflow manager (103) can acknowledge the request with a response. The workflow API operations can include creating a workflow (e.g., CreateWorkflow), updating a workflow (e.g., UpdateWorkflow), deleting a workflow (e.g., DeleteWorkflow), retrieving a workflow (e.g., RetrieveWorkflow), and the like. The function discovery API can provide the means for the workflow manager (103) and/or the NBMP Source (101) to discover media processing functions that can be loaded as part of a media processing workflow. The task API can be used by the workflow manager (103) to configure and monitor task(s) (e.g., a task 1 and a task 2 run by the MPE (113)) at runtime. The task API can define interface(s) for configuration of media processing tasks by the workflow manager (103), for example, after the resources for the task are allocated in the MPE (113). Task API operations can include creating a task (e.g., CreateTask), updating a task (e.g., UpdateTask), getting a task (e.g., GetTask), deleting a task (e.g., DeleteTask), and the like. On the media plane, the media formats, the metadata, and the supplementary information formats between the media source (111) and task(s), as well as between the tasks, can be defined. A workflow description (WD) can be passed from the NBMP source (101) to the workflow manager (103). The WD can describe information such as input data and output data, functions and other requirements for the workflow. The workflow manager (103) can receive a WDD from the NBMP source (101) and can build a workflow for requested media processing. In a workflow procedure, media processing functions can be selected, for example, from the function repository (105), and then corresponding media processing tasks can be configured and distributed to a set of one or more MPEs (e.g., including the MPE (113)). 
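As an illustration of the workflow API exchange described above, the sketch below shows an NBMP source posting a workflow description to the workflow manager to create a workflow. The endpoint URL and the WDD fields are simplified, hypothetical stand-ins rather than the normative NBMP schema; UpdateWorkflow, RetrieveWorkflow and DeleteWorkflow would correspond to further requests against the same workflow resource.

import requests

# Hypothetical workflow manager endpoint for the CreateWorkflow operation.
WORKFLOW_MANAGER_URL = "http://workflow-manager.example.com/v1/workflows"

# Simplified workflow description document: inputs, outputs and the requested processing.
wdd = {
    "general": {"id": "wf-0001", "name": "stitch-and-encode"},
    "input": {"media-parameters": [{"stream-id": "camera-feed-1", "protocol": "rtmp"}]},
    "output": {"media-parameters": [{"stream-id": "packaged-output", "protocol": "dash"}]},
    "processing": {"keywords": ["stitching", "encoding"]},
}

# CreateWorkflow: the workflow manager parses the WDD, selects functions, instantiates
# tasks on one or more MPEs, and acknowledges the request with a response.
response = requests.post(WORKFLOW_MANAGER_URL, json=wdd, timeout=10)
print(response.status_code)   # e.g., 201 once the workflow resource has been created
created_workflow = response.json()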
The set of functions provided by the function repository (105) can be read by an NBMP source (101) and the workflow manager (103). In an embodiment, the NBMP source (101) requests the creation of a workflow using a set of functions in the function repository (105). Accordingly, the NBMP source (101) is configured to select functions for the workflow. The NBMP source (101) can request the creation of the workflow as described below. The NBMP source (101) can use a description of the media processing tasks by which the workflow is to be created, and can specify a connection map to define connections of inputs and outputs of the media processing tasks. When the workflow manager (103) receives the above information from the NBMP source (101), the workflow manager (103) can instantiate the media processing tasks based on respective function names and can connect the media processing tasks according to the connection map. Alternatively, the NBMP source (101) can request the creation of a workflow using a set of keywords by which the workflow manager (103) can construct the workflow. Accordingly, the NBMP source (101) may not be aware of a set of functions to be inserted into the workflow. The NBMP source (101) can request the creation of the workflow as described below. The NBMP source (101) can use the set of keywords by which the workflow manager (103) can find the appropriate functions, and can specify the requirements of the workflow using suitable workflow description. When the workflow manager (103) receives the above information (e.g., the set of keywords) from the NBMP source (101), the workflow manager (103) can create the workflow by searching for appropriate functions using the keywords, for example, specified in a processing descriptor. The workflow manager (103) can then use other descriptors in the workflow description to provision the media processing tasks and connect the media processing tasks to create the final workflow. A processing model of the workflow manager (103) can be described as below. The workflow manager (103) can discover available media processing functions as below. The NBMP function repository (105) can provide the function discovery interface (or API) to allow external entities to query for a media processing function that can fulfil the requested processing. The workflow manager (103) can have access to a directory service that offers a searchable list of media processing functions. The workflow manager (103) can use the description of the media processing tasks in the workflow description to find the appropriate functions for the workflow. Selection of the media processing tasks for the workflow can be described below. When a request for media processing is received from the NBMP source (101), the workflow manager (103) can search the function repository (105) to find the list of all available functions that can fulfill the workflow. Using the workflow description from the NBMP Source (101), the workflow manager (103) can find the functions from the function repository (105) to implement the workflow, which can depend on the information for media processing from the NBMP Source (101). The information for media processing can include the input and output description, the description of the requested processing, and the information in other descriptors for functions in the function directory (105). Mapping of the source requests to appropriate media processing tasks to be included in the workflow can be a part of the implementation of the NBMP in the network. 
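A minimal sketch of the keyword-based lookup described above might look as follows. The function descriptions and keyword sets are hypothetical, and a real function repository would expose this search through the function discovery API rather than an in-memory list.

# Hypothetical entries in a function repository, each carrying discovery keywords.
function_repository = [
    {"name": "360-stitching", "keywords": {"stitching", "360", "video"}},
    {"name": "hevc-encoder", "keywords": {"encoding", "hevc", "video"}},
    {"name": "speech-to-text", "keywords": {"transcription", "audio"}},
]

def discover_functions(requested_keywords):
    # Return every function description that shares at least one keyword with the
    # keywords taken from the processing descriptor of the workflow description.
    requested = set(requested_keywords)
    return [entry for entry in function_repository if entry["keywords"] & requested]

matches = discover_functions(["stitching", "encoding"])
print([entry["name"] for entry in matches])  # ['360-stitching', 'hevc-encoder']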
To reference and link input sources with input port names and output port names at the time of task creation, the input-ports and output-ports can be used to make references to the input streams. A search for appropriate functions to be instantiated as tasks can be performed by the workflow manager (103) using a function discovery API. Alternatively, the workflow manager (103) can retrieve detailed information of some or all suitable functions in the function repository (105) using the function discovery API. The workflow manager (103) can then compare the information for media processing from the NBMP source (101) with different descriptors of each function. Selected media processing tasks can be configured in the workflow. When the functions to be included in the workflow are identified, the NBMP workflow manager (103) can instantiate the functions as respective tasks and configure the tasks so that the tasks can be added to the workflow. The NBMP workflow manager (103) can extract the configuration data from the media processing information received from the NBMP source (101) and configure the corresponding tasks. The configuration of the tasks can be performed using a task API (e.g., NBMP task API). Task allocation and distribution can be described below. The workflow manager (103) can use the workflow to perform processing deployment and configure the media processing entities. In an example, for computationally intensive media processing requests, the workflow manager (103) can set up multiple computational instances and distribute a workload among the multiple computational instances. Thus, the workflow manager (103) can connect and configure the multiple computational instances as needed. In an example, the workflow manager (103) allocates a same task to multiple instances and provisions a load balancer to distribute the workload among the multiple instances using a chosen scheduling mechanism. In an alternative example, the workflow manager (103) allocates different operations of the same task to different instances (e.g., parallel operations). In both examples described above, the workflow manager (103) can set up the workflow paths between the instances, and thus the suitable workload can be successfully realized. The workflow manager (103) can configure the tasks to push the processed media data/streams (or make them available through a pull mechanism) to a next task in the workflow graph. When the workflow manager (103) receives a WDD from the NBMP Source (101), the workflow manager (103) can perform a selection of media processing functions to be inserted into the workflow. When the list of tasks to be included in the workflow is compiled, the workflow manager (103) can then connect the tasks to prepare the workflow. The workflow manager (103) can generate a workflow, for example, as represented by a graph (e.g., a DAG) from the WDD. FIG. 2 shows an example of a graph (e.g., a DAG) (200) according to an embodiment of the disclosure. The DAG (200) can include a plurality of nodes (T1)-(T6) and a plurality of links (or connections) (202)-(208). In an example, the DAG (200) represents the workflow (200). Each node of the DAG (200) can represent a media processing task in the workflow (200). A link (e.g., the link (202)) connecting a first node (e.g., the node (T1)) to a second node (e.g., the node (T2)) in the DAG (200) can represent a transfer of an output of the first node (e.g., the node (T1)) as an input to the second node (e.g., the node (T2)). 
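A workflow DAG such as the one in FIG. 2 can be represented as an adjacency list in which each link carries the output of one task to the input of another. The topology below is hypothetical (only the link from T1 to T2 is spelled out in the text); a topological ordering gives one valid order in which the workflow manager could configure the tasks so that every task is set up before its downstream consumers.

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical workflow topology: each task maps to the tasks that consume its output.
workflow_edges = {
    "T1": ["T2", "T3"],
    "T2": ["T4"],
    "T3": ["T4", "T5"],
    "T4": ["T6"],
    "T5": ["T6"],
    "T6": [],
}

# TopologicalSorter expects each node mapped to its predecessors, so invert the edges.
predecessors = {node: set() for node in workflow_edges}
for node, successors in workflow_edges.items():
    for successor in successors:
        predecessors[successor].add(node)

configuration_order = list(TopologicalSorter(predecessors).static_order())
print(configuration_order)  # e.g., ['T1', 'T2', 'T3', 'T4', 'T5', 'T6']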
In general, a workflow can include any suitable number of input(s) (or workflow input(s)) and any suitable number of output(s) (or workflow output(s)). The workflow input(s) can be connected to the media source (111), other workflow(s), and/or the like, and the workflow output(s) can be connected to the media sink (115), other workflow(s), and/or the like. The workflow (200) has an input (201) and outputs (209) and (210). The workflow (200) can have one or more outputs from intermediate nodes in some embodiments. According to some aspects of the disclosure, status reports can include information of variables and information of events for functions. For example, an NBMP function is an implementation of a processing operation. In some embodiments, an NBMP function can be a standalone and self-contained media processing operation and the corresponding description of the processing operation. In some examples, for each NBMP function, two independent descriptors are used in the description. One of the two independent descriptors is for events of the NBMP function and is referred to as the events descriptor, and the other one of the two independent descriptors is for variables and is referred to as the variables descriptor. The variables descriptor includes characteristics of the variables, such as mathematical definitions, units, formats, universal identifiers and the like that are used to understand the variables. Thus, in some examples, the variables descriptor can be self-explanatory, and variables can be understood based on the variables descriptor without the need to refer to other text outside the variables descriptor. Similarly, the events descriptor includes characteristics of the events that are used to understand the events. Thus, in some examples, the events descriptor can be self-explanatory, and events can be understood based on the events descriptor without the need to refer to other text outside the events descriptor. According to an aspect of the disclosure, a workflow of a service can be performed on different systems (also referred to as platforms or cloud systems). Each system may have system variables and system events. The system variables and system events can be included in the status report. In some embodiments, status reports are defined based on self-contained descriptors. In an example, a regular report is defined based on a self-contained descriptor that is referred to as a reporting descriptor. For example, the regular report can include an object defined based on the reporting descriptor, and the object is referred to as a reporting descriptor object. The reporting descriptor object allows the regular report to include a subset of function variables and events, as well as system variables and events. The variables (e.g., function variables and system variables) can be included in the form defined based on the variables descriptor, and the events (e.g., function events and system events) can be included in the form defined based on the events descriptor. Thus, the regular report can be self-explanatory, and the variables and the events can be understood based on the variables descriptor and the events descriptor. In another example, a notification report is defined based on a self-contained descriptor that is referred to as a notification descriptor. For example, the notification report can include an object defined based on the notification descriptor, and the object is referred to as a notification descriptor object. 
The notification descriptor object allows the notification report to include a subset of function variables and events, as well as system variables and events. The variables (e.g., function variables and system variables) can be included in the form defined based on the variables descriptor, and the events (e.g., function events and system events) can be included in the form defined based on the events descriptor. Thus, the notification report can be self-explanatory, and the variables and the events can be understood based on the variables descriptor and the events descriptor. In another example, a monitoring report is defined based on a self-contained descriptor that is referred to as a monitoring descriptor. For example, the monitoring report can include an object defined based on the monitoring descriptor, and the object is referred to as a monitoring descriptor object. The monitoring descriptor object allows the monitoring report to include a subset of function variables and events, as well as system variables and events. The variables (e.g., function variables and system variables) can be included in the form defined based on the variables descriptor, and the events (e.g., function events and system events) can be included in the form defined based on the events descriptor. Thus, the monitoring report can be self-explanatory, and the variables and the events can be understood based on the variables descriptor and the events descriptor. Specifically, in some examples, each function can include a list of variables described according to the variables descriptor. FIG. 3 shows a list of variables in a function according to some embodiments of the disclosure. As shown in FIG. 3, the function includes one or more variables, such as Variable-1, Variable-2, Variable-3 and the like. The variables or a subset of the variables can be included in a regular report in response to a reporting action. The variables or a subset of the variables can be included in a monitoring report in response to a monitoring action. The variables or a subset of the variables can be included in a notification report in response to a notifying action. In some embodiments, a variables descriptor object that is defined according to the variables descriptor can be included in a status report. The variables descriptor object can include an array of objects associated with variables. An object associated with a variable can include characteristics of the variable. For example, Object-1 includes the characteristics of Variable-1; Object-2 includes the characteristics of Variable-2; and Object-3 includes the characteristics of Variable-3. FIG. 4 shows a table of characteristics for a variable according to some embodiments of the disclosure. The table also includes an explanation of the characteristics. For example, an object associated with a variable includes a name for the variable, a definition for the variable, a value of the variable, a unit for the variable, a parameter type for the variable and possibly a range for the variable (including a min value and a max value for the variable), and the like. FIG. 5 shows an example of a variables descriptor schema according to some embodiments of the disclosure. As shown, the variables descriptor is an array of objects (e.g., items) associated with variables. Further, in some examples, each function includes a list of events described according to an events descriptor. FIG. 6 shows a list of events in a function according to some embodiments of the disclosure. 
As shown in FIG. 6, the function includes one or more events, such as Event-1, Event-2, and the like. The events or a subset of the events can be included in a regular report in response to a reporting action. The events or a subset of the events can be included in a monitoring report in response to a monitoring action. The events or a subset of the events can be included in a notification report in response to a notifying action. In some embodiments, an events descriptor object that is defined according to the events descriptor can be included in a status report. The events descriptor object can include an array of objects associated with events. An object associated with an event can include characteristics of the event. For example, Object-1 includes the characteristics of Event-1; Object-2 includes the characteristics of Event-2. FIG. 7 shows a table of characteristics for an event according to some embodiments of the disclosure. The table also includes an explanation of the characteristics. For example, an object associated with an event includes a name of the event, a definition of the event, and the like. FIG. 8 shows an example of an events descriptor schema according to some embodiments of the disclosure. As shown, the events descriptor can include an array of objects (items) associated with events. According to an aspect of the disclosure, reporting is an act of regularly reporting information (e.g., generating and sending regular reports) to a destination. In some embodiments, reporting can be set up in response to a request. In some examples, a requesting entity sends a request to a reporting entity. The request includes a first reporting descriptor object that is defined according to the reporting descriptor. The first reporting descriptor object can include a list of desired variables and events to report, a destination to send the report, and the like. Then, the reporting entity sends, to the destination defined in the first reporting descriptor object, the events and variable values described in the first reporting descriptor object. For example, a regular report includes a second reporting descriptor object that is similar to the first reporting descriptor object. The second reporting descriptor object can include updated values for variables and occurrence information of the events. Generally, the reporting descriptor also includes the frequency of reporting, the start time of reporting and the delivery protocol of regular reports. It is noted that the destination can be the requesting entity or can be an entity different from the requesting entity. In the media processing system (100), the requesting entity can be the NBMP source (101) or the workflow manager (103). The reporting entity can be the workflow manager (103) or media processing tasks. In an example, the workflow manager (103) can send a request, via a task API, to a media processing task (e.g., TASK 1 executed by MPE (113)) for regular reporting. The workflow manager (103) can specify the desired variables and events, the time-interval, grouping, the destination and protocol in a first reporting descriptor object in the request. Then, the media processing task can send regular reports based on the time-interval to the destination using the protocol specified in the first reporting descriptor object. A regular report includes a second reporting descriptor object that carries the information of the variables and events to report and other information, such as the time-interval, grouping, the destination, protocol and the like. 
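To make the reporting request and the resulting regular report concrete, the sketch below populates a reporting descriptor object using the variable and event characteristics described above (name, definition, unit, value, and so on). The JSON keys, the destination URL, and the variable itself are illustrative assumptions rather than the normative descriptor schema.

# First reporting descriptor object: what to report, how often, and where to send it.
reporting_request = {
    "report-type": "regular",
    "reporting-interval": 10,                          # seconds between regular reports
    "url": "http://nbmp-source.example.com/reports",   # destination for the reports
    "delivery-method": "HTTP POST",
    "variables": [                                     # requested variables (variables descriptor form)
        {"name": "frames-processed",
         "definition": "Number of video frames processed so far",
         "unit": "frame", "var-type": "integer", "value": None, "min": 0, "max": None},
    ],
    "events": [                                        # requested events (events descriptor form)
        {"name": "task-error", "definition": "The task raised a processing error"},
    ],
}

# Second reporting descriptor object carried by a regular report: same structure,
# but with updated variable values and the events that actually occurred.
regular_report = {
    **reporting_request,
    "variables": [
        {"name": "frames-processed",
         "definition": "Number of video frames processed so far",
         "unit": "frame", "var-type": "integer", "value": 14400, "min": 0, "max": None},
    ],
    "events": [],                                      # no subscribed event occurred in this interval
}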
In another example, the NBMP source (101) can set up one or more reporting schemes and can send a request, via the workflow API, to the workflow manager (103) for regular reporting. Then, the workflow manager (103) can generate regular reports, or the workflow manager (103) can request a media processing task to generate regular reports. The NBMP source (101) can specify the desired variables and events, the time-interval, grouping, the destination and protocol in a first reporting descriptor object in the request. Then, the workflow manager (103) or the media processing task can send regular reports based on the time-interval to the destination using the protocol specified in the first reporting descriptor object. A regular report includes a second reporting descriptor object that carries the information of the variables and events to report and other information, such as the time-interval, grouping, the destination, protocol and the like. In some embodiments, HTTP/1.1 is used as the protocol, and the POST method is used for sending the regular reports in POST messages. For example, a body of a POST message includes a reporting descriptor object. In some examples, the body of the POST message also includes a request descriptor object in order to identify the report. In an example, the request descriptor object includes an identification of the request, a priority, and a task identification for the media processing task. FIG. 9 shows a table of parameters in a reporting descriptor and the corresponding types for the parameters, and FIG. 10 shows an example of a reporting descriptor schema according to some embodiments of the disclosure. It is noted that system-events include events that belong to the media processing system and can be provided by the cloud in the same format as the events of the function. It is also noted that system-variables include variables that belong to the media processing system and can be provided by the cloud in the same format as the variables of the function. In some embodiments, the (function) variables can be included in a first variables descriptor object, and system variables can be included in a second variables descriptor object; the (function) events can be included in a first events descriptor object, and system events can be included in a second events descriptor object. According to another aspect of the disclosure, notification is an act of reporting information (e.g., generating and sending a notification report) to a destination when an event occurs. In some embodiments, notification is set up in response to a request. In some examples, a requesting entity sends a request to a notifying entity. The request includes a first notification descriptor object defined according to the notification descriptor. The first notification descriptor object can specify, for example, a list of desired variables and events, a destination to send the report, and the like. Then, when one or more of the events listed in the first notification descriptor object occurs, the notifying entity sends, to the destination defined in the first notification descriptor object, the events and variable values described in the first notification descriptor object. For example, a notification report includes a second notification descriptor object that is similar to the first notification descriptor object. The second notification descriptor object can include updated values for variables and occurrence information of the events. 
Generally, the notification descriptor can also include other suitable information for notification, such as the start time, notification interval, delivery protocol, and the like. It is noted that the destination can be the requesting entity or can be an entity different from the requesting entity. In the media processing system (100), the requesting entity can be the NBMP source (101) or the workflow manager (103). The notifying entity can be the workflow manager (103) or media processing tasks. In an example, the workflow manager (103) can send a request, via a task API, to a media processing task (e.g., TASK 1 executed by MPE (113)) for notification. The workflow manager (103) can specify a list of subscribed events for notification, a list of variables, the destination and protocol in a first notification descriptor object in the request. Then, when one or more events in the list of subscribed events occurs, the media processing task can send a notification report to the destination using the protocol specified in the notification descriptor. A notification report includes a second notification descriptor object that carries the information of the variables in the list of variables, the information of events in the list of subscribed events and other information. In another example, the NBMP source (101) can set up one or more notification schemes and can send a request, via the workflow API, to the workflow manager (103) for notification. Then, the workflow manager (103) can generate notification reports, or the workflow manager (103) can request a media processing task to generate notification reports. The NBMP source (101) can specify a list of subscribed events, a list of variables, the destination and protocol in a first notification descriptor object in the request. Then, when any of the events in the list of subscribed events occurs, the workflow manager (103) or the media processing task can send a notification report to the destination using the protocol specified in the notification descriptor. A notification report includes a second notification descriptor object that carries the information of the variables and the information of the events, and other information, such as the destination, protocol and the like. In some embodiments, HTTP/1.1 is used as the protocol, and the POST method is used for sending the notification reports in POST messages. For example, a body of a POST message includes a notification descriptor object. In some examples, the body of the POST message also includes a request descriptor object in order to identify the notification. In an example, the request descriptor object includes an identification of the request, a priority, and a task identification for the media processing task. FIG. 11 shows a table of parameters in a notification descriptor and the corresponding types for the parameters, and FIGS. 12A-B show an example of a notification descriptor schema according to some embodiments of the disclosure. It is noted that system-events include events that belong to the media processing system and can be provided by the cloud in the same format as the events of the function. It is also noted that system-variables include variables that belong to the media processing system and can be provided by the cloud in the same format as the variables of the function. 
In some embodiments, the (function) variables can be included in a first variables descriptor object, and system variables can be included in a second variables descriptor object; the (function) events can be included in a first events descriptor object, and system events can be included in a second events descriptor object. According to another aspect of the disclosure, monitoring is an act of an entity requesting information and receiving the requested information. In some embodiments, monitoring can be performed in response to a request. In some examples, a monitoring entity (an entity that monitors) sends a monitoring update request to a monitored entity (an entity being monitored). The monitoring update request includes a first monitoring descriptor object defined according to the monitoring descriptor. The first monitoring descriptor object can specify, for example, a list of desired variables and events. In response to the monitoring update request, the monitored entity sends to the monitoring entity a monitoring report with a second monitoring descriptor object that carries values of the requested variables and a subset of events from the desired events that have occurred. In the media processing system (100), the monitoring entity can be the NBMP source (101) or the workflow manager (103). The monitored entity can be the workflow manager (103) or media processing tasks. In an example, the workflow manager (103) can send a monitoring update request, via a task API, to a media processing task (e.g., TASK 1 executed by MPE (113)). The workflow manager (103) can specify the desired variables and events in a first monitoring descriptor object in the monitoring update request. In response to the monitoring update request, the media processing task can send a monitoring report to the workflow manager (103). The monitoring report includes a second monitoring descriptor object that carries the information of the variables and events to report. In another example, the NBMP source (101) can send a monitoring update request, via the workflow API, to the workflow manager (103). The NBMP source (101) can specify the desired variables and events in a first monitoring descriptor object in the monitoring update request. In response to the monitoring update request, the workflow manager (103) can generate a monitoring report and send it to the NBMP source (101). A monitoring report includes a second monitoring descriptor object that carries the information of the variables and events to report. FIG. 13 shows a table of parameters in a monitoring descriptor and the corresponding types for the parameters, and FIG. 14 shows an example of a monitoring descriptor schema according to some embodiments of the disclosure. It is noted that system-events include events that belong to the media processing system and can be provided by the cloud in the same format as the events of the function. It is also noted that system-variables include variables that belong to the media processing system and can be provided by the cloud in the same format as the variables of the function. In some embodiments, the (function) variables can be included in a first variables descriptor object, and system variables can be included in a second variables descriptor object; the (function) events can be included in a first events descriptor object, and system events can be included in a second events descriptor object. In some embodiments, HTTP/1.1 is used as the protocol, and the POST method is used for sending the monitoring reports in POST messages. 
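A sketch of a monitoring report delivered in the body of an HTTP POST message, as described above, is shown below. The URL and the JSON keys are illustrative assumptions; the body carries a monitoring descriptor object with the requested values, together with a request descriptor object that identifies the request, its priority and the task.

import requests

monitoring_report = {
    "request": {                          # request descriptor object identifying the report
        "request-id": "req-42",
        "priority": 1,
        "task-id": "task-1",
    },
    "monitoring": {                       # monitoring descriptor object with the requested values
        "variables": [{"name": "frames-processed", "unit": "frame", "value": 14400}],
        "events": [{"name": "task-error"}],
    },
}

# HTTP/1.1 POST delivering the monitoring report to the monitoring entity.
response = requests.post("http://workflow-manager.example.com/monitoring", json=monitoring_report, timeout=10)
print(response.status_code)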
For example, a body of a POST message includes a monitoring descriptor object. In some examples, the body of the POST message also includes a request descriptor object in order to identify the monitoring reports. FIG. 15 shows a flow chart outlining a process (1500) according to an embodiment of the disclosure. In an example, the process (1500) is executed in a cloud, such as the media processing system (100), and the like. In some embodiments, the process (1500) is implemented in software instructions; thus, when the processing circuitry executes the software instructions, the processing circuitry performs the process (1500). The process starts at (S1501) and proceeds to (S1510). At (S1510), a request including first characteristics associated with a variable is received. The first characteristics associated with the variable can fully describe the variable. For example, the first characteristics include a definition of the variable, a unit of the variable, a format of the variable, a universal identifier of the variable and the like. The variable can be understood based on the first characteristics without cross-referencing other text outside the first characteristics. In some embodiments, the first characteristics are a portion of a first variables descriptor object. The first variables descriptor object includes an array of objects respectively associated with variables. The variables can be function variables or system variables. The request can also include a first events descriptor object that includes an array of objects respectively associated with events. The events can be function events or system events. In an example, the request is sent from the NBMP source (101) to the workflow manager (103). In another example, the request is sent from the workflow manager (103) to a media processing task. At (S1520), a message is generated, and the message includes the first characteristics of the variable and an updated value of the variable. In an embodiment, the request includes a first reporting descriptor object to request regular reports. The first reporting descriptor object can include a first variables descriptor object associated with variables, a first events descriptor object associated with events, and a reporting interval. Based on the first reporting descriptor object, a message is generated regularly based on the reporting interval. The message can include a second reporting descriptor object. For example, the second reporting descriptor object includes a second variables descriptor object that is similar to the first variables descriptor object with updated values for the variables. The second reporting descriptor object in the message can also include a second events descriptor object having a subset of events based on occurrences of the events. In another embodiment, the request includes a first notification descriptor object to request a notification report that is triggered by the occurrence of events. The first notification descriptor object can include a first variables descriptor object associated with variables, and a first events descriptor object associated with events. Then, in response to an occurrence of at least one of the subscribed events, the message is generated. The message includes a second notification descriptor object that is similar to the first notification descriptor object. 
For example, the second notification descriptor object includes a second variables descriptor object that is similar to the first variables descriptor object but with updated values for the variables. The second notification descriptor object also includes a second events descriptor object having a subset of the subscribed events. In another embodiment, the request includes a first monitoring descriptor object to request a monitoring report. The first monitoring descriptor object can include a first variables descriptor object associated with variables, and a first events descriptor object associated with events. Then, in response to the request, the message is generated. The message includes a second monitoring descriptor object that is similar to the first monitoring descriptor object. For example, the second monitoring descriptor object includes a second variables descriptor object that is similar to the first variables descriptor object but with updated values for the variables. The second monitoring descriptor object also includes a second events descriptor object having a subset of the events. At (S1530), the message is sent to a recipient, and the process proceeds to (S1599) and terminates. In the example of the regular report, the first and second reporting descriptor objects include a destination (e.g., a URL), and the message is sent to the destination. In the example of the notification report, the first and second notification descriptor objects include a destination (e.g., a URL), and the message is sent to the destination. In the example of the monitoring report, the message is sent to the entity from which the request came. The process (1500) can be suitably adapted. Step(s) in the process (1500) can be modified and/or omitted. Additional step(s) can be added. Any suitable order of implementation can be used. The techniques described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. The methods and embodiments in the disclosure may be used separately or combined in any order. Further, each of the methods (or embodiments), functions or tasks may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits). In one example, the one or more processors execute a program that is stored in a non-transitory computer-readable medium. For example, FIG. 16 shows a computer system (1600) suitable for implementing certain embodiments of the disclosed subject matter. The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by one or more computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like. The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like. 
The components shown in FIG. 16 for computer system (1600) are exemplary in nature and are not intended to suggest any limitation as to the scope of use or functionality of the computer software implementing embodiments of the present disclosure. Neither should the configuration of components be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary embodiment of a computer system (1600). Computer system (1600) may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), olfactory input (not depicted). The human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), video (such as two-dimensional video, three-dimensional video including stereoscopic video). Input human interface devices may include one or more of (only one of each depicted): keyboard (1601), mouse (1602), trackpad (1603), touch screen (1610), data-glove (not shown), joystick (1605), microphone (1606), scanner (1607), camera (1608). Computer system (1600) may also include certain human interface output devices. Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste. Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen (1610), data-glove (not shown), or joystick (1605), but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers (1609), headphones (not depicted)), visual output devices (such as screens (1610) to include CRT screens, LCD screens, plasma screens, OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability—some of which may be capable of outputting two-dimensional visual output or more than three-dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted). Computer system (1600) can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW (1620) with CD/DVD or the like media (1621), thumb-drive (1622), removable hard drive or solid state drive (1623), legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like. Those skilled in the art should also understand that the term "computer readable media" as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals. Computer system (1600) can also include an interface to one or more communication networks. Networks can for example be wireless, wireline, optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. 
Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial to include CANBus, and so forth. Certain networks commonly require external network interface adapters that attach to certain general purpose data ports or peripheral buses (1649) (such as, for example, USB ports of the computer system (1600)); others are commonly integrated into the core of the computer system (1600) by attachment to a system bus as described below (for example, an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, computer system (1600) can communicate with other entities. Such communication can be uni-directional, receive only (for example, broadcast TV), uni-directional send-only (for example, CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks. Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above. Aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core (1640) of the computer system (1600). The core (1640) can include one or more Central Processing Units (CPU) (1641), Graphics Processing Units (GPU) (1642), specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA) (1643), hardware accelerators for certain tasks (1644), and so forth. These devices, along with Read-only memory (ROM) (1645), Random-access memory (1646), internal mass storage such as internal non-user accessible hard drives, SSDs, and the like (1647), may be connected through a system bus (1648). In some computer systems, the system bus (1648) can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPUs, and the like. The peripheral devices can be attached either directly to the core's system bus (1648), or through a peripheral bus (1649). Architectures for a peripheral bus include PCI, USB, and the like. CPUs (1641), GPUs (1642), FPGAs (1643), and accelerators (1644) can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM (1645) or RAM (1646). Transitional data can also be stored in RAM (1646), whereas permanent data can be stored, for example, in the internal mass storage (1647). Fast storage and retrieval from any of the memory devices can be enabled through the use of cache memory, which can be closely associated with one or more CPU (1641), GPU (1642), mass storage (1647), ROM (1645), RAM (1646), and the like. The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts. As an example and not by way of limitation, the computer system having architecture (1600), and specifically the core (1640), can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGAs, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media. 
Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core (1640) that is of a non-transitory nature, such as core-internal mass storage (1647) or ROM (1645). The software implementing various embodiments of the present disclosure can be stored in such devices and executed by core (1640). A computer-readable medium can include one or more memory devices or chips, according to particular needs. The software can cause the core (1640) and specifically the processors therein (including CPU, GPU, FPGA, and the like) to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM (1646) and modifying such data structures according to the processes defined by the software. In addition or as an alternative, the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator (1644)), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein. Reference to software can encompass logic, and vice versa, where appropriate. Reference to a computer-readable medium can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware and software. While this disclosure has described several exemplary embodiments, there are alterations, permutations, and various substitute equivalents, which fall within the scope of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise numerous systems and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope thereof.
57,457
11861412
DETAILED DESCRIPTION Specific embodiments will now be described with reference to the accompanying figures. In the below description, numerous details are set forth as examples of embodiments described herein. It will be understood by those skilled in the art, that have the benefit of this Detailed Description, that one or more embodiments of embodiments described herein may be practiced without these specific details and that numerous variations or modifications may be possible without departing from the scope of the embodiments described herein. Certain details known to those of ordinary skill in the art may be omitted to avoid obscuring the description. In the below description of the figures, any component described with regard to a figure, in various embodiments described herein, may be equivalent to one or more like-named components described with regard to any other figure. For brevity, descriptions of these components will not be repeated with regard to each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments described herein, any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure. Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements. As used herein, the phrase operatively connected, or operative connection, means that there exists between elements/components/devices a direct or indirect connection that allows the elements to interact with one another in some way. For example, the phrase ‘operatively connected’ may refer to any direct (e.g., wired directly between two devices or components) or indirect (e.g., wired and/or wireless connections between any number of devices or components connecting the operatively connected devices) connection. Thus, any path through which information may travel may be considered an operative connection. In general, embodiments described herein relate to methods, systems, and non-transitory computer readable mediums storing instructions for configuring a converged infrastructure (CI) that includes any number of CI nodes (e.g., server computing devices) and any number of network devices using a single user interface. As used herein, the term converged infrastructure, or CI, should be understood to mean any type of converged infrastructure, including, by way of example, hyper-converged infrastructure (HCl), that includes computing resources such as computing devices/nodes and network devices. 
Prior to embodiments described herein, configuring a converged infrastructure with virtualized CI nodes included, at least, configuring a virtualization environment manager (e.g., vCenter from VMware), configuring all network devices (e.g., enabling ports, enabling communication protocols, assigning Internet Protocol (IP) addresses, configuring virtual local area networks (VLANs), etc.), and configuring a client computing device from which to perform the deployment steps. All of the aforementioned steps, and others, must be performed before any CI nodes can be deployed to have a complete CI. Additionally, other tasks, such as, for example, configuring out-of-band (OOB) management, have to be performed for each node separately. Such steps require using a multitude of disparate graphical user interfaces (UIs), command line interfaces (CLIs), etc. Embodiments described herein help mitigate the complexity of CI deployment, which may reduce the time to deploy a CI and/or reduce the number of potential errors that may occur during CI deployment. In one or more embodiments, a single UI is presented which allows users to configure network devices, CI cluster nodes, OOB management for the CI nodes, virtualization environment managers, and in-band management in order to fully deploy a CI solution. In one or more embodiments, a user is presented, on the user's client device, a rendered UI screen from a deployment wizard executing on a CI cluster node that allows the user to select to configure network devices or a CI cluster that includes any number of CI nodes. In one or more embodiments, network devices must be configured prior to configuration of a CI cluster. If network devices have already been configured, a user may select to configure the CI cluster; otherwise, the user will select to configure the network devices. In one or more embodiments, if the network is to be configured via the deployment wizard, the user is presented a series of rendered UI screens on the user's client device so that the deployment wizard may obtain necessary network device configuration information. Such screens may include a screen to select step-by-step setup or to import settings from a network configuration file (e.g., a JavaScript Object Notation (JSON) file created using a pre-engagement questionnaire (PEQ)). In one or more embodiments, if the user selects step-by-step, some or all of the fields on subsequent screens may be populated with suggested entries, which the user may choose to accept or override. In one or more embodiments, if the user selects to import network device configuration information from a network device configuration information file, information from that file will be used to populate at least some of the fields in subsequent screens, which the user may view and choose to accept or override. Other UI screens rendered during the network device configuration workflow may include screens listing discovered network devices and/or ports of such network devices connected to CI nodes, virtual local area network (VLAN) configuration screens, a summary screen, and a screen allowing the user to add network information to a CI deployment file (e.g., a JSON file to be used by an application programming interface (API) to perform the cluster deployment). In one or more embodiments, once the configuration of the network devices has been completed and/or added to the CI deployment file, the deployment wizard renders a screen allowing the user to select to proceed with CI cluster deployment. 
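As a non-limiting sketch of the import path described above, the Python fragment below loads a hypothetical PEQ-derived JSON network configuration file and uses it, together with suggested defaults, to pre-populate the fields shown on subsequent screens; the specific key names are illustrative assumptions, not a format defined by the disclosure.

import json

# Hypothetical shape of a PEQ-derived network configuration file.
peq_network_json = """
{
  "network_devices": [
    {"id": "switch-1", "username": "admin"},
    {"id": "switch-2", "username": "admin"}
  ],
  "vlans": [
    {"vlan_id": 101, "description": "management", "tagged_ports": [1, 2, 3, 4]},
    {"vlan_id": 102, "description": "vmotion", "tagged_ports": [1, 2, 3, 4]}
  ]
}
"""

def prepopulate_fields(peq_text, suggestions):
    # Values imported from the PEQ file take precedence, suggested defaults fill
    # anything missing, and the user may still accept or override every field.
    fields = dict(suggestions)
    imported = json.loads(peq_text)
    fields.update({key: value for key, value in imported.items() if value})
    return fields

defaults = {"network_devices": [],
            "vlans": [{"vlan_id": 1, "description": "default", "tagged_ports": []}]}
initial_fields = prepopulate_fields(peq_network_json, defaults)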
Once the user selects to proceed with CI cluster deployment, in one or more embodiments, the user is presented a series of rendered UI screens on the user's client device so that the deployment wizard may obtain necessary CI cluster configuration information. Such screens may include, but are not limited to, a screen to select step-by-step setup or to import settings from a CI cluster configuration file (e.g., a JSON file created using a PEQ). In one or more embodiments, if the user selects step-by-step, some or all of the fields on subsequent screens may be populated with suggested entries, which the user may choose to accept or override. In one or more embodiments, if the user selects to import CI cluster configuration information from a CI cluster configuration information file, information from that file will be used to populate at least some of the fields in subsequent screens, which the user may view and choose to accept or override. Other UI screens rendered during the CI cluster configuration workflow may include screens listing discovered CI nodes, virtualization environment manager configuration screen(s), a summary screen, and a screen allowing the user to add CI cluster information to a CI deployment file (e.g., a JSON file to be used by an API to perform the cluster deployment). In one or more embodiments, once the network device configuration and CI cluster configuration are complete, the CI deployment file created by the deployment wizard is used to deploy the CI. In one or more embodiments, the configuration of the network devices includes configuration of only ports connected to CI nodes and/or that are otherwise not connected to other devices. As such, the network devices used for the CI may have at least some ports connected to other devices for other purposes, and the CI deployment described herein will not affect those ports. FIG. 1 shows a diagram of a system in accordance with one or more embodiments described herein. The system may include a converged infrastructure (CI) (100). The CI (100) may include a virtualization environment manager (118) and any number of CI nodes (e.g., CI node A (102), CI node N (110)). CI node A (102) may include a leader deployment wizard (104), processor(s) (106), and storage device(s) (108). CI node N (110) may include a non-leader deployment wizard (112), processor(s) (114), and storage device(s) (116). The system may also include a CI deployment client device (122). The CI deployment client device (122) may include a user interface display device (124). Each of these components is described below. In one or more embodiments, CI (100) is a collection of operatively connected devices configured to function together to perform computing tasks using combined resources. One non-limiting example of a CI (100) is an HCl, in which some or all of the computing resources (e.g., processing, storage, networking) may be software-defined, such that underlying hardware resources may be provisioned as needed for whatever workloads are to be performed on the HCl. In one or more embodiments, software-defined computing resources may include a virtualization layer existing above the physical hardware layer that provides the computing resources of CI (100) to higher layers, such as virtual machines, operating systems, applications, workloads, containers, etc. In one or more embodiments, CI (100) is an HCl, with the CI nodes sharing processing resources and storage and being commonly connected to one or more network devices of CI (100). 
In one or more embodiments, CI (100) is all or any portion of a data center. CI (100) may include any number of CI nodes, any number of network devices, and one or more virtualization environment managers, each of which is discussed further, below. One of ordinary skill in the art having the benefit of this Detailed Description will appreciate that CI (100) may include any number of other components and/or devices without departing from the scope of embodiments described herein. In one or more embodiments, the CI nodes (102,110) are computing devices of any type located in a common virtualization environment, such as, for example, CI (100). In one or more embodiments, a virtualization environment is any environment in which any number of computing devices, such as CI node A (102) and CI node N (110), are subject, at least in part, to a shared scheme for pooling compute resources (e.g., processors, storage devices, etc.) for use in deploying virtualized computing device instances (e.g., virtual machines (VMs), containers, virtual appliances, emulators, etc.). In one or more embodiments, the CI nodes (e.g.,102,110) within the CI (100) may be any single computing device, collection of computing devices, portion of one or more computing devices, or any other grouping of computing resources (e.g., portions of a hyper-converged infrastructure). In one or more embodiments, a computing device is any device, portion of a device, or any set of devices capable of electronically processing instructions and may include any number of components, which may include, but are not limited to, any of the following: one or more processors (e.g. components that include integrated circuitry) (106,114), memory (e.g., random access memory (RAM)) (not shown), input and output device(s) (not shown), non-volatile storage device(s) (e.g., solid-state drives (SSDs), hard disk drives (HDDs)) (108,116), one or more physical interfaces (e.g., network ports, storage ports) (not shown), any number of other hardware components (not shown), and/or any combination thereof. Examples of computing devices include, but are not limited to, a server (e.g., a blade-server in a blade-server chassis, a rack server in a rack, etc.), a desktop computer, a mobile device (e.g., laptop computer, smart phone, personal digital assistant, tablet computer, and/or any other mobile computing device), a storage device (e.g., a disk drive array, a fibre/fiber channel storage device, an Internet Small Computer Systems Interface (iSCSI) storage device, a tape storage device, a flash storage array, a network attached storage device, etc.), and/or any other type of computing device with the aforementioned requirements. In one or more embodiments, any or all of the aforementioned examples may be used or combined to create a system of such devices, which may collectively be referred to as a CI node (102,110). Other types of computing devices may be used without departing from the scope of the embodiments described herein. In one or more embodiments, one or more CI nodes (102,110) within CI (100) may be grouped together to form a CI cluster. In one or more embodiments, a CI cluster is a set of CI nodes configured to function together and share various resources to implement, at least in part, a virtualization environment. In one or more embodiments, a CI cluster may include all CI nodes (102,110) in CI (100) or any portion thereof. 
CI (100) may include any number of CI clusters of one or more CI nodes (102,110) without departing from the scope of embodiments described herein. In one or more embodiments, the storage devices (108,116) and/or memory (not shown) of a computing device or system of computing devices may be one or more data repositories for storing any number of data structures storing any amount or type of data (i.e., information). In one or more embodiments, a data repository is any type of storage unit and/or device (e.g., a file system, database, collection of tables, RAM, and/or any other storage mechanism or medium) for storing data. Further, the data repository may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical location. In one or more embodiments, any storage device (108,116) and/or memory (not shown) of a computing device or system of computing devices may be considered, in whole or in part, as non-transitory computer readable mediums storing software and/or firmware. Such software and/or firmware may include instructions which, when executed by the one or more processors (not shown) or other hardware (e.g. circuitry) of a computing device and/or system of computing devices, cause the one or more processors (106,114) and/or other hardware components to perform operations in accordance with one or more embodiments described herein. The software instructions may be in the form of computer readable program code to perform methods of embodiments as described herein, and may, as an example, be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a compact disc (CD), digital versatile disc (DVD), storage device (108,116), diskette, tape storage, flash storage, physical memory, or any other non-transitory computer readable medium. In one or more embodiments, a CI node (102,110) includes a hypervisor (not shown), which may also be referred to as a virtual machine monitor. In one or more embodiments, a hypervisor is any hardware (e.g., circuitry), software, firmware, or any combination thereof that includes functionality to manage the underlying hardware resources of a CI node (102,110), and to make the hardware resources available for use by VMs, which execute on the hypervisor. Thus, the hypervisor abstracts the underlying hardware from the VMs. In one or more embodiments, the hypervisor receives instructions from VMs (not shown) and performs the instructions using the appropriate underlying hardware (e.g., processor(s), storage, networking components, etc.). Such instructions from a VM may be altered by the hypervisor into a form appropriate for the underlying hardware. For example, the operating system of a VM may seek to execute instructions for a particular processor type, and the hypervisor may translate the instructions to a form that the actual underlying hardware processors can process. Additionally or alternatively, certain instructions from a VM may be passed through a hypervisor for execution using the underlying hardware without modification. A hypervisor may function as a hardware scheduler that schedules when instructions from various VMs will be executed on underlying hardware. For example, many VMs, each with virtual processors allocated, may require that the hypervisor schedule when the underlying hardware processors will be used to execute instructions for the VMs. 
Hypervisors may perform any other functions (e.g., provide virtual network components, virtual storage components, etc.) without departing from the scope of embodiments described herein. In one or more embodiments, a VM (not shown) is an emulation of a computing device (described above), or any portion thereof, that is abstracted from the underlying hardware of a CI node (102,110) that hosts the VM. In one or more embodiments, a VM may include functionality to perform any of the functionality of a physical computing device. For example, a VM may include an operating system in which any number of software applications exist and execute. As another example, a VM may be packaged as a virtual appliance configured to perform one or more workloads when deployed. In one or more embodiments, one or more CI nodes (102,110) also include a deployment wizard (e.g., leader deployment wizard (104), non-leader deployment wizard (112)). In one or more embodiments, a deployment wizard (104,112) is any hardware (e.g., circuitry), software, firmware, or any combination thereof that includes functionality to be accessed by a client device (discussed further, below), to render UI screens and perform other actions to obtain information related to CI (100) to create a CI deployment file, and to use the CI deployment file (e.g., via one or more APIs) to deploy a CI (100). In one or more embodiments, deploying a CI (100) includes, but is not limited to, configuring, at least in part, a CI cluster using the CI nodes (102,110), any number of network devices (120) within CI (100), a virtualization environment manager, and/or OOB functionality for the CI nodes (102,110). In one or more embodiments, one CI node (e.g.,102) among a set of CI nodes (102,110) that are to be part of a CI cluster (discussed above) is selected to be a leader using any scheme for selecting one device from a set of devices. In one or more embodiments, the leader CI node (102) includes the leader deployment wizard (104), while other CI nodes (e.g.,110) include non-leader deployment wizards (e.g.,112). Each CI node (102,110) having a deployment wizard (104,112) gives each CI node the potential to be the leader from the perspective of a deployment wizard. However, practically, only the leader deployment wizard (104) will be accessed at a given time and used to deploy CI (100). In one or more embodiments, each deployment wizard (104,112) has the same IP address, but a client device (discussed further, below) accessing that IP address will only be accessing the leader deployment wizard (104) on the CI node (102) that was elected as leader for a given CI cluster. In one or more embodiments, a CI (100) also includes a virtualization environment manager (118). In one or more embodiments, the virtualization environment manager (118) is operatively connected to the CI nodes (102,110) of the CI (100) and to network devices (e.g., network device (120)) of the CI (100). In one or more embodiments, a virtualization environment manager (118) is a computing device (described above) and/or a VM. FIG. 1 shows the virtualization environment manager (118) within CI (100) and external to the CI nodes (102,110). However, one having ordinary skill in the art will appreciate that the virtualization environment manager may alternatively execute on one of the CI nodes (102,110), or be external to and operatively connected to CI (100). 
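Because the disclosure leaves the leader-selection scheme open, the short Python sketch below shows just one illustrative choice, picking the CI node with the lowest service tag to host the active deployment wizard; the field names are assumptions made for the example.

def elect_leader(ci_nodes):
    # Any scheme for selecting one device from a set of devices may be used;
    # sorting by service tag is merely one possibility.
    return min(ci_nodes, key=lambda node: node["service_tag"])

nodes = [{"service_tag": "7XK9Q2", "ip": "192.0.2.11"},
         {"service_tag": "3BD4F8", "ip": "192.0.2.12"}]
leader = elect_leader(nodes)  # the wizard on this node answers the shared IP address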
In one or more embodiments, a virtualization environment manager (118) provides a user interface for one or more entities for managing the virtualization environment. As such, the virtualization environment manager (118) is operatively connected to the CI nodes (102,110) and network device(s) (120) of the virtualization environment, and therefore has access to information related to the CI nodes (102,110) and to the VMs executing on the CI nodes (102,110), as well as any other computing devices (e.g., storage devices, network devices, etc.) within the virtualization environment. In one or more embodiments, a virtualization environment manager (118) allows entities to view information about the computing devices and VMs of a virtualization environment, to modify aspects of the configuration of such devices and VMs, to deploy or remove VMs on the CI nodes (102,110), to configure networking and storage for the VMs, or to perform any other task(s) relevant to managing a virtualization environment deployed within the CI (100). In one or more embodiments, CI (100) includes at least one network device (120). In one or more embodiments, a network device (120) is a physical device that includes and/or is operatively connected to persistent storage (not shown), memory (e.g., random access memory (RAM)) (not shown), one or more processor(s) (e.g., integrated circuits) (not shown), and any number of physical network interface(s) (not shown), which may also be referred to as ports. Examples of a network device include, but are not limited to, a network switch, a router, a multilayer switch, etc. A network device (120) is not limited to the aforementioned specific examples. In one or more embodiments, a network device (120) also includes any number of additional components, such as, for example, network chips (not shown), field programmable gate arrays (FPGAs) (not shown), application specific integrated circuits (ASICs) (not shown), indicator lights (not shown), fans (not shown), clocks (not shown), etc. In one or more embodiments, a network device (120) includes any software configured to perform various functions of the network device. Such software may, for example, execute using one or more processors (including circuitry therein) of a network device, or any other hardware resource of a network device capable of executing software. One example of such software is an operating system (OS) (not shown). In one or more embodiments disclosed herein, an OS includes any software and/or firmware for managing the resources (e.g., hardware, other software, etc.) of one or more network devices. In one or more embodiments, a network device includes functionality to send and/or receive packets (or other network traffic data, such as, e.g., frames, etc.) at any of the physical network interfaces (i.e., ports) of the network device and to process the packets. In one or more embodiments, processing a packet includes, but is not limited to, a series of one or more table lookups (e.g., longest prefix match (LPM) lookups, forwarding equivalence class (FEC) lookups, etc.) and corresponding actions (e.g., forward from a certain egress port, add a labeling protocol header, rewrite a destination address, encapsulate, etc.). 
Examples of packet processing include, but are not limited to, performing a lookup to determine: (i) whether to take a security action (e.g., drop the network traffic data unit); (ii) whether to mirror the network traffic data unit; and/or (iii) how to route/forward the packet in order to transmit the packet from an interface of the network device (120). In one or more embodiments, the network device (120) is part of (e.g., operatively connected to) a network (not shown). A network (not shown) may refer to an entire network or any portion thereof (e.g., a logical portion of the devices within a topology of devices). A network may include a datacenter network, a wide area network, a local area network, a wireless network, a cellular phone network, or any other suitable network that facilitates the exchange of information from one part of the network to another. A network may be located at a single physical location, or be distributed at any number of physical sites. In one or more embodiments, a network may be coupled with or overlap, at least in part, with the Internet. In one or more embodiments, a network includes a collection of one or more network devices that facilitate network connectivity for one or more operatively connected devices (e.g., the CI nodes (102,110) of CI (100)). In one or more embodiments, the network devices and other devices within the network are arranged in a network topology (not shown). In one or more embodiments, a network topology is an arrangement of various devices of a network. AlthoughFIG.1shows a single network device (120) within CI (100), there may be any number of network devices in a CI (100). For example, having two or more network devices creates redundancy that is not available with a single network device, and, as such, may be desirable. In one or more embodiments, only certain ports of network device (120) are operatively connected to CI nodes within the CI, while other ports are operatively connected to other devices for purposes that may or may not be related to the CI (100). Accordingly, when a deployment wizard (104) is used to configure the network device(s), only the ports connected to devices within the CI (and, optionally, ports with no connection) should be configured, while ports connected to the other devices should not be affected. In one or more embodiments, the deployment wizard is configured to identify the ports that should be configured for the CI and to exclude from configuration ports that are connected to other devices. In one or more embodiments, configuring ports on a network device for a CI includes pushing out a CI profile to the network devices, which may, among other things, create a unified fabric among network device ports configured with such a profile. In one or more embodiments, the system also includes a CI deployment client device (122). In one or more embodiments, the CI deployment client device (122) is a computing device (described above). In one or more embodiments, the CI deployment client device (122) is operatively connected to CI (100) (e.g., via a network of which network device (120) is a part). In one or more embodiments, the CI deployment client device (122) includes functionality to access a leader deployment wizard (104) (discussed above). In one or more embodiments, a CI deployment client device (122) is operated by a user seeking to deploy a CI (100). In one or more embodiments, the CI deployment client device (122) includes a user interface display device (124). 
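The port-selection behavior described above can be sketched in a few lines of Python; the data shapes (a mapping from port number to the discovered neighbor, if any) are assumptions for illustration rather than an interface defined by the disclosure.

def ports_to_configure(all_ports, ci_node_ids):
    # Keep only ports connected to CI nodes (or connected to nothing), so that
    # ports serving devices outside the CI are left untouched.
    selected = []
    for port, neighbor in sorted(all_ports.items()):
        if neighbor is None or neighbor in ci_node_ids:
            selected.append(port)
    return selected

discovered = {1: "ci-node-a", 2: "ci-node-n", 3: "backup-server", 4: None}
ci_ports = ports_to_configure(discovered, {"ci-node-a", "ci-node-n"})  # [1, 2, 4]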
In one or more embodiments, a user interface display device (124) is any hardware (e.g., circuitry), software, firmware, or any combination thereof of the CI deployment client device (122) that includes functionality to display a UI to a user when deploying a CI (100). As an example, the user interface display device (124) may be a browser application executing within an operating system of a CI deployment client device. In one or more embodiments, a user uses the user interface display device (124) to access the leader deployment wizard (104) of a CI node (102) using an IP address of the leader deployment wizard (104). In one or more embodiments, the user interface display device renders user interface screens for a user using information obtained from a leader deployment wizard (104). In one or more embodiments, the UI screens are presented to the user in a sequential manner based on selections made by a user during a CI deployment. In one or more embodiments, there may be various paths of sequential UI screens after a given UI screen presents the user with two or more choices from amongst which the user selects one. For example, a certain UI screen may request that a user select to configure either networking for a CI (100) or a CI cluster for the CI (100). In one or more embodiments, if no networking has yet been configured for the CI (100), a user chooses to configure networking first in order to properly deploy a CI (100), and is then presented with a series of UI screens for configuring various aspects of a network and network devices for the CI (100). In one or more embodiments, once the networking for a CI (100) is configured, either using the deployment wizard (104) or by being previously otherwise configured, the deployment wizard may return to the UI screen that allows the user to select to configure the CI cluster, and, after CI cluster configuration is selected, the user is presented with various UI screens to configure various aspects of a CI cluster (e.g., the CI nodes, a virtualization environment manager, OOB management functionality, etc.) In one or more embodiments, the workflow embodied by the various UI screens may be progressed through in any logical order that is capable of leading to a successful deployment of a CI (100). The CI deployment process using a leader deployment wizard accessed via a user interface display device of a CI deployment client device is discussed further in the descriptions ofFIG.2AandFIG.2B, below. WhileFIG.1Ashows a configuration of components, other configurations may be used without departing from the scope of embodiments described herein. For example, there may be any number of CI nodes within a CI. As another example, there may be any number of network devices within a CI. As another example, the virtualization environment manager may execute on one of the CI nodes. As another example, the virtualization environment manager may exist external to and operatively connected to the CI, and may or may not be managing any number of additional virtualization environments other than the virtualization environment deployed within the CI. Accordingly, embodiments disclosed herein should not be limited to the configuration of components shown inFIG.1A. FIG.2Ashows a flowchart describing a method for converged infrastructure (CI) deployment in accordance with one or more embodiments disclosed herein. 
While the various steps in the flowchart shown inFIG.2Aare presented and described sequentially, one of ordinary skill in the relevant art, having the benefit of this Detailed Description, will appreciate that some or all of the steps may be executed in different orders, that some or all of the steps may be combined or omitted, and/or that some or all of the steps may be executed in parallel. In Step200, a request is received to initiate a CI deployment. In one or more embodiments, the request is received by a user using a client device to access an IP address of a leader deployment wizard on a CI node. For example, a user of a client device may open a browser on the client device and enter the IP address of the deployment wizard into the address bar of the browser. In Step202, in response to receiving the request in Step200, the deployment wizard initiates a back-end discovery process to discover all relevant hardware details for devices (e.g., CI nodes, network devices, etc.) that are to be part of the CI being deployed. As an example, hypertext transfer protocol (HTTP) requests may be sent to the various devices seeking responses with at least a portion of the requested information. As another example, API calls may be used to obtain at least a portion of the information. Information that may be obtained as part of the discovery process may include, but is not limited to, CI node model names, CI node identifiers (e.g., service tags, serial numbers, etc.), CI node details (e.g., network components, storage components, storage capacity, ports and port identifiers, hypervisor information, etc.), network device information (e.g., network device model, network device identifiers, any configured VLANs, etc.), which network device ports are connected to CI nodes, any pre-existing network device configuration, etc. In Step204, the deployment wizard creates a CI deployment file using the CI information set obtained in Step202. In one or more embodiments, a CI deployment file is any one or more files of any type that will later be used to perform the deployment of the CI. In one or more embodiments, the CI deployment file created in Step204is only partially complete, and subsequent steps will add additional information to the CI deployment file to complete it prior to deployment of the CI. As an example, a CI deployment file may be a JSON file designed to be used with an API to perform the deployment. In Step206, the deployment wizard renders a UI screen that includes selection mechanisms for selecting to configure network devices or to configure a CI cluster from the CI nodes. As an example, there may be a UI button that a user may navigate to (e.g., using a pointing device, keyboard, etc.) that, if pressed, selects to configure network devices, and another button that a user may navigate to and press to select to configure a CI cluster. In one or more embodiments, the selection UI screen may include any other content without departing from the scope of embodiments described herein. For example, the UI screen may include a welcome statement, visual representations of devices in the CI, etc. In Step208, a determination is made as to whether the network devices for the CI have already been configured. In one or more embodiments, the network devices must be configured prior to configuration of the CI cluster, including having ports connected to the CI configured with a CI profile, having necessary VLANs configured, having necessary network addresses configured, etc. 
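A minimal sketch of Steps 202 and 204 is shown below in Python, assuming each device exposes an HTTP inventory endpoint; the endpoint path, response fields, and file layout are hypothetical, and API calls could equally be used, as noted above.

import json
import urllib.request

def discover_device(address):
    # Query one device for hardware details (model, identifiers, ports, etc.).
    # The /api/inventory path is a placeholder, not an actual product endpoint.
    with urllib.request.urlopen(f"http://{address}/api/inventory", timeout=5) as resp:
        return json.loads(resp.read())

def create_initial_deployment_file(addresses, path):
    # Write a partially complete CI deployment file (a JSON file in this sketch);
    # later steps add network device and CI cluster configuration to it.
    deployment = {"ci_nodes": [], "network_devices": [], "cluster": None}
    for address in addresses:
        info = discover_device(address)
        key = "network_devices" if info.get("type") == "switch" else "ci_nodes"
        deployment[key].append(info)
    with open(path, "w") as f:
        json.dump(deployment, f, indent=2)
    return deployment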
In one or more embodiments, if the network devices have not yet been so configured, then the user must select to configure the network devices, and the method proceeds to Step210. If, on the other hand, the network devices have previously been so configured using any other configuration scheme, the user may select to configure the CI cluster, and the method proceeds toFIG.2B. In Step210, the deployment wizard provides the client device information for rendering a series of UI screens to obtain information relating to the network device configuration. In one or more embodiments, one screen in the series of screens requests that the user select whether they want to perform a step by step configuration, or if they want to import network information from a pre-engagement questionnaire (PEQ) file (e.g., a JSON file created using information obtained via a PEQ from the entity seeking to deploy the CI) that was previously completed and that includes some or all of the necessary information to complete the network device configuration. In one or more embodiments, if step by step configuration is selected, then subsequent UI screens may be pre-populated with information that represents suggested configuration choices. In one or more embodiments, if the option to import network information is selected, then subsequent UI screens will be populated, at least in part, with information from the PEQ file. The subsequent UI screens will also include relevant information obtained by the deployment wizard during the discovery process performed in Step202. One example of a subsequent screen is a screen that lists each network device discovered during the discovery process, and one or more identifiers and a model number for each device. Other information may be displayed for the user on this screen (or any other screen) without departing from the scope of embodiments described herein. Such a screen may allow the user to view all or any portion of the network devices to be included in the CI, and request a username and password for the selected network devices. In one or more embodiments, in response to receiving a username and password for the selected network devices, the deployment wizard may verify whether the information allows the network device(s) to be accessed, and to display whether or not verification was successful to the user via the UI screen on which the username and password were entered. Another example of a subsequent screen rendered during network device configuration for the CI is a VLAN assignment screen. Such a screen may display suggested VLANs (e.g., five VLANs) and suggested ports to be included in the VLANs as tagged or untagged ports. Alternately, the VLAN assignment screen(s) may be populated, at least in part, using information from the PEQ file. In one or more embodiments, via the UI screens rendered by the deployment wizard, a user may be given the ability to modify or edit all or any portion of the information relevant to network device configuration, such as, for example, VLAN number, VLAN description, port assignments to a VLAN, etc. Another example of a subsequent screen that may be rendered during the network device configuration is a summary screen reflecting the planned network device configuration based on information gained via and/or displayed on the network configuration screens, and any other network device configuration to be performed by the deployment wizard. 
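For the credential check mentioned above, a sketch along the following lines could be used to report success or failure back to the UI screen; the login URL and basic-authentication scheme are assumptions for the example, not details taken from the disclosure.

import base64
import urllib.error
import urllib.request

def verify_switch_credentials(address, username, password):
    # Return True if the supplied username and password allow the network device
    # to be accessed, so the screen can display a validation result.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(f"https://{address}/login",
                                     headers={"Authorization": f"Basic {token}"})
    try:
        with urllib.request.urlopen(request, timeout=5):
            return True
    except urllib.error.URLError:
        return False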
Such a screen may also indicate whether a validation of the planned network device configuration was successful. One of ordinary skill in the art, having the benefit of this detailed disclosure, will appreciate that any other UI screens for obtaining and/or displaying network device information for use in network device configuration may be used without departing from the scope of embodiments described herein. In Step212, network device information obtained by the deployment wizard is added to the CI deployment file to later be used to configure the network devices during deployment of the CI. Once the network device configuration information has been added to the CI deployment file, the method proceeds toFIG.2B. FIG.2Bshows a flowchart describing a method for CI deployment in accordance with one or more embodiments disclosed herein. While the various steps in the flowchart shown inFIG.2Bare presented and described sequentially, one of ordinary skill in the relevant art, having the benefit of this Detailed Description, will appreciate that some or all of the steps may be executed in different orders, that some or all of the steps may be combined or omitted, and/or that some or all of the steps may be executed in parallel. In Step220, a screen is rendered for the user that has selection mechanisms for selecting to configure either network devices or a CI cluster. If the network devices were already configured prior to the user accessing the deployment wizard, then this may be the first rendering of the screen to the user, and the user must select to configure the CI cluster to proceed with the deployment. Alternatively, if the network devices were not configured for the CI prior to accessing the deployment wizard, this might be the second rendering of the screen for the user, who has already been through the network device configuration portion of the deployment wizard (seeFIG.2A). The selection screen is discussed in the description of Step206ofFIG.2A, above. In either case, the user now uses the selection mechanism for configuring the CI cluster to proceed with the CI deployment. In Step222, the deployment wizard provides the client device information for rendering a series of UI screens to obtain information relating to the CI cluster configuration. In one or more embodiments, one screen in the series of screens requests that the user select whether they want to perform a step by step configuration, or if they want to import CI cluster information from a PEQ file (e.g., a JSON file created using information obtained via a PEQ from the entity seeking to deploy the CI) that was previously completed and that includes some or all of the necessary information to complete the CI cluster configuration. In one or more embodiments, if step by step configuration is selected, then subsequent UI screens may be pre-populated with information that represents suggested configuration choices. In one or more embodiments, if the option to import CI cluster information is selected, then subsequent UI screens will be populated, at least in part, with information from the PEQ file. The subsequent UI screens will also include relevant information obtained by the deployment wizard during the discovery process performed in Step202forFIG.2A. One example of a subsequent screen is a screen that lists each CI node discovered during the discovery process, and one or more identifiers and a model number for each device. 
Other information may be displayed for the user on this screen (or any other UI screen) without departing from the scope of embodiments described herein. Such a screen may allow the user to select all or any portion of the CI nodes to be included in the CI cluster. A user may select all CI nodes discovered, or any portion thereof. In one or more embodiments, if only a portion of the CI nodes is selected, the user may later access the deployment wizard to create one or more additional clusters from the CI nodes that were not selected during the previous CI deployment. Another example of a subsequent screen rendered during CI cluster configuration for the CI is a virtualization environment manager screen. Such a screen may include a mechanism to select to join the CI cluster to an existing virtualization environment manager, which may or may not be managing other virtualization environments. In one or more embodiments, if a selection is made to join the CI to an existing virtualization environment manager, then the screen may include fields that are populated with relevant information identifying the virtualization environment manager and its access information (e.g., username and password), which may be populated by the user or may be derived from the PEQ file if one was imported at the start of the CI cluster configuration. In one or more embodiments, if a user wants to use a new virtualization environment manager for the CI, then the user may be given the opportunity to provide various items of information (or accept suggestions for such information), which may include, but is not limited to, a datacenter name, a cluster name for the CI cluster being configured, a virtualization environment manager password, etc. As with all configuration screens presented by the deployment wizard, all or any portion of the pre-populated information may be editable by the user during the CI cluster configuration. In one or more embodiments, the ability of the deployment wizard to receive information from a virtualization environment manager and/or, during the CI deployment, transmit information to the virtualization environment manager creates a bi-directional communication channel, which may reduce the number of touchpoints (i.e., separate interfaces) used for CI deployment and thereby reduce potential errors in such deployments. Another example of a subsequent UI screen presented by the deployment wizard during the CI cluster configuration is one or more UI screens for configuring one or more optional features. In one or more embodiments, an optional feature is one that is not required to be configured for a successful CI deployment, though it may be recommended. In one or more embodiments, one such feature is OOB management for the CI nodes. Instead of a user having to access each CI node independently to configure OOB management, in one or more embodiments, the deployment wizard presents a UI screen that allows the user to configure all the CI nodes by allowing OOB configuration information to be auto-filled, by accepting the OOB management configuration information that was obtained from the PEQ file for the CI cluster, or by entering the information. 
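The auto-fill option for OOB management can be sketched as below, assuming the starting IP address, subnet mask, and gateway fields enumerated in the following paragraph; the structure of the returned plan is an illustrative assumption, and every value would remain editable on the rendered screen.

import ipaddress

def autofill_oob_addresses(ci_nodes, start_ip, subnet_mask, gateway):
    # Assign consecutive OOB management addresses to the selected CI nodes,
    # starting from a user-supplied (or PEQ-supplied) starting address.
    start = int(ipaddress.ip_address(start_ip))
    plan = []
    for offset, node in enumerate(ci_nodes):
        plan.append({"node": node,
                     "oob_ip": str(ipaddress.ip_address(start + offset)),
                     "subnet_mask": subnet_mask,
                     "gateway": gateway})
    return plan

oob_plan = autofill_oob_addresses(["node-1", "node-2", "node-3"],
                                  start_ip="192.0.2.50",
                                  subnet_mask="255.255.255.0",
                                  gateway="192.0.2.1")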
In one or more embodiments, such information may include, but is not limited to, a starting and ending IP address (i.e., an IP address range) for the OOB management interfaces of the CI nodes, subnet mask(s), gateway address(es), username(s), password(s), plugin information, and physical location information (e.g., datacenter name, aisle name, rack name, room name, etc.). Another example of a subsequent screen that may be rendered during the CI cluster configuration is a summary screen reflecting a list of CI nodes selected to be part of the cluster, and relevant information about the nodes. Such information may include, but is not limited to, identifiers of the CI nodes, hypervisor hostnames, network addresses, and OOB management addresses. In one or more embodiments, the information for each CI node may also include an order number. In one or more embodiments, an order number reflects an ordering of the CI nodes based on any ordering scheme. For example, the deployment wizard may order the CI nodes from lowest to highest service tag number. In one or more embodiments, the user may change the order number using the edit functionality provided on the summary screen to reflect an order that the user prefers. Examples of preferred ordering schemes include, but are not limited to, top of rack to bottom of rack, bottom of rack to top of rack, etc. In Step224, CI cluster configuration information obtained by the deployment wizard is added to the CI deployment file to later be used to configure the CI cluster during deployment of the CI. In Step226, the CI is deployed using the CI deployment file. In one or more embodiments, the CI deployment file is used with one or more APIs to configure the network devices, CI nodes, virtualization environment manager, and optional OOB management settings to deploy the CI. The deployment process may include the choices made and/or agreed to by the user during the use of the deployment wizard, as well as any other configuration steps necessary to successfully deploy the CI (e.g., pushing out a CI profile for each network device port connected to the CI). In one or more embodiments, the capability for pushing out network device port profiles from the deployment wizard on a CI node to the relevant switch port eliminates the need for a user to have to access each network device to be included in the CI and enable the appropriate network device port profile for the ports connected to the CI nodes of the CI. Although not shown inFIG.2AorFIG.2B, at various times during the deployment wizard's execution, information that the user has agreed to and/or accepted may be stored (e.g., cached), which may allow a user to pause the deployment and later come back to it, may protect the information in the event some error occurs that causes the deployment to have to stop at some point, etc. Additionally, the CI deployment file generated at the end of the use of the deployment wizard may be made available to provide information for future CI deployments. For example, if a user has ten CI nodes, and only used five of them in a CI cluster during the present deployment, the user may want to configure another CI using a CI cluster with the remaining five CI nodes. In such a scenario, the user may import information from the previous CI deployment file during the network device configuration and/or CI cluster configuration portions of the deployment configuration. 
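Step 226 can be sketched as a loop over the sections of the completed CI deployment file, with each section handed to a deployment API; the api object and its method names below are hypothetical placeholders standing in for whatever APIs the deployment wizard actually invokes.

import json

def deploy_ci(deployment_file_path, api):
    # Drive the deployment from the completed CI deployment file.
    with open(deployment_file_path) as f:
        plan = json.load(f)
    for switch in plan.get("network_devices", []):
        api.configure_network_device(switch)   # push the CI profile, VLANs, addresses
    for node in plan.get("ci_nodes", []):
        api.configure_ci_node(node)            # hypervisor, in-band, and OOB settings
    if plan.get("cluster"):
        api.register_cluster(plan["cluster"])  # hand the new cluster to the
                                               # virtualization environment manager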
Example Scenario The following example is for explanatory purposes only and not intended to limit the scope of embodiments described herein. Additionally, while the example shows certain aspects of embodiments described herein, all possible aspects of such embodiments may not be illustrated in this particular example. Consider a scenario in which a company purchases a hyper-converged infrastructure (HCl) solution that includes ten CI nodes and two network devices in order to have redundancy for the network of the HCl. The network devices are to be used for the CI, but also for general network functionality for a variety of other devices. The ten CI nodes will be connected to ports one through twenty of each network device (i.e., two ports per node). Once the devices are physically installed in an appropriate one or more rack(s) and properly connected to each other and to a power source, an administrator must then configure the HCl. In this scenario, the administrator did not complete a PEQ for a CI cluster, and so does not have any CI cluster PEQ files to import during the HCl configuration. However, the administrator did complete a PEQ for the network devices, and thus has a PEQ file to import during the network device configuration portion of the HCl deployment. First, the administrator obtains a laptop to be used as a client device that has a supported browser. The administrator opens the browser and enters the IP address of the deployment wizard into the address bar, which accesses the leader deployment wizard for the HCl. The deployment screen displays a welcome page that informs the administrator that they have accessed the deployment wizard, which will guide them through the HCl deployment process. The network devices have not been configured for the HCl, though they have otherwise been configured for devices connected to ports other than the ports to which the CI nodes are connected. Once the administrator accesses the deployment wizard, the deployment wizard performs a discovery process to obtain relevant hardware information for the network devices and the CI nodes, including what ports of the network devices are actually connected to the CI nodes, and uses the information to generate an initial CI deployment file that will be added to as the administrator navigates through the deployment wizard. On the welcome screen, the administrator navigates to the button to select starting the network device configuration. Next, the administrator is presented with a screen that lists the two network devices by an ID number and model number, and fields for the administrator to enter the usernames and passwords for the two network devices. Once the administrator enters the usernames and passwords, the screen is updated to display a successful validation that the deployment wizard can access the network devices. The administrator then selects to move forward with the network device configuration. In one or more embodiments, the next screen presented to the user requests that the user select a step by step set up, or to import network device configuration information from a PEQ file. As the administrator has such a file, the administrator selects that option and provides the location of the file, which the deployment wizard uses to obtain network device configuration information. The deployment wizard then presents a series of screens detailing the network device configuration information obtained from the PEQ file, and gives the administrator the chance to edit the information. 
The administrator does not choose to edit any of the information, and instead accepts the information on each screen by navigating to the next screen. One of the screens displays the five VLANs that are to be configured by showing the VLAN number and description obtained from the PEQ file. Each VLAN is also shown as having ports1-20assigned, as these are the only ports connected to the CI nodes. The remaining ports on the network devices, which are connected to other devices outside the HCl, are not affected by the HCl deployment process. After all screens have been reviewed by the administrator, a summary page is presented, reflecting the planned HCl network device configuration and an indication that the configuration was successfully validated. Once the administrator reviews this screen, the administrator selects to continue the HCl configuration, which causes the deployment wizard to add the network device configuration information to the CI deployment file. The deployment wizard then displays the welcome page again, and the administrator uses the appropriate button to select to configure the CI cluster. The first screen of the CI cluster deployment lists the ten CI nodes. The administrator selects five of them to be in a first cluster. The deployment wizard then presents a series of screens with CI cluster information pre-populated with suggested values for various items of information, such as names, network addresses, etc. for the CI nodes. The administrator does not choose to edit any of the information, and instead accepts the information on each screen by navigating to the next screen. One of the screens requests that the administrator select whether to add the CI cluster to an existing virtualization environment manager, which, in this scenario, is vCenter. The administrator already has a vCenter instance managing other virtualization environments, and therefore elects to use that vCenter. Accordingly, the administrator enters identifying information, a username, and a password for the vCenter instance, as well as information identifying the new cluster (e.g., a chosen cluster name) so that vCenter may prepare for the addition of the new cluster. The administrator is also presented with a screen that allows the user to configure OOB management for the CI nodes. The administrator selects to allow the deployment wizard to autofill the OOB management information. After all screens have been reviewed by the administrator, a summary page is presented, reflecting the planned HCl cluster configuration and an indication that the configuration was successfully validated. Once the administrator reviews this screen, the administrator selects to continue the HCl configuration, which causes the deployment wizard to add the CI cluster configuration information to the CI deployment file. At this point, the HCl is ready to be deployed. Therefore, the administrator selects the option for the deployment wizard to perform the deployment of the HCl. 
In response, the deployment wizard uses necessary APIs and the CI deployment file to deploy the HCl by pushing out CI profiles to ports1-20on the network devices, assigning addresses to the ports, configuring required communication protocols for the ports, enabling the ports, adding the VLANs for the ports, providing vCenter with the necessary information to add the CI cluster, configuring the CI nodes with the CI cluster information, including configuring OOB management for the five nodes of the CI cluster, and, finally, validating that the deployment was successful. Once the validation is complete, the HCl solution is ready to be used. Accordingly, the administrator then begins the post deployment tasks of adding the VMs and virtual appliances the company needs to perform the workloads for which the HCl was purchased. In one or more embodiments, now that the administrator has completed a deployment of a HCl including a CI cluster with five of the ten CI nodes purchased by the company, the CI deployment file may be re-used, at least in part, to facilitate deployment of another HCl that includes all or any portion of the remaining five CI nodes as a CI cluster in the HCl. As discussed above, embodiments of the invention may be implemented using computing devices.FIG.3shows a diagram of a computing device in accordance with one or more embodiments of the invention. The computing device (300) may include one or more computer processors (302), non-persistent storage (304) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage (306) (e.g., a hard disk, an optical drive such as a compact disc (CD) drive or digital versatile disc (DVD) drive, a flash memory, etc.), a communication interface (312) (e.g., Bluetooth® interface, infrared interface, network interface, optical interface, etc.), input devices (310), output devices (308), and numerous other elements (not shown) and functionalities. Each of these components is described below. In one embodiment of the invention, the computer processor(s) (302) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing device (300) may also include one or more input devices (310), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device. Further, the communication interface (312) may include an integrated circuit for connecting the computing device (300) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device. In one embodiment of the invention, the computing device (300) may include one or more output devices (308), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (302), non-persistent storage (304), and persistent storage (306). Many different types of computing devices exist, and the aforementioned input and output device(s) may take other forms. 
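Returning to the deployment step at the start of this example, the following is a minimal sketch of how a wizard might walk a CI deployment file (using the hypothetical layout sketched earlier) and push configuration out through device and node APIs. The client classes and method names (SwitchClient, push_port_profile, NodeClient, configure) are placeholders invented for illustration, not the APIs of any specific product.

class SwitchClient:
    """Stand-in for a network device API client."""
    def __init__(self, address: str):
        self.address = address

    def push_port_profile(self, port: int, profile: dict) -> None:
        print(f"[{self.address}] port {port}: applied profile {profile}")

class NodeClient:
    """Stand-in for a CI node configuration API client."""
    def __init__(self, address: str):
        self.address = address

    def configure(self, settings: dict) -> None:
        print(f"[{self.address}] configured node with {settings}")

def deploy(deployment: dict) -> None:
    # Push a CI profile to every switch port connected to a CI node, so the
    # user does not have to log in to each network device individually.
    for device in deployment["network_devices"]:
        switch = SwitchClient(device["id"])
        vlan_ids = [v["vlan_id"] for v in device["vlans"]]
        for port in device["ports"]:
            switch.push_port_profile(port, {"vlans": vlan_ids})

    # Apply cluster settings, including OOB management, to each CI node.
    for node in deployment["cluster"]["nodes"]:
        NodeClient(node["mgmt_ip"]).configure(
            {"hostname": node["hypervisor_hostname"], "oob_ip": node["oob_ip"]}
        )

# Usage (with the file contents loaded beforehand):
#   import json
#   deploy(json.load(open("ci_deployment.json")))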
The problems discussed above should be understood as being examples of problems solved by embodiments of the invention and the invention should not be limited to solving the same/similar problems. The disclosed invention is broadly applicable to address a range of problems beyond those discussed herein. While embodiments described herein have been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this Detailed Description, will appreciate that other embodiments can be devised which do not depart from the scope of embodiments as disclosed herein. Accordingly, the scope of embodiments described herein should be limited only by the attached claims.
58,839
11861413
Throughout the description, similar reference numbers may be used to identify similar elements. DETAILED DESCRIPTION It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment. Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. Turning now toFIG.1, a block diagram of a cloud system100in accordance with an embodiment of the invention is shown. The cloud system100may be a public cloud platform, which allows entities, such as organizations and enterprises, to use the platform to run their applications in separate cloud-based computing environments. For ease of description, the cloud system100is shown to include one cloud-based computing environment102and an autoscaler104. In a particular implementation, the cloud-based computing environment102may be a VMware Cloud Organization of a VMware Cloud™ on AWS (VMC on AWS) and the autoscaler104may be a feature provided as part of the VMC on AWS. 
As shown inFIG.1, the cloud-based computing environment102includes one or more software-defined data centers (SDDCs)106, which each includes one or more clusters108of host computers. In an embodiment, each SDDC106is a collection of bare-metal host computers, which may be installed with various software. In this embodiment, each SDDC106is running atop dedicated hardware, i.e., bare-metal host computers. The SDDCs106are described in more detail below. The SDDCs106in the cloud-based computing environment102are supported by a pool110of reserved resource instances112, which, in this embodiment, are host computers. These reserved resource instances112may be provisioned to the cloud-based computing environment102as needed in the various clusters108of host computers. Thus, the reserved resource instances112are not part of the cloud-based computing environment102until they are requested and provisioned to the cloud-based computing environment. The number of reserved resource instances112in the pool110that can be provisioned to the cloud-based computing environment102may be based on a subscription, which may define a period of time for the subscription and the cost per reserved resource instance, in addition to the number of reserved resource instances112contracted for the cloud-based computing environment. When the reserved resource instances112are exhausted for the cloud-based computing environment102, i.e., there are no more reserved resource instances in the pool, on-demand resource instances114, e.g., on-demand host computers, may be requested from the cloud system100and provisioned to the cloud-based computing environment102. However, the on-demand resource instances114are typically more costly than the reserved resource instances112. Thus, a cost-effective approach to maintaining the cloud-based computing environment102is to reduce the use of on-demand resource instances114whenever possible. Turning now toFIG.2, an SDDC200that can be deployed in the cloud-based computing environment102in accordance with an embodiment of the invention is illustrated. As shown inFIG.2, the SDDC200includes one or more clusters202of host computer systems (“hosts”)204. In an embodiment, each cluster202share resources, such as memory, central processing unit (CPU) and storage, and can be managed as a single entity. The hosts204in the clusters202may be constructed on a server grade hardware platform206, such as an x86 architecture platform. As shown, the hardware platform206of each host204may include conventional components of a computing device, such as one or more processors (e.g., CPUs)208, system memory210, a network interface212, and storage214. The processor208can be any type of a processor commonly used in servers. The memory210is volatile memory used for retrieving programs and processing data. The memory210may include, for example, one or more random access memory (RAM) modules. The network interface212enables the host204to communicate with other devices that are inside or outside of the SDDC200. The network interface212may be one or more network adapters, also referred to as a Network Interface Card (NIC). The storage214represents one or more local storage devices (e.g., one or more hard disks, flash memory modules, solid state disks and optical disks), which may be used together with storages from other hosts in the same cluster to form a virtual storage area network (vSAN)216. 
Each host204may be configured to provide a virtualization layer that abstracts processor, memory, storage and networking resources of the hardware platform206into the virtual computing instances, e.g., virtual machines218, that run concurrently on the same host. The virtual machines218run on top of a software interface layer, which is referred to herein as a hypervisor220, that enables sharing of the hardware resources of the host by the virtual machines. One example of the hypervisor220that may be used in an embodiment described herein is a VMware ESXi™ hypervisor provided as part of the VMware vSphere® solution made commercially available from VMware, Inc. The hypervisor220may run on top of the operating system of the host or directly on hardware components of the host. For other types of virtual computing instances, the host may include other virtualization software platforms to support those virtual computing instances, such as Docker virtualization platform to support “containers”. In the illustrated embodiment, the hypervisor220includes a logical network (LN) agent222, which operates to provide logical networking capabilities, also referred to as “software-defined networking” (SDN). Each logical network may include software managed and implemented network services, such as bridging, L3 routing, L2 switching, network address translation (NAT), and firewall capabilities, to support one or more logical overlay networks in the SDDC200. The logical network agent222receives configuration information from a logical network manager224(which may include a control plane cluster) and, based on this information, populates forwarding, firewall and/or other action tables for dropping or directing packets between the virtual machines218in the host204and other virtual computing instances on other hosts, as well between the virtual machines218in the host204and devices outside of the SDDC200. Collectively, the logical network agent222, together with other agents on other hosts, according to their forwarding/routing tables, implement isolated overlay networks that can connect arbitrarily selected virtual machines or other virtual computing instances with each other. Each virtual machine or virtual computing instance may be arbitrarily assigned a particular logical network in a manner that decouples the overlay network topology from the underlying physical network. Generally, this is achieved by encapsulating packets at a source host and decapsulating packets at a destination host so that virtual machines on the source and destination can communicate without regard to underlying physical network topology. In a particular implementation, the logical network agent222may include a Virtual Extensible Local Area Network (VXLAN) Tunnel End Point or VTEP that operates to execute operations with respect to encapsulation and decapsulation of packets to support a VXLAN backed overlay network. In alternate implementations, VTEPs support other tunneling protocols such as stateless transport tunneling (STT), Network Virtualization using Generic Routing Encapsulation (NVGRE), or Geneve, instead of, or in addition to, VXLAN. The SDDC200also includes a virtualization manager226that manages the clusters202of hosts204. In an embodiment, the virtualization manager226is a computer program that resides and executes in a computer system, such as one of the hosts204, or in a virtual computing instance, such as one of the virtual machines218running on the hosts204. 
One example of the virtualization manager226is the VMware vCenter Server® product made available from VMware, Inc. The virtualization manager226is configured to carry out administrative tasks for the clusters of hosts in the SDDC200, which may include monitoring resource utilizations (e.g., CPU, memory and storage utilizations) in the clusters, managing the hosts in the clusters, managing the virtual machines running on the hosts in the clusters, provisioning virtual machines, migrating virtual machines from one host to another host, and load balancing between the hosts in the clusters. As noted above, the SDDC200also includes the logical network manager224(which may include a control plane cluster), which operates with the logical network agents222in the hosts204to manage and control logical overlay networks in the SDDC. Logical overlay networks comprise logical network devices and connections that are mapped to physical networking resources, e.g., switches and routers, in a manner analogous to the manner in which other physical resources as compute and storage are virtualized. In an embodiment, the logical network manager224has access to information regarding physical components and logical overlay network components in the SDDC200. With the physical and logical overlay network information, the logical network manager224is able to map logical network configurations to the physical network components that convey, route, and filter physical traffic in the SDDC200. In one particular implementation, the logical network manager224is a VMware NSX™ manager running on any computer, such as one of the hosts204or a virtual machine218in the SDDC200. Turning back toFIG.1, the autoscaler104operates to automatically scale out and scale in the clusters108of hosts in the different SDDCs106to provide an elastic cluster feature for the cloud-based computing environment102. A scale-out operation on a cluster is an operation to add resources to the cluster when one or more resource utilizations, e.g., CPU, memory and storage, exceeds scale-out resource utilization thresholds. In an embodiment, a cluster is scaled out when any of the resource utilizations consistently remain above the scale-out resource utilization thresholds. A scale-in operation on a cluster is an operation to remove or release resources from the cluster when one or more resource utilizations, e.g., CPU, memory and storage, fall below scale-in resource utilization thresholds. In an embodiment, a cluster is scaled in when all the resource utilizations are consistently below the scale-in resource utilization thresholds. In an embodiment, the resources that are being removed for scale-in operations and added for scale-out operations are host computers. However, in other embodiments, these resources may be other type of physical resources, such as storage devices, or virtual resources, such as virtual compute, memory and/or storage resources. In an embodiment, the autoscaler is implemented as software running in the cloud system100. In addition, the autoscaler104provide an enhanced elastic cluster feature, which makes the best use of the reserved resource instances112, e.g., reserved host computers, and reduces the use of on-demand resource instances114, e.g., on-demand host computers. 
Specifically, whenever one or more resource utilizations of a particular cluster in the cloud-based computing environment102exceeds the corresponding scale-out resource utilization thresholds and the reserved resource instances112for the cloud-based computing environment102have been exhausted, the autoscaler104checks the resource utilizations of all the clusters in the SDDCs106of the cloud-based computing environment before adding one of the on-demand resource instances114. Using aggressive scale-in resource utilization thresholds, which are higher than the standard scale-in resource utilization thresholds, the autoscaler104then performs a scale-in operation on any other cluster in the cloud-based computing environment102whose resource utilizations are below the aggressive scale-in resource utilization thresholds to make a reserved resource instance112available for scale-out use in order to avoid using on-demand resources, e.g., adding a new on-demand resource instance114. In an embodiment, the autoscaler104performs an aggressive scale-in operation on a cluster only if all the utilization values (e.g., storage, CPU and memory) are below the aggressive scale-in resource utilization thresholds. This makes sure that the clusters are not overcommitted and that there is no performance degradation. The aggressive scale-in thresholds can be slightly higher than the scale-in resource utilization thresholds, which automatically trigger a scale-in operation under normal scaling conditions, i.e., when reserved resource instances are available. Examples of standard scale-in resource utilization thresholds and aggressive scale-in resource utilization thresholds are illustrated in the following table.

Resource    Standard Scale-in Resource      Aggressive Scale-in Resource
            Utilization Threshold           Utilization Threshold
Storage     20%                             35%
CPU         60%                             65%
Memory      60%                             65%

In an embodiment, the autoscaler104may create a buffer of reserved resource instances112when the reserved resource instances for the cloud-based computing environment102have been exhausted. This means, when a scale-out recommendation is generated for a cluster in the cloud-based computing environment, the autoscaler checks the number of available reserved resource instances. If there is only one reserved resource instance remaining, the autoscaler proceeds with using the last reserved resource instance for a scale-out operation. In parallel, the autoscaler will scan the other clusters in the cloud-based computing environment to check if one or more aggressive scale-in operations can be performed to release more reserved resource instances into a pool of resource instances based on the buffer number, i.e., the desired number of reserved resource instances that are available for future use, which can be a predefined value set by a user. This way, when the next scale-out recommendation comes in, there will already be at least one reserved resource instance available in the pool, thus reducing the time for the scale-out operation. In addition, this approach reduces the dependency of a scale-out operation of one cluster on a scale-in operation of another cluster. In an embodiment, when all the reserved resource instances112have run out for the cloud-based computing environment102, for the clusters falling in the aggressive scale-in resource utilization thresholds, the autoscaler104can give priority to a cluster that already has an on-demand resource instance114, e.g., an on-demand host computer.
In this way, if possible, the on-demand resource instance is reused in the cluster that needs to be scaled out and provisioning a new additional on-demand resource instance is avoided. In an embodiment where the resource instances are host computers, if the cluster to be scaled out and the cluster with the lowest resource utilization (i.e., the cluster to be scaled in) are in the same SDDC106of the cloud-based computing environment102, the autoscaler104will just move a host computer from one cluster to the other since all the host computers in the SDDC will be at the same version. This will save the time required in releasing an instance, i.e., an existing host computer in the cluster to be scaled in, and provisioning a new cloud instance, a new host computer, for the cluster to be scaled out. This approach will especially be useful to reduce the recovery time objective (RTO) when the workloads spike during disaster recovery, which causes scale-out operations. There are two major advantages of the enhanced elastic cluster feature provided by the autoscaler104. The first major advantage is the cost effectiveness of the feature. By making effective use of the reserved resource instances112, the use of on-demand resources is avoided unless it is absolutely necessary. This helps to save on the extra cost required for on-demand resources. The second major advantage is the time efficiency of the feature. Consider a situation where there is a cluster with four (4) host computers and 60 Terabyte (TB) storage capacity and the aggressive scale-in resource utilization threshold for storage is set to 35%. The storage utilization of the cluster is 35%. So, there is approximately 5 TB (35% of 15 TB) of data on each host computer. Evacuating 1 TB of data takes maximum of 30 minutes. Thus, evacuating 5 TB of data will take maximum of 150 minutes or two and a half hours. Based on these calculations, releasing a host computer can take up to 160 minutes and provisioning a new host computer can take 20 minutes. So, the total time required to perform a scale-in operation first and then a scale-out operation can be around 180 minutes. If both the clusters (the cluster to be scaled out and the cluster to be scaled in) are in different SDDCs106of the cloud-based computing environment102, the autoscaler104can go ahead with removing the host computer from one SDDC and provisioning a new one in the other SDDC. However, if both the clusters are in the same SDDC, depending upon the use case, there are two options. The first option is to reuse the host computer without the overhead of cleaning up or re-imaging the host computer and just move the host computer from the low utilization cluster (the cluster to be scaled in) to the high utilization cluster (the cluster to be scaled out). This will save approximately 25 minutes since the autoscaler does not have to remove or provision a new host computer in the cloud-based computing environment. The second option is to just remove the host computer and provision a new host computer in the cloud-based computing environment if the host computer needs to be cleaned up and re-imaged before reusing it. This is because the cleaning up and re-imaging a host computer can increase the time to reuse the host computer by around 30 minutes. Thus, in this case, removing the host computer and provisioning a new host computer in the cloud-based computing environment would be more time efficient. 
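The timing trade-off discussed above can be summarized in a small back-of-the-envelope helper. This is only a sketch using the illustrative figures from the example (30 minutes per TB of data evacuated, roughly 20 minutes to provision a host, roughly 30 minutes to re-image one, and about 10 minutes of release overhead implied by the 160-minute release figure); none of these constants are authoritative.

EVACUATION_MINUTES_PER_TB = 30   # figure used in the example above
RELEASE_OVERHEAD_MINUTES = 10    # implied: ~160 min release minus ~150 min evacuation
PROVISION_MINUTES = 20
REIMAGE_MINUTES = 30

def choose_strategy(data_tb_on_host: float, same_sddc: bool, needs_reimage: bool):
    """Return ('move' | 'replace', estimated_minutes).

    'move' reuses the freed host directly in the scale-out cluster;
    'replace' releases it and provisions a new host instead.
    """
    evacuation = data_tb_on_host * EVACUATION_MINUTES_PER_TB
    replace = evacuation + RELEASE_OVERHEAD_MINUTES + PROVISION_MINUTES
    if not same_sddc:
        # Across SDDCs the host cannot simply be moved, so release and provision.
        return "replace", replace
    move = evacuation + (REIMAGE_MINUTES if needs_reimage else 0)
    return ("move", move) if move < replace else ("replace", replace)

# The two options discussed above, for roughly 5 TB of data per host:
print(choose_strategy(5, same_sddc=True, needs_reimage=False))  # ('move', 150)
print(choose_strategy(5, same_sddc=True, needs_reimage=True))   # ('replace', 180)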
FIG.3Aillustrates the autoscaling operation executed by the autoscaler104when both the cluster to be scaled out and the cluster to be scaled in are in the same SDDC in the cloud-based computing environment102in accordance with an embodiment of the invention. As shown inFIG.3Afor this example, the cloud-based computing environment includes two SDDCs106A and106B. The SDDC106A includes three clusters C1-C3 of host computers. Resource utilizations for the clusters C1-C3 are shown in the following table:

Resource    Cluster C1    Cluster C2    Cluster C3
Storage     55%           75%           25%
CPU         75%           75%           60%
Memory      60%           60%           62%

As shown inFIG.3A, the SDDC106B includes two clusters C4-C5 of host computers. Resource utilizations for the clusters C4-C5 are shown in the following table:

Resource    Cluster C4    Cluster C5
Storage     30%           60%
CPU         60%           68%
Memory      62%           80%

Also shown inFIG.3Ais a pool310of unused or available reserved resource instances112, e.g., available reserved host computers, for the cloud-based computing environment102. The reserved resource instances in the pool are all the available reserved resource instances that are currently not being used in any of the clusters in the cloud-based computing environment. In this illustrated example, the high resource utilizations in the cluster C2 generate a scale-out recommendation by the autoscaler104using the scale-out resource utilization thresholds, as indicated by the arrow330. In an embodiment, the autoscaler may initiate an autoscaling operation based on a predefined schedule, e.g., every 5 minutes. As part of the autoscaling operation, requests for current resource utilizations of all the clusters in the cloud-based computing environment102are made by the autoscaler, which may be processed by virtualization managers (not shown) in the SDDCs106A and106B. The received resource utilization values for the clusters are then compared to the scale-out resource utilization thresholds to make scale-out recommendations for clusters with high resource utilizations, which, in the illustrated example, resulted in a scale-out recommendation for the cluster C2. In response to the scale-out recommendation for the cluster C2, the autoscaler104checks the pool310to see if any reserved resource instances112are available, as indicated by the arrow332. If one or more unused reserved resource instances are available, the autoscaler will execute a scale-out operation on the cluster C2, which will involve adding one unused reserved resource instance to the cluster C2. However, if unused reserved resource instances are exhausted, the autoscaler checks the resource utilizations of all the clusters in the cloud-based computing environment102using the aggressive scale-in resource utilization thresholds, as indicated by the arrow334, to find clusters that can be scaled in. In this example, the clusters that can be scaled in using the aggressive scale-in resource utilization thresholds are the clusters C3 and C4, and the cluster with the lowest resource utilizations is the cluster C3, which happens to be in the same SDDC106A as the cluster to be scaled out, i.e., the cluster C2. Thus, in this case, the autoscaler104will remove a host computer204from the cluster C3 (the cluster being scaled in), as indicated by the arrow336. The removed host computer is then added to the cluster C2 (the cluster being scaled out), as indicated by the arrow338. Thus, in this example, a host computer is moved from the cluster C3 (the cluster being scaled in) to the cluster C2 (the cluster being scaled out).
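The candidate-selection logic walked through in this example can be sketched compactly. The data structures and threshold values below simply restate the figures from the tables above; the code itself is an illustration, not the autoscaler's actual implementation.

AGGRESSIVE_SCALE_IN = {"storage": 35, "cpu": 65, "memory": 65}

clusters = {
    # cluster name -> (SDDC, {resource: utilization %}), per the FIG. 3A tables
    "C1": ("sddc-106a", {"storage": 55, "cpu": 75, "memory": 60}),
    "C2": ("sddc-106a", {"storage": 75, "cpu": 75, "memory": 60}),  # needs scale-out
    "C3": ("sddc-106a", {"storage": 25, "cpu": 60, "memory": 62}),
    "C4": ("sddc-106b", {"storage": 30, "cpu": 60, "memory": 62}),
    "C5": ("sddc-106b", {"storage": 60, "cpu": 68, "memory": 80}),
}

def scale_in_candidates(scale_out_cluster: str):
    """Clusters whose every utilization is under the aggressive thresholds,
    ordered so that same-SDDC clusters and lower total utilization come first."""
    target_sddc = clusters[scale_out_cluster][0]
    matches = [
        name for name, (_, util) in clusters.items()
        if name != scale_out_cluster
        and all(util[r] < AGGRESSIVE_SCALE_IN[r] for r in AGGRESSIVE_SCALE_IN)
    ]
    return sorted(
        matches,
        key=lambda n: (clusters[n][0] != target_sddc, sum(clusters[n][1].values())),
    )

print(scale_in_candidates("C2"))  # ['C3', 'C4']: C3 preferred (same SDDC, lowest utilization)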
In some embodiments, the selection of the host computer to be removed from the cluster C3 may be made by the autoscaler104or the virtualization manager (not shown) in the SDDC106A. FIG.3Billustrates the autoscaling operation executed by the autoscaler104when the cluster to be scaled out and the cluster to be scaled in are in different SDDCs in the cloud-based computing environment102in accordance with an embodiment of the invention. As shown inFIG.3B, the cloud-based computing environment again includes the two SDDCs106A and106B and the pool310of unused or available reserved resource instances112, which were described above. In this example, the resource utilizations for the clusters C1-C3 are shown in the following table:

Resource    Cluster C1    Cluster C2    Cluster C3
Storage     55%           75%           30%
CPU         75%           75%           60%
Memory      60%           60%           62%

The resource utilizations for the clusters C4-C5 are shown in the following table:

Resource    Cluster C4    Cluster C5
Storage     25%           60%
CPU         60%           68%
Memory      62%           80%

In this illustrated example, similar to the example shown inFIG.3A, the high resource utilizations in the cluster C2 generate a scale-out recommendation by the autoscaler104using the scale-out resource utilization thresholds, as indicated by the arrow340. In response, the autoscaler again checks the pool310to see if any unused reserved resource instances112are available, as indicated by the arrow342. If one or more unused reserved resource instances are available, the autoscaler will execute a scale-out operation on the cluster C2, which will involve adding one unused reserved resource instance to the cluster C2. However, if unused reserved resource instances are exhausted, the autoscaler checks the resource utilizations of all the clusters in the cloud-based computing environment102using the aggressive scale-in resource utilization thresholds, as indicated by the arrow344, to find clusters that can be scaled in. In this example, the clusters that can be scaled in using the aggressive scale-in resource utilization thresholds are the clusters C3 and C4, and the cluster with the lowest resource utilizations is the cluster C4, which happens to be in a different SDDC, i.e., the SDDC106B, from the cluster to be scaled out, i.e., the cluster C2. Thus, in this case, the autoscaler104will remove and release a host computer204from the cluster C4 (the cluster being scaled in), as indicated by the arrow346, which results in one reserved resource instance being available in the cloud-based computing environment102. Next, a new host computer, i.e., the now-available reserved resource instance, is added to the cluster C2 (the cluster being scaled out), as indicated by the arrow348. Thus, in this example, a host computer is released from the cluster C4 (the cluster being scaled in) and a new host computer is provisioned to the cluster C2 (the cluster being scaled out). In some embodiments, the selection of the host computer to be removed from the cluster C4 may be made by the autoscaler104or the virtualization manager (not shown) in the SDDC106B. In an embodiment, the autoscaler104may use various parameters to select which target cluster in the cloud-based computing environment102can be scaled in to accommodate the scale-out of a cluster of host computers in the cloud-based computing environment with high resource utilizations. These parameters may be set or modified by an administrator of the cloud-based computing environment so that the enhanced elastic cluster feature can be customized as needed.
Some of these parameters are as follows: Aggressive Scale-In Resource Utilization Thresholds These thresholds are used when reserved resource instances112are needed in the cloud-based computing environment102. As described above, these thresholds may include thresholds for storage, CPU and memory, which may be customized by an administrator of the cloud-based computing environment. The use of these thresholds is further explained below using examples. Consider a scale-out scenario in the cloud-based computing environment102where the reserved resource instances112have been exhausted. The storage utilization of a first cluster in the cloud-based computing environment has exceeded 70% (exceeding the scale-out utilization threshold for storage) and a second cluster in the cloud-based computing environment has storage utilization of 28%. Let's assume that the standard scale-in resource utilization threshold for storage is 20%, which means that a cluster is scaled in, i.e., a host computer is removed, when the storage utilization goes below 20%. Let's further assume that the aggressive scale-in resource utilization threshold for storage is set to 35%. In this scale-out scenario, the autoscaler104will check the resource utilizations of other clusters in all the SDDCs in the cloud-based computing environment. The storage utilization of the second cluster (28%) is more than the standard scale-in threshold (20%) but less than the aggressive scale-in threshold (35%). Thus, in this case, the autoscaler can remove a host computer from the second cluster and reuse this instance or use a new available instance to scale out the first cluster. As mentioned above, there can be similar aggressive scale-in resource utilization thresholds for memory as well as CPU. This will make the enhanced elastic cluster feature of the autoscaler104more cost efficient with optimal use of resources. Among these three resources, storage may be a hard resource constraint and may have a higher priority over CPU and memory when determining the cluster to be scaled in. In order to make these parameter more flexible, default aggressive scale-in resource utilization thresholds may be set for every SDDC106in the cloud-based computing environment102and an administrator can customize the aggressive scale-in resource utilization thresholds for every cluster in each SDDC based on the workloads running on the clusters. Cluster Priority This parameter allows an administrator of the cloud-based computing environment102to set priority to the clusters based on the workloads running on the clusters. For example, for clusters with test workloads, the priority for these clusters can be set “LOW”, and for cluster with production workloads, the priority for the clusters can be set “HIGH”. With these settings, during a scale-in operation, the autoscaler104will select one of the “LOW” priority clusters first so as to avoid affecting the “HIGH” priority clusters with higher priority workloads, e.g., production workloads. In an embodiment, the options for this parameter may be “LOW”, “MEDIUM” and “HIGH”, where “LOW” priority clusters will be selected for scale in over “MEDIUM” priority clusters, and “MEDIUM” priority clusters will be selected for scale in over “HIGH” priority clusters. 
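As a small illustration of the cluster priority parameter just described, the sketch below orders matching scale-in candidates so that "LOW" priority clusters are picked before "MEDIUM", and "MEDIUM" before "HIGH". The data layout and numbers are assumptions made for the example.

PRIORITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

matching_clusters = [
    {"name": "C3", "priority": "LOW",  "total_utilization": 147},
    {"name": "C4", "priority": "HIGH", "total_utilization": 152},
]

def order_by_priority(candidates):
    # Scale in LOW before MEDIUM before HIGH; break ties on total utilization.
    return sorted(candidates,
                  key=lambda c: (PRIORITY_RANK[c["priority"]], c["total_utilization"]))

print([c["name"] for c in order_by_priority(matching_clusters)])  # ['C3', 'C4']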
Data Utilization This parameter allows an administrator of the cloud-based computing environment102to set the autoscaler104to take into consideration the data present on the host computers in order to remove or move host computers from one cluster to another cluster in the cloud-based computing environment102as quick as possible. With this parameter enabled, the autoscaler will pick the “cheapest” host computer to move/remove based on the amount of vSAN data on the host computer, which will reduce the time required to transfer the data for the move/removal. Cost Vs Time This parameter allows an administrator of the cloud-based computing environment102to have the flexibility to select between a time based priority or a cost based priority for certain applicable autoscaling situations. For example, consider a situation where there are no buffer or available reserved resource instances and a scale-out event is generated for one of the clusters. If none of the other clusters fit the aggressive scale-in criteria, the autoscaler104would go ahead and add an on-demand resource instance. However, if the autoscaler finds a cluster which can be scaled in, this parameter allows the user the flexibility to decide whether the user prefers cost (the scale out will wait until a host computer is made available by the scale in) or time (the scale out will go ahead and add an on demand host and the scale in will simultaneously release a reserved instance for future scale out). Existing On-Demand Resource Instance The effects of this parameter when enabled are illustrated using two cases. In the first case, a scale-out recommendation for a first cluster in the cloud-based computing environment102is generated by the autoscaler104, but all the other clusters have resource utilizations higher than the aggressive scale-in resource utilization thresholds and there are no unused reserved instances available. In this case, the autoscaler will provision an on-demand resource instance in the first cluster. When the next scale-out recommendation for a second cluster in the cloud-based computing environment is generated, if the resource utilization values of the first cluster meets the aggressive scale-in resource utilization thresholds, the autoscaler will reuse the on-demand resource instance from the first cluster in the second cluster. In the second case, a scale-out recommendation for the first cluster is generated by the autoscaler104, but there is one available reserved resource instance and there is another cluster with an on-demand resource instance that meets the aggressive scale-in criteria. In this case, the autoscaler will go ahead with the scale-out of the first cluster using the available reserved resource instance and simultaneously release the on-demand resource instance from the other cluster to be more cost effective. SDDC Priority This parameter allows an administrator of the cloud-based computing environment102to set the autoscaler to prefer host computers being moved from one cluster to another cluster within the same SDDC rather than across different SDDCs. For example, if there are two clusters in different SDDCs that meet all the aggressive scale-in criteria, the autoscaler will pick the cluster based on the SDDC to which the cluster to be scaled out belongs. In other words, for scale in, priority will be given to the cluster which belongs to the same SDDC as the cluster which will be scaled out. 
This makes moving the host computer from one cluster to another cluster faster since moving a host computer within the SDDC is faster than moving a host computer from one SDDC to another SDDC. An autoscaling operation on the cloud-based computing environment102performed by the autoscaler104in accordance with an embodiment of the invention is described with reference to a process flow diagram shown inFIGS.4A and4B. The autoscaling operation begins at step402, where a scale-out event is generated for a particular cluster in the cloud-based computing environment by the autoscaler. In an embodiment, a scale-out event is a scale-out recommendation that is generated for a cluster when the autoscaler determines that one or more resource utilizations of the cluster exceed scale-out resource utilization thresholds. The values for the resource utilizations may be received from the virtualization manager of the SDDC to which the cluster belongs. Next, at step404, a determination is made by the autoscaler104whether a reserved resource instance is required to scale out. This requirement may be a user-set policy. If a reserved resource instance is not required, then the operation proceeds to step414. However, if a reserved resource instance is required, then the operation proceeds to step406, where a determination is made by the autoscaler104whether there is a reserved resource instance available for the cloud-based computing environment102. If a reserved resource instance is not available, the operation proceeds to step410. However, if a reserved resource instance is available, the operation proceeds to step408, where a scale-out operation is started on the cluster to be scaled out. The operation then proceeds to step410. At step410, other clusters in the cloud-based computing environment102are examined by the autoscaler104to find clusters falling in the aggressive scale-in resource utilization thresholds. In an embodiment, the clusters falling in the aggressive scale-in resource utilization thresholds are clusters with resource utilization values for storage, CPU and memory that are all below the corresponding aggressive scale-in resource utilization thresholds. Next, at step412, a determination is made whether any matching clusters, i.e., any clusters falling in the aggressive scale-in resource utilization thresholds, are found. If no matching clusters are found, the operation proceeds to step414. However, if matching clusters are found, the operation proceeds to step420. At step414, a determination is made by the autoscaler104whether an on-demand resource instance is allowed to be used to scale out the cluster. This requirement may be a user-set policy. If the use of an on-demand resource instance is allowed, the operation proceeds to step416, where the cluster is scaled out using an on-demand resource instance. The operation then comes to an end. However, if the use of an on-demand resource instance is not allowed, the operation proceeds to step418, where the event is rejected and the operation then comes to an end. It is noted here that a scale-out operation on the same cluster will likely be retried during the next cycle, e.g., in 5 minutes, with the assumption that another scale-out event will be generated for the same cluster. At optional step420(after one or more matching clusters have been found), any matching clusters with “HIGH” priority are removed from a list of matching clusters. 
In other embodiments, both “HIGH” and “MEDIUM” priority clusters may be removed so that only “LOW” priority clusters are considered. Next, at step422, a candidate cluster to scale in is selected by the autoscaler104based on vSAN data and priority on SDDC level. In other embodiments, the selection of the candidate cluster may be based solely on resource utilizations, e.g., lowest among the matching clusters, or based on vSAN data or priority on SDDC level, which are user-selected parameters. Next, at step424, a determination is made whether a candidate cluster has been found. If no candidate cluster has been found, then the operation proceeds to step414. However, if a candidate cluster has been found, then the operation proceeds to step426, where the candidate cluster is scaled in, which results in a resource instance, e.g., a host computer, being removed or released from the candidate cluster. Next, step428, a determination is made by the autoscaler104whether the reason for the scale in is for a scale-out event or creating a buffer reserved resource instance, i.e., a reserved resource instance available for future use. If the reason for the scale in is creating a buffer reserved resource instance, the operation proceeds to step430, where no further action is taken by the autoscaler. The operation then comes to an end. However, if the reason for the scale in is for a scale-out event, the operation proceeds to step432, where the reserved/on-demand resource instance released from the candidate cluster is reused to scale out if a scale-out operation has not already started. The operation then proceeds to step434. At step434, a determination is made by the autoscaler104whether enough reserved resources instances are in the buffer, i.e., the pool of available reserved resource instances. If there is not enough reserved resource instances in the buffer, then the operation proceeds back to step410to try to make a reserved resource available by scaling in a candidate cluster using the aggressive scale-in resource utilization thresholds. However, if there is enough reserved instances in the buffer, the operation proceeds to step436, where no further action is taken by the autoscaler. The operation then comes to an end. In some embodiments, the autoscaler104may initiate actions that require checking for reserved resource instances in the buffer, at step438. As an example, in an embodiment, a buffer check may be repeatedly initiated to determine whether sufficient reserved instances are in the buffer. As another example, in an embodiment, a demand prediction across workloads running in the cloud-based computing environment102may be initiated, which requires checking to ensure that there are sufficient reserved resource instances in the buffer for the predicted demand when needed. In these embodiments, the operation proceeds to step434to determine whether enough reserved resources instances are in the buffer so that one or more additional reserved resource instances can be added to the buffer using the aggressive scale-in resource utilization thresholds, as described above. A computer-implemented method for autoscaling clusters of host computers in a cloud-based computing environment in accordance with an embodiment of the invention is described with reference to a process flow diagram ofFIG.5. At block502, a scale-out recommendation is generated for a cluster of host computers in the cloud-based computing environment. 
At block504, the cloud-based computing environment is checked for any available reserved resource instances in response to the scale-out recommendation. At block506, when a number of available reserved resource instance for the cloud-based computing environment is below a predefined value, the cloud-based computing environment is searched for any target clusters of host computers to scale in based on at least one resource utilization using an aggressive scale-in resource utilization threshold that is greater than a corresponding standard scale-in resource utilization threshold. At block508, when at least one target cluster of host computer is found, a scale-in operation is executed on a candidate cluster of host computers selected from the at least one target cluster of host computers to remove an existing resource instance from the candidate cluster of host computers. At block510, a scale-out operation is executed on the cluster of host computers using an available resource instance for the cloud-based computing environment. Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner. It should also be noted that at least some of the operations for the methods may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. As an example, an embodiment of a computer program product includes a computer useable storage medium to store a computer readable program that, when executed on a computer, causes the computer to perform operations, as described herein. Furthermore, embodiments of at least portions of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-useable or computer-readable medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device), or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disc. Current examples of optical discs include a compact disc with read only memory (CD-ROM), a compact disc with read/write (CD-R/W), a digital video disc (DVD), and a Blu-ray disc. In the above description, specific details of various embodiments are provided. However, some embodiments may be practiced with less than all of these specific details. In other instances, certain methods, procedures, components, structures, and/or functions are described in no more detail than to enable the various embodiments of the invention, for the sake of brevity and clarity. 
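To summarize the method of FIG. 5 described above, the following condensed sketch maps blocks 502-510 to code. The cloud_env object and its methods (available_reserved_instances, find_aggressive_scale_in_targets, select_candidate, scale_in, scale_out) are hypothetical placeholders used only to make the control flow concrete.

def autoscale(cloud_env, cluster, buffer_size: int = 1) -> None:
    # Block 502: a scale-out recommendation has been generated for `cluster`.

    # Block 504: check for available reserved resource instances.
    reserved = cloud_env.available_reserved_instances()

    if len(reserved) < buffer_size:
        # Block 506: search for target clusters whose utilizations fall under
        # the aggressive scale-in thresholds (higher than the standard ones).
        targets = cloud_env.find_aggressive_scale_in_targets(exclude=cluster)
        if targets:
            # Block 508: scale in a selected candidate to free a resource instance.
            candidate = cloud_env.select_candidate(targets, near=cluster)
            cloud_env.scale_in(candidate)

    # Block 510: scale out the requesting cluster using an available resource
    # instance (reserved where possible, on-demand only as a last resort).
    cloud_env.scale_out(cluster)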
Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.
45,069
11861414
DETAILED DESCRIPTION Described herein are methods and systems for enabling users such as enterprises to scale their networks to the cloud and more efficiently enable the migration of their networks from on-premises data centers to virtualized systems. In some embodiments, efficient transformation of system calls between partitions is enabled, such as from a local master-slave architecture to a hardware isolated distributed architecture. In many virtualized environments, a root partition may be configured with various system capabilities (e.g., networking intelligence, security) which may also be replicated in guest partitions. This may result in inefficiencies because of duplication of tasks in both the root and guest partitions. In some implementations, some functions performed in the kernel space may be invoked using system calls such as Input/Output Controls (IOCTLs) to access drivers and other functions. In some scenarios, two system calls may be generated, one in the guest partition and one in the root partition, resulting in duplicated effort. Embodiments are disclosed for abstracting the interfaces for making system calls such that the details of whether calls are made by the root partition or a guest partition are hidden from the caller. By abstracting the interfaces, instead of repeating code to implement the system calls, a cross partition system call is implemented. In one embodiment, a shim is implemented so that the system call may be agnostic as to which partition it is coming from. In one embodiment, techniques are described for converting the conventions such that multiple guest partitions may use a single trusted partition for executing the system call. In one embodiment, when applications in guest partitions make a system call, the system call is intercepted and makes the call over a cross hypervisor partition transport mechanism such as, but not limited to, a hyper-socket crossing a hypervisor partition. The other end of such a transport terminates at a trusted service inside a trusted root partition. In another embodiment, a designated trusted guest partition may be created that is different from an application guest partition and that is trusted and different from the root partition. The trusted service may then perform the following: authenticate the call as coming from a trusted guest partition; validate the message buffer and any data relevant for the system call; perform the system call on behalf of the guest partition; and return the results of such a system call back to the guest partition, or further process the call, such as by sending the message on the network. The benefits of the described techniques include, but are not limited to, reduced CPU cost and the ability to perform trusted activities such as encryption on only a trusted partition and not on guest partitions. Functions that can utilize such services include Hyper-V sockets, which are Windows Sockets with a specialized endpoint for targeting virtual machines, and environments tailored for safely running applications in isolated environments such as an isolated container. FIG.1illustrates one example of a master-slave, controller-enforcer architecture that may be converted in accordance with some embodiments. In some cases, it may be desirable for one VM to be the master and other VMs to be the slaves. The master may be implemented in a highly secure VM with a secure isolation boundary.
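The cross-partition flow described above can be modeled, purely for illustration, with the Python sketch below: a guest-side shim forwards what would have been a local IOCTL over a cross-partition transport, and a trusted service in the root partition authenticates the caller, validates the buffer, performs the call, and returns the result. All names here are invented, the in-process queue merely stands in for a hyper-socket, and a real implementation would live in kernel-mode components rather than Python.

import queue

transport = queue.Queue()          # stand-in for a hyper-socket crossing the hypervisor boundary
TRUSTED_GUEST_IDS = {"guest-1"}    # partitions allowed to issue cross-partition calls

def guest_ioctl_shim(guest_id: str, ioctl_code: int, input_buffer: bytes) -> bytes:
    """Guest-side shim: forwards the call instead of executing it locally."""
    reply = queue.Queue()
    transport.put((guest_id, ioctl_code, input_buffer, reply))
    trusted_service_step()          # in reality the root partition services the request
    return reply.get()

def trusted_service_step() -> None:
    """Root-partition trusted service: authenticate, validate, perform, reply."""
    guest_id, ioctl_code, input_buffer, reply = transport.get()
    if guest_id not in TRUSTED_GUEST_IDS:        # authenticate the calling partition
        reply.put(b"ERROR: untrusted partition")
        return
    if not isinstance(input_buffer, bytes):      # validate the message buffer
        reply.put(b"ERROR: bad buffer")
        return
    # Perform the system call on behalf of the guest (simulated here) and
    # return the result over the same transport.
    reply.put(b"OK:" + ioctl_code.to_bytes(4, "little") + input_buffer)

print(guest_ioctl_shim("guest-1", 0x222004, b"payload"))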
Various embodiments enable configurations and/or policies that take existing master-slave architectures and distribute the architecture without requiring code changes, even when the original code assumes one machine with one interface. Existing boundaries or APIs may be distributed across VMs and containers, allowing encapsulation and avoiding forking of code versions. In some embodiments, an abstraction layer may be implemented that abstracts the details of the system call interface. This allows parsing to be performed in an isolated VM or container. For example, for Wifi controls, individual VMs may not have direct access to the access point and only the root partition does. The techniques allow for these details to be abstracted, allowing for greater coding efficiencies and enabling system optimizations such as load balancing. Various embodiments provide a way to declare policies and convert local device IO control and IO request packet (IRP) communication. Embodiments implement techniques for describing the device type/path and host process identity of communication channels. Further embodiments enable negotiation of version information across the isolation boundary, conversion of the communication to a machine to machine communication in a network, conversion of the communication to a VM to VM communication on a single host, conversion of the communication to a host to VM communication on a single host, and conversion of the conventions to a single master to multiple slave architecture. FIG.2illustrates an example architecture of a master-slave or controller-slave/enforcer, including an IO manager DeviceIoCtrl( ) API210. In one embodiment, the DeviceIoCtrl API may be implemented with functionality that enables code to be agnostic of the underlying system call details. The functionality may include a policy based on device type/path and process identity that defines the scope of this communication. The policy may further specify the remote binding on which this connection should be made (across a hypervisor, across a network, across RDMA, etc.). FIG.3illustrates how the pattern illustrated inFIG.2may be transformed to a hardware isolated distributed architecture across two machines. The transport mechanism (e.g., Hyper-V socket) crosses partitions, and the connection may be abstracted. FIG.4illustrates how the VM to VM communication in a single architecture can be transformed.FIG.5illustrates how the host to VM communication in a single host can be transformed.FIG.6illustrates how the VM to host communication in a single host architecture can be transformed.FIG.7illustrates how, with some extensions to DeviceIoCtrl and minimal handling of error cases, the architecture may be transformed to a single master/controller to multiple slave/enforcers. In one embodiment, the above described transformations may be performed by RemoteDeviceIoCtrl changes on the DeviceIoCtrl( ) API. Additionally, one or more of the following components may be implemented: The services or processes controller may be a user mode process that is the master or controller of the architecture. The driver slave or driver may be a kernel component that performs the slave or enforcer functions. DeviceIoCtrl( ). This is an example intercept API (a single call) that performs the function of sending and receiving an input buffer, output buffer, status code, error code, and IOCTL number. RemoteDeviceIoCtrl( ).
This component may include functionality and components, one running in the master and one running in the slave machine/VM, that allow the transformation from a local master/controller—slave/enforcer architecture to a hardware isolated distributed master/controller—slave/enforcer architecture. Machine Controller—this may be the machine where the controller process is running. Machine Slave—this component may be the machine where the slave/enforcer driver software is running. VM Controller—The VM where the controller process is running. VM Slave—The VM where the slave/enforcer driver software is running. The following is one example of how a local architecture functions, and how the local architecture functions after the transformation allowed by the RemoteDeviceIoCtrl functionality. Case 1: The master opens the slave device. The master sends a code and an input buffer. The slave processes the request and responds with an error code and output buffer. The master receives an error code and an output buffer. The master and slave continue their operation. Case 2: The master opens the slave device. The master sends a code and input buffer and waits for the operation to complete. At a subsequent time in the future, an event occurs and the slave completes the operation (e.g., completing an IRP), and a notification is returned to the master as a completion of the call. The master receives the error code and the output buffer. The master and slave continue their operation. With the RemoteDeviceIoCtrl functionality, a developer may configure a policy specifying: 1) The master identity, 2) the slave device type/path, 3) the remote endpoint (across a hypervisor, a network, RDMA, etc.). Once this is done, the flow may be as follows: Case 1: The master opens the slave device. RemoteDeviceIoCtrl determines that the master process identity and master device path are a match and enables remoting as specified. The master sends a code and an input buffer. RemoteDeviceIoCtrl sends the code and input buffer to the slave's remote location. The slave responds. RemoteDeviceIoCtrl sends the slave error code and output buffer back to the location of the master. The master receives an error code and an output buffer. The master and slave continue their operation. Case 2: The master opens the slave device. RemoteDeviceIoCtrl determines that the master process identity and master device path are a match and enables remoting as specified. The master sends a code and input buffer and waits for the operation to complete. RemoteDeviceIoCtrl sends the code and input buffer to the slave's remote location. At a subsequent time in the future, an event occurs and the slave completes the operation (e.g., completing an IRP), and a notification is returned. RemoteDeviceIoCtrl sends the notification, error code, and output buffer to the master's location. The master receives the notification of the completion of the call. The master receives the error code and the output buffer. The master and slave continue their operation. FIG.8Aillustrates techniques for converting the conventions such that multiple guest partitions may use a single trusted partition for executing the system call. In one embodiment, when applications in guest partitions make a system call, the system call is intercepted and forwarded over a cross hypervisor partition transport mechanism such as, but not limited to, a hyper-socket crossing a hypervisor partition. The other end of such a transport terminates at a trusted service inside a trusted root partition.
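To make the policy-driven remoting concrete, the following Python sketch models a hypothetical RemoteDeviceIoCtrl shim applying the Case 1 flow above: a declared policy matches the master process identity and slave device path, and a matching call is forwarded over an abstract cross-partition transport instead of being issued locally. The names, the policy fields, and the transport interface are assumptions made for illustration only and do not describe an actual API.

from dataclasses import dataclass

@dataclass
class RemotingPolicy:
    # Declarative policy: who may call, which device, and where the slave lives.
    master_process: str    # master process identity
    device_path: str       # slave device type/path
    remote_binding: str    # e.g. "hypervisor", "network", "rdma" (assumed labels)

class RemoteDeviceIoCtrlShim:
    def __init__(self, policies, transports, local_ioctl):
        # transports maps a remote_binding label to an object with a call() method;
        # local_ioctl is the ordinary local DeviceIoCtrl-style entry point.
        self.policies = policies
        self.transports = transports
        self.local_ioctl = local_ioctl

    def device_io_ctrl(self, process, device_path, code, input_buffer):
        policy = next((p for p in self.policies
                       if p.master_process == process and p.device_path == device_path),
                      None)
        if policy is None:
            # No policy match: behave exactly like the local call (no remoting).
            return self.local_ioctl(device_path, code, input_buffer)
        # Policy match: send the code and input buffer to the slave's remote location
        # and hand the slave's error code and output buffer back to the master (Case 1).
        transport = self.transports[policy.remote_binding]
        error_code, output_buffer = transport.call(device_path, code, input_buffer)
        return error_code, output_buffer

Because the shim preserves the local call signature, the controller code would not need to change when the slave driver moves to another partition or machine, which is the effect the described embodiments aim for.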
In another embodiment, a designated trusted guest partition may be created that is different from an application guest partition and that is trusted and different from the root partition. The trusted service may then perform the following: Authenticate the call as coming from a trusted guest partition. Validate the message buffer and any data relevant for the system call. Perform the system call on behalf of the guest partition. Return the results of such a system call back to the guest partition or further process the call, such as by sending the message on the wire. FIG.8Billustrates an embodiment where multiple guest partitions may use a trusted guest partition for executing the system call, and where the trusted guest partition is separate from the root partition. FIG.9illustrates an example computing environment in which the embodiments described herein may be implemented.FIG.9illustrates a data center900that is configured to provide computing resources to users900a,900b, or900c(which may be referred to herein singularly as “a user900” or in the plural as “the users900”) via user computers902a,902b, and902c(which may be referred to herein singularly as “a computer902” or in the plural as “the computers902”) via a communications network930. The computing resources provided by the data center900may include various types of resources, such as computing resources, data storage resources, data communication resources, and the like. Each type of computing resource may be general-purpose or may be available in a number of specific configurations. For example, computing resources may be available as virtual machines. The virtual machines may be configured to execute applications, including Web servers, application servers, media servers, database servers, and the like. Data storage resources may include file storage devices, block storage devices, and the like. Each type or configuration of computing resource may be available in different configurations, such as the number of processors and the size of memory and/or storage capacity. The resources may in some embodiments be offered to clients in units referred to as instances, such as virtual machine instances or storage instances. A virtual computing instance may be referred to as a virtual machine and may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor). Data center900may include servers996a,996b, and996c(which may be referred to herein singularly as “a server996” or in the plural as “the servers996”) that provide computing resources available as virtual machines929aand929b(which may be referred to herein singularly as “a virtual machine929” or in the plural as “the virtual machines929”). The virtual machines929may be configured to execute applications such as Web servers, application servers, media servers, database servers, and the like. Other resources that may be provided include data storage resources (not shown onFIG.9) and may include file storage devices, block storage devices, and the like. Servers996may also execute functions that manage and control allocation of resources in the data center, such as a controller995. Controller995may be a fabric controller or another type of program configured to manage the allocation of virtual machines on servers996.
Referring toFIG.9, communications network930may, for example, be a publicly accessible network of linked networks and may be operated by various entities, such as the Internet. In other embodiments, communications network930may be a private network, such as a corporate network that is wholly or partially inaccessible to the public. Communications network930may provide access to computers902. Computers902may be computers utilized by users900. Computer902a,902bor902cmay be a server, a desktop or laptop personal computer, a tablet computer, a smartphone, a set-top box, or any other computing device capable of accessing data center900. User computer902aor902bmay connect directly to the Internet (e.g., via a cable modem). User computer902cmay be internal to the data center900and may connect directly to the resources in the data center900via internal networks. Although only three user computers902a,902b, and902care depicted, it should be appreciated that there may be multiple user computers. Computers902may also be utilized to configure aspects of the computing resources provided by data center900. For example, data center900may provide a Web interface through which aspects of its operation may be configured through the use of a Web browser application program executing on user computer902. Alternatively, a stand-alone application program executing on user computer902may be used to access an application programming interface (API) exposed by data center900for performing the configuration operations. Servers996may be configured to provide the computing resources described above. One or more of the servers996may be configured to execute a manager920aor920b(which may be referred herein singularly as “a manager920” or in the plural as “the managers920”) configured to execute the virtual machines. The managers920may be a virtual machine monitor (VMM), fabric controller, or another type of program configured to enable the execution of virtual machines929on servers996, for example. It should be appreciated that although the embodiments disclosed above are discussed in the context of virtual machines, other types of implementations can be utilized with the concepts and technologies disclosed herein. In the example data center900shown inFIG.9, a network device929may be utilized to interconnect the servers996aand996b. Network device929may comprise one or more switches, routers, or other network devices. Network device929may also be connected to gateway940, which is connected to communications network930. Network device929may facilitate communications within networks in data center900, for example, by forwarding packets or other data communications as appropriate based on characteristics of such communications (e.g., header information including source and/or destination addresses, protocol identifiers, etc.) and/or the characteristics of the private network (e.g., routes based on network topology, etc.). It will be appreciated that, for the sake of simplicity, various aspects of the computing systems and other devices of this example are illustrated without showing certain conventional details. Additional computing systems and other devices may be interconnected in other embodiments and may be interconnected in different ways. It should be appreciated that the network topology illustrated inFIG.9has been greatly simplified and that many more networks and networking devices may be utilized to interconnect the various computing systems disclosed herein. 
These network topologies and devices should be apparent to those skilled in the art. It should also be appreciated that data center900described inFIG.9is merely illustrative and that other implementations might be utilized. Additionally, it should be appreciated that the functionality disclosed herein might be implemented in software, hardware, or a combination of software and hardware. Other implementations should be apparent to those skilled in the art. It should also be appreciated that a server, gateway, or other computing device may comprise any combination of hardware or software that can interact and perform the described types of functionality, including without limitation desktop or other computers, database servers, network storage devices and other network devices, PDAs, tablets, smartphones, Internet appliances, television-based systems (e.g., using set top boxes and/or personal/digital video recorders), and various other consumer products that include appropriate communication capabilities. In addition, the functionality provided by the illustrated modules may in some embodiments be combined in fewer modules or distributed in additional modules. Similarly, in some embodiments the functionality of some of the illustrated modules may not be provided and/or other additional functionality may be available. Turning now toFIG.10, illustrated is an example operational procedure for implementing system calls in a virtualized computing environment comprising a plurality of computing devices that are configured to host a plurality of virtual machines in accordance with the present disclosure. The operational procedure may be implemented in a system comprising one or more computing devices. Referring toFIG.10, operation1001illustrates instantiating an interface configured to abstract partitions in the virtualized computing environment. Operation1001may be followed by operation1003. Operation1003illustrates receiving, by the interface, a system call that is to be executed across a system boundary in a localized computing environment. Operation1003may be followed by operation1005. Operation1005illustrates determining, by the interface based on a declarative policy, one or more of a device type, device path, or process identity associated with the system call. Operation1005may be followed by operation1007. Operation1007illustrates, based on the determining, executing the system call in the virtualized computing environment. Operation1007may be followed by operation1009. Operation1009illustrates, in response to completion of the system call, returning a result of the system call. In an embodiment, the system call is an I/O control code or I/O request packet. In an embodiment, the partitions include a root partition and a guest partition. In an embodiment, the system boundary includes a user mode and a kernel mode boundary. In an embodiment, version information is negotiated across the system boundary. In an embodiment, the system call is converted to a machine-to-machine communication in the virtualized computing environment. In an embodiment, the system call is converted to a virtual machine-to-virtual machine communication on a single host in the virtualized computing environment. In an embodiment, the system call is converted to a host-to-virtual machine communication on a single host in the virtualized computing environment.
In an embodiment, the system call is converted to a single master to multiple slave architecture in the virtualized computing environment. In an embodiment, the results include an error code and an output buffer. In an embodiment, the system call is converted such that multiple partitions use a single trusted partition for executing the system call in the virtualized computing environment. In an embodiment, the trusted partition is a trusted root partition. In an embodiment, the trusted partition is a trusted guest partition. In an embodiment, the system call is executed in a trusted service in the trusted partition, wherein the trusted service: authenticates the call as coming from a trusted guest partition; validates a message buffer and data applicable to the system call; and executes the system call on behalf of the guest partition. Referring toFIG.11, illustrated is another example operational procedure for implementing system calls in a virtualized computing environment comprising a plurality of computing devices that are configured to host a plurality of virtual machines. The operational procedure may be implemented in a system comprising one or more computing devices. Referring toFIG.11, operation1101illustrates receiving a system call that is to be executed across a system boundary in a localized computing environment. Operation1101may be followed by operation1103. Operation1103illustrates determining one or more of a device type, device path, or process identity associated with the system call. Operation1103may be followed by operation1105. Operation1105illustrates, based on the determining, executing the system call in a virtualized computing environment. Operation1105may be followed by operation1107. Operation1107illustrates, in response to completion of the system call, returning a result of the system call. In an embodiment, the system call is executed across at least two partitions in the virtualized computing environment. In an embodiment, the system call is a master-slave call. The various aspects of the disclosure are described herein with regard to certain examples and embodiments, which are intended to illustrate but not to limit the disclosure. It should be appreciated that the subject matter presented herein may be implemented as a computer process, a computer-controlled apparatus, or a computing system or an article of manufacture, such as a computer-readable storage medium. While the subject matter described herein is presented in the general context of program modules that execute on one or more computing devices, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will also appreciate that the subject matter described herein may be practiced on or in conjunction with other computer system configurations beyond those described herein, including multiprocessor systems.
The embodiments described herein may also be practiced in distributed computing environments, where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. Networks established by or on behalf of a user to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to a distributed set of clients may be referred to as a service provider. Such a network may include one or more data centers such as data center900illustrated inFIG.9, which are configured to host physical and/or virtualized computer servers, storage devices, networking equipment and the like, that may be used to implement and distribute the infrastructure and services offered by the service provider. In some embodiments, a server that implements a portion or all of one or more of the technologies described herein, including the techniques to implement the capturing of network traffic, may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.FIG.12illustrates such a general-purpose computing device1200. In the illustrated embodiment, computing device1200includes one or more processors1210a,1210b, and/or1210n(which may be referred to herein singularly as “a processor1210” or in the plural as “the processors1210”) coupled to a system memory1212via an input/output (I/O) interface1230. Computing device1200further includes a network interface1240coupled to I/O interface1230. In various embodiments, computing device1200may be a uniprocessor system including one processor1210or a multiprocessor system including several processors1210(e.g., two, four, eight, or another suitable number). Processors1210may be any suitable processors capable of executing instructions. For example, in various embodiments, processors1210may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors1210may commonly, but not necessarily, implement the same ISA. System memory1212may be configured to store instructions and data accessible by processor(s)1210. In various embodiments, system memory1212may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques and data described above, are shown stored within system memory1212as code1225and data1226. In one embodiment, I/O interface1230may be configured to coordinate I/O traffic between the processor1210, system memory1212, and any peripheral devices in the device, including network interface1240or other peripheral interfaces. In some embodiments, I/O interface1230may perform any necessary protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory1212) into a format suitable for use by another component (e.g., processor1210).
In some embodiments, I/O interface1230may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface1230may be split into two or more separate components. Also, in some embodiments some or all of the functionality of I/O interface1230, such as an interface to system memory1212, may be incorporated directly into processor1210. Network interface1240may be configured to allow data to be exchanged between computing device1200and other device or devices1260attached to a network or network(s)1250, such as other computer systems or devices as illustrated inFIGS.1through4, for example. In various embodiments, network interface1240may support communication via any suitable wired or wireless general data networks, such as types of Ethernet networks, for example. Additionally, network interface1240may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs or via any other suitable type of network and/or protocol. In some embodiments, system memory1212may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIGS.1-12for implementing embodiments of the corresponding methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. A computer-accessible medium may include non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device1200via I/O interface1230. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media, such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device1200as system memory1212or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface1240. Portions or all of multiple computing devices, such as those illustrated inFIG.12, may be used to implement the described functionality in various embodiments; for example, software components running on a variety of different devices and servers may collaborate to provide the functionality. In some embodiments, portions of the described functionality may be implemented using storage devices, network devices, or special-purpose computer systems, in addition to or instead of being implemented using general-purpose computer systems. The term “computing device,” as used herein, refers to at least all these types of devices and is not limited to these types of devices. Various storage devices and their associated computer-readable media provide non-volatile storage for the computing devices described herein. Computer-readable media as discussed herein may refer to a mass storage device, such as a solid-state drive, a hard disk or CD-ROM drive. However, it should be appreciated by those skilled in the art that computer-readable media can be any available computer storage media that can be accessed by a computing device. 
By way of example, and not limitation, computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing devices discussed herein. For purposes of the claims, the phrase “computer storage medium,” “computer-readable storage medium” and variations thereof, does not include waves, signals, and/or other transitory and/or intangible communication media, per se. Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like. For example, if the computer-readable media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon. As another example, the computer-readable media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion. In light of the above, it should be appreciated that many types of physical transformations take place in the disclosed computing devices in order to store and execute the software components and/or functionality presented herein. It is also contemplated that the disclosed computing devices may not include all of the illustrated components shown inFIG.12, may include other components that are not explicitly shown inFIG.12, or may utilize an architecture completely different than that shown inFIG.12. Although the various configurations have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. 
Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter. Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein. It should be appreciated any reference to “first,” “second,” etc. items and/or abstract concepts within the description is not intended to and should not be construed to necessarily correspond to any reference of “first,” “second,” etc. elements of the claims. In particular, within this Summary and/or the following Detailed Description, items and/or abstract concepts such as, for example, individual computing devices and/or operational states of the computing cluster may be distinguished by numerical designations without such designations corresponding to the claims or even other paragraphs of the Summary and/or Detailed Description. For example, any designation of a “first operational state” and “second operational state” of the computing cluster within a paragraph of this disclosure is used solely to distinguish two different operational states of the computing cluster within that specific paragraph—not any other paragraph and particularly not the claims. In closing, although the various techniques have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.
38,419
11861415
Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Implementations of the present disclosure are directed to scheduling workloads to clusters in container orchestration systems. More particularly, and as described in further detail herein, implementations of the present disclosure provide a service mesh to optimize utilization of physical hardware in heterogeneous clusters. In some implementations, actions include receiving, by a service mesh provisioned within a container orchestration system, a request from a client, determining, by the service mesh, a load balancing strategy that is to be applied for routing of the request within the heterogeneous cluster, and transmitting, by the service mesh, the request to a service within the heterogeneous cluster, the service routing the request to a node for processing based on the load balancing strategy. To provide further context for implementations of the present disclosure, and as introduced above, in modern software deployments containerization is implemented, which can be described as operating system (OS) virtualization. In containerization, applications (or microservices, software processes) are run in isolated user spaces referred to as containers. The containers use the same shared OS, and each provides a fully packaged and portable computing environment. That is, each container includes everything an application needs to execute (e.g., binaries, libraries, configuration files, dependencies). Because a container is abstracted away from the OS, containerized applications can execute on various types of infrastructure. For example, using containers, an application can execute in any of multiple cloud-computing environments. Container orchestration automates the deployment, management, scaling, and networking of containers. For example, container orchestration systems, in concert with underlying containers, enable applications to be executed across different environments (e.g., cloud computing environments) without needing to redesign the application for each environment. Enterprises that need to deploy and manage a significant number of containers (e.g., hundreds or thousands of containers) leverage container orchestration systems. An example container orchestration system is the Kubernetes platform, maintained by the Cloud Native Computing Foundation, which can be described as an open-source container orchestration system for automating computer application deployment, scaling, and management. In container orchestration systems, such as Kubernetes, clusters include physical hardware (e.g., servers, processors, memory) that execute applications. As physical hardware and operating systems executing thereon are constantly developed and integrated into cloud platforms, it commonly occurs that clusters become heterogeneous with respect to capabilities of the physical machines. However, scheduling workloads on a heterogeneous cluster is challenging, and utilization of resources can be limited by the service load balancing strategy implemented by the container orchestration system. In further detail, and with example reference to Kubernetes, Kubernetes manages containers with pods, which are the smallest deployable objects in Kubernetes. Applications are usually defined as Kubernetes deployments, which are backed by a number of identical pods running application containers. Each application is exposed outside of the Kubernetes cluster through a service.
The service provides an abstract way to expose an application running on a set of pods as a network service, and the service is connected to pods using label selectors. Each pod carries a set of labels and the service keeps track of the pods that it can communicate with. When a request is sent to the service, the service routes the request to one of the backing pods. When there are multiple pods available, a round-robin load balancing strategy is used to distribute the load. That is, each pod is utilized in turn and the load is distributed equally across all pods. When a cluster is formed with physical hardware (machines) of the same type, the cluster is called a homogeneous cluster. Because container orchestration systems such as Kubernetes can run on any machine type, it is most common to choose a homogeneous cluster from a cloud provider. However, there are cases where a cluster can include different machine types, making it a heterogeneous cluster. For example, the following example situations result in heterogeneous clusters: the infrastructure is maintained in-house and new hardware is added into an existing cluster; the infrastructure is maintained by cloud hyperscalers and new machines are added on top of existing booked machines; and the infrastructure is maintained by cloud hyperscalers and heterogeneous machines are booked on purpose to ensure high resource availability. When heterogeneous clusters are formed, applications will run on machines with different capabilities. For example, some machines can be considered low-end (e.g., as low-end nodes) that have reduced capabilities (e.g., processing, memory) as compared with machines that are considered high-end (e.g., as high-end nodes). When a service is connected to pods of different capabilities, the default round-robin load balancer routes the same number of requests to the pods regardless of capability. A pod on a high-end machine is able to serve requests in a shorter time, yet it does not get more requests as compared to a pod on a low-end machine. This results in under-utilization of high-end machines and over-utilization of low-end machines. This creates a scenario in which any advantages intended by deploying high-end machines are erased. In view of the above context, implementations of the present disclosure provide a service mesh to optimize utilization of physical hardware in heterogeneous clusters. In some examples, a least connection load balancing strategy is used to distribute requests to nodes within heterogeneous clusters. As described in further detail herein, the service mesh enables a fine-grained load balancing strategy to route traffic in heterogeneous clusters without any modification to applications. Example service mesh providers include Istio, Linkerd, and Kuma. For purposes of illustration, and without limitation, implementations of the present disclosure are described in further detail herein with reference to Istio, which can be described as open-source software that provides for the creation and management of service meshes that run natively within Kubernetes-orchestrated containers. However, it is contemplated that implementations of the present disclosure can be realized with any appropriate service mesh provider. As described herein, implementations of the present disclosure significantly improve performance of application servers as compared to a pure service approach, resulting in higher throughput and lower request latency.
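As a purely illustrative aside (not part of the described implementations), the following Python sketch contrasts an equal, round-robin-style traffic split with a least-connection-style selection, described in further detail below, on a two-node heterogeneous cluster. The per-node service rates, the offered load, and all names are assumed values chosen only to show the utilization imbalance discussed above.

import random

# Assumed, illustrative capacities in requests per second (not measured values).
capacity = {"high-end-node": 2.0, "low-end-node": 1.0}
offered_load = 2.4  # total incoming requests per second (assumed)

def utilization(traffic_split):
    # traffic_split maps node -> fraction of traffic routed to it.
    return {node: round(offered_load * share / capacity[node], 2)
            for node, share in traffic_split.items()}

# Round-robin sends the same share of requests to every node, regardless of capability.
print(utilization({"high-end-node": 0.5, "low-end-node": 0.5}))
# -> {'high-end-node': 0.6, 'low-end-node': 1.2}  (low-end node is overloaded)

# A capacity-proportional split, which least-connection routing approximates over time
# because the faster node drains its in-flight requests sooner.
total = sum(capacity.values())
print(utilization({node: cap / total for node, cap in capacity.items()}))
# -> {'high-end-node': 0.8, 'low-end-node': 0.8}  (balanced utilization)

def pick_least_connection(active_connections):
    # Least-connection selection with random tie-breaking among equally loaded pods.
    fewest = min(active_connections.values())
    candidates = [pod for pod, count in active_connections.items() if count == fewest]
    return random.choice(candidates)

# The high-end node tends to have fewer in-flight requests, so it receives the next one.
print(pick_least_connection({"pod-on-high-end": 2, "pod-on-low-end": 5}))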
FIG.1depicts an example container orchestration architecture100in accordance with implementations of the present disclosure. In the depicted example, the example container orchestration architecture100represents deployment of a portion of a container orchestration system, Kubernetes, introduced above. More particularly, the example architecture100represents a basic structure of a cluster within Kubernetes. In the example ofFIG.1, the example architecture100includes a control plane102and a plurality of nodes104. Each node104can represent a physical worker machine and is configured to host pods. In Kubernetes, a pod is the smallest deployable unit of resources and each pod is provided as one or more containers with shared storage/network resources, and a specification for how to run the containers. In some examples, a pod can be referred to as a resource unit that includes an application container. The control plane102communicates with the nodes104and is configured to manage all of the nodes104and the pods therein. In further detail, the control plane102is configured to execute global decisions regarding the cluster as well as detecting and responding to cluster events. In the example ofFIG.1, the control plane102includes a controller manager110, one or more application programming interface (API) server(s)112, one or more scheduler(s)114, and a cluster data store116. The API server(s)112communicate with the nodes104and expose the API of Kubernetes to exchange information between the nodes104and the components in the control plane102(e.g., the cluster data store116). In some examples, the control plane102is set with more than one API server112to balance the traffic of information exchanged between the nodes104and the control plane102. The scheduler(s)114monitor the nodes104and execute scheduling processes for the nodes104. For example, the scheduler(s)114monitor events related to newly created pods and select one of the nodes104for execution, if the newly created pods are not assigned to any of the nodes104in the cluster. The cluster data store116is configured to operate as the central database of the cluster. In this example, resources of the cluster and/or definition of the resources (e.g., the required state and the actual state of the resources) can be stored in the cluster data store116. The controller manager110of the control plane102communicates with the nodes104through the API server(s)112and is configured to execute controller processes. The controller processes can include a collection of controllers and each controller is responsible for managing at least some or all of the nodes104. The management can include, but is not limited to, noticing and responding to nodes when an event occurs, and monitoring the resources of each node (and the containers in each node). In some examples, the controller in the controller manager110monitors resources stored in the cluster data store116based on definitions of the resource. As introduced above, the controllers also verify whether the actual state of each resource matches the required state. The controller is able to modify or adjust the resources, so that the actual state matches the required state depicted in the corresponding definition of the resources. In the example ofFIG.1, each node104includes an agent120and a proxy122. The agent120is configured to ensure that the containers are appropriately executing within the pod of each node104. The agent120is referred to as a kubelet in Kubernetes.
The proxy122of each node104is a network proxy that maintains network rules on nodes104. The network rules enable network communication to the pods in the nodes104from network sessions inside or outside of the cluster. The proxy122is a kube-proxy in Kubernetes. FIG.2depicts an example architecture200that can be used to execute implementations of the present disclosure. The example ofFIG.2includes a container orchestration system202within which a cluster204is provided. In accordance with implementations of the present disclosure, a service mesh206is provided to route requests for processing to the cluster204. In some examples, the service mesh206is provided within a control plane of the container orchestration system202(e.g., the control plane102ofFIG.1). Although a single cluster204is depicted in the example ofFIG.2, it is contemplated that the service mesh206can communicate with and route requests to any appropriate number of clusters204. As depicted inFIG.2, the cluster204includes a first set of nodes210aand a second set of nodes210b. Here, each node in the first set of nodes210aand the second set of nodes210bis physical hardware that executes an instance of an application. For example, and as depicted inFIG.2, the first set of nodes210ahosts application servers212and the second set of nodes210bhosts application servers212. A service214is also provided, through which requests are routed to nodes in the first set of nodes210aand the second set of nodes210b. In the context of the present disclosure, the first set of nodes210acan be considered low-end nodes of the cluster204and the second set of nodes210bcan be considered high-end nodes of the cluster204. Consequently, the cluster204is considered to be a heterogeneous cluster. In some examples, low-end represents that the nodes of the first set of nodes210ahave reduced capabilities (e.g., processing, memory) as compared to nodes of the second set of nodes210b. With that, high-end represents that the nodes of the second set of nodes210bhave increased capabilities (e.g., processing, memory) as compared to nodes of the first set of nodes210a. Although the example ofFIG.2depicts two sets of nodes, it is contemplated that implementations of the present disclosure can be realized with any appropriate number of sets of nodes. In the example ofFIG.2, the service mesh206includes an ingress gateway220, a virtual service222, and a destination rule224. The service mesh206can be described as a dedicated infrastructure layer on top of the applications204. The service mesh206enables capabilities to be transparently (from the perspective of the applications204) added without modifying application code. Example capabilities include, without limitation, observability, traffic management, and security. The service mesh206can enable secure service-to-service communication in a cluster with Transport Layer Security (TLS) encryption, strong identity-based authentication and authorization, automatic load balancing for Hypertext Transfer Protocol (HTTP), Remote Procedure Call (RPC) (e.g., gRPC), WebSocket, and Transmission Control Protocol (TCP) traffic. The service mesh206also enables fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection. In some implementations, an external endpoint (e.g., domain name) is exposed by the service mesh206(e.g., exposed by the ingress gateway220). 
Here, the external endpoint that had been previously exposed by the service214(i.e., prior to implementation of the service mesh206) is no longer needed. Instead, the service mesh206takes on the external endpoint of the service214. In this manner, clients can use the same external endpoint as used previously without change. If, however, the service mesh206is configured with a new external endpoint, clients need to change their code to point to this new external endpoint. In some implementations, the destination of the request (e.g., which node in the first and second sets of nodes210a,210bto send the request to) is known by the virtual service222with the assistance of the service214. In some examples, the service214stores information of all of the available pods (i.e., distributed across the first set of nodes210aand the second set of nodes210b). In further detail, the ingress gateway220is the entry point to the service mesh206and exposes communication routes (e.g., HTTP, HTTPS) from outside of the cluster to services within the cluster. That is, external traffic that is directed to the application is routed to the ingress gateway220. In some examples, a host is configured and a communication protocol (e.g., HTTP, HTTPS) is set. Listing 1, below, represents an example of an ingress gateway named bocr-gateway:

Listing 1: Example Ingress Gateway

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bocr-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP

In some examples, the virtual service222defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, the traffic is sent to a named destination service (e.g., the service214of the cluster204). Listing 2, below, represents an example of a virtual service that is connected to the ingress gateway of Listing 1 and that routes all traffic to a Kubernetes service called line-based-bocr-service:

Listing 2: Example Virtual Service

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bocr-virtual-service
  namespace: default
spec:
  gateways:
  - bocr-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /v1/models
    route:
    - destination:
        host: line-based-bocr-service
        port:
          number: 8501

In some examples, the destination rule224defines policies that apply to traffic intended for a service (e.g., the service214of the cluster204) after routing has occurred. These rules specify one or more configurations. Example configurations include, without limitation, load balancing, connection pool size from a sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool. In some implementations, the service mesh206can transmit requests to multiple clusters204. In some examples, selection of the cluster204is based on a configuration, which can be provided as a mapping of a uniform resource locator (URL) of a request to the cluster204to route to. An example of this is represented in Listing 2, in which all requests with the URL pattern (/v1/models) will be routed to a service called line-based-bocr-service (e.g., the service214of the cluster204).
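As a small, hypothetical illustration (not part of the described configuration format), the routing behavior of Listing 2 can be thought of as a prefix-to-destination lookup; the Python below mirrors that idea, with the route table and function names being assumptions made only for this sketch.

# Illustrative only: a URL-prefix-to-destination mapping that mirrors how the
# virtual service in Listing 2 routes /v1/models traffic to line-based-bocr-service.
routes = {"/v1/models": "line-based-bocr-service"}

def select_destination(request_path):
    for prefix, destination in routes.items():
        if request_path.startswith(prefix):
            return destination
    return None  # no matching rule; the request is not routed

assert select_destination("/v1/models/bocr:predict") == "line-based-bocr-service"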
In accordance with implementations of the present disclosure, a load balancing strategy is configured in the destination rule224in order to optimize utilization of machines within the heterogeneous cluster (e.g., the heterogeneous cluster that includes the first set of nodes210aand the second set of nodes210b). Example load balancing strategies include, without limitation, least connection, consistent hash, and locality load balancer. Implementations of the present disclosure are described in further detail with non-limiting reference to least connection. In some examples, least connection load balancing is a dynamic load balancing algorithm where requests are distributed to an application server212with the least number of active connections at the time the request is received. Here, an active connection can be described as a connection to an application server212, and thus respective node, during processing of the request by the application server212. In short, the number of active connections that a node has is representative of a number of requests that the node is handling. In some examples, the service mesh206installs side-car containers in pods of the application servers212, which are transparent to the application. In some examples, the side-car containers track a number of active requests being handled in respective nodes. In the case that multiple pods have an equal number of connections, and that number is determined to be the least, one is randomly chosen. In a heterogeneous cluster, the high-end machines (e.g., nodes in the second set of nodes210b) would complete requests in a shorter time, resulting in fewer connections compared to low-end machines (e.g., nodes in the first set of nodes210a). As a result, the load balancing strategy will send more requests to high-end machines than low-end machines. At the outset, as requests are initially sent to the cluster, it can be assumed that the machines get the same number of requests. Listing 3, below, represents an example destination rule that defines the load balancing strategy as least connection (LEAST_CONN), which is applied to the service line-based-bocr-service:

Listing 3: Example Destination Rule

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bocr-least-conn-rule
spec:
  host: line-based-bocr-service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN

FIG.3depicts an example process300that can be executed in accordance with implementations of the present disclosure. In some examples, the example process300is provided using one or more computer-executable programs executed by one or more computing devices. A request is received (302). For example, and as described herein, a request is received by the ingress gateway220of the service mesh206ofFIG.2. The request is provided to a virtual service (304). For example, and as described herein, the ingress gateway220provides the request to the virtual service222of the service mesh206. A load balancing strategy that is to be applied for handling the request is determined (306). For example, and as described herein, the virtual service222determines the load balancing strategy from the destination rule224. In some examples, the load balancing strategy includes least connection. The request and load balancing strategy are transmitted to a service of a cluster (308). For example, and as described herein, the virtual service222transmits the request and the load balancing strategy to the service214of the cluster204.
As described herein, the service214routes the request to a node in one of the first set of nodes210aand the second set of nodes210bfor processing. For example, and in the example case of least connection, the service214determines which node in the cluster has the least number of active connections and routes the request to that node. Implementations of the present disclosure have been tested with respect to a traditional approach using an experimental set-up. In the experimental set-up, the service mesh approach of the present disclosure is compared to a pure Kubernetes service approach (i.e., load balancer applying round-robin) to evaluate respective performances on heterogeneous cluster utilization. The heterogeneous cluster used in the experimental set-up included an Nvidia V100 (AWS instance type p3.2xlarge) and an Nvidia T4 (AWS instance type g4dn.2xlarge). Table 1, below, provides performance information of the respective machines when used to execute application servers:

TABLE 1
Experimental Set-up Machine Capabilities
             Number of Clients (latency/throughput)
Node Type    1           2           3           4           5
V100         1000/1.0    1300/1.6    1500/1.9    1800/2.2    2300/2.2
T4           1200/0.8    1500/1.3    1900/1.6    2500/1.6    3200/1.6

From Table 1, it can be seen that the V100 is more powerful and has a maximum throughput of 2.2 requests per second, while the T4 is slower and has a maximum throughput of 1.6 requests per second. Hence, the V100 can be referred to as high-end, while the T4 can be referred to as low-end, relative to one another. To compare the two approaches, the load test framework Locust was used with 10 concurrent users to send requests to the application servers. The request throughput (in requests per second (RPS)) and request latency (in milliseconds (ms)) were measured. Table 2, below, summarizes the results:

TABLE 2
Experimental Results
                 Avg.       Median     Min.       Max.       Avg.
Cluster Set-up   Latency    Latency    Latency    Latency    Throughput
Traditional      3387       3000       109        17058      3.1
Service Mesh     2567       2100       99         46147      3.9

From Table 2, it can be seen that the traditional approach resulted in an average throughput of 3.1 RPS, which is only about twice the throughput of the slower T4 machine alone. Hence, it is seen that the traditional approach results in the faster V100 machine being under-utilized. In contrast, it can be seen that the service mesh approach of the present disclosure resulted in an average throughput of 3.9 RPS, which is approximately the sum of the best performance of the respective machines (i.e., 2.2+1.6=3.8). Hence, it is seen that the service mesh approach of the present disclosure results in a relatively balanced utilization between the machines. In this example, the average throughput of 3.9 (which is greater than the sum of 3.8) can be attributed to rounding and significant figures. During execution of the experiment, it was noted that, for the traditional approach, the throughput of the application was not stable and fluctuated between 2 and 3.5 RPS. For the traditional approach, it was also noted that the latency fluctuated, because a request routed to the V100 is completed in a relatively short time, while a request routed to the T4 takes a relatively longer time to process. During execution of the experiment, it was noted that, for the service mesh approach of the present disclosure, the throughput of the application was stable, as was the latency of the application, which is a result of the more powerful V100 machine receiving more requests and the slower T4 receiving fewer requests.
The service mesh of the present disclosure routes requests in a way that the requests take a similar time to complete, because the faster machine will receive more requests and its effective per-request completion time will eventually approach that of the slower machine. Referring now toFIG.4, a schematic diagram of an example computing system400is provided. The system400can be used for the operations described in association with the implementations described herein. For example, the system400may be included in any or all of the server components discussed herein. The system400includes a processor410, a memory420, a storage device430, and an input/output device440. The components410,420,430,440are interconnected using a system bus450. The processor410is capable of processing instructions for execution within the system400. In some implementations, the processor410is a single-threaded processor. In some implementations, the processor410is a multi-threaded processor. The processor410is capable of processing instructions stored in the memory420or on the storage device430to display graphical information for a user interface on the input/output device440. The memory420stores information within the system400. In some implementations, the memory420is a computer-readable medium. In some implementations, the memory420is a volatile memory unit. In some implementations, the memory420is a non-volatile memory unit. The storage device430is capable of providing mass storage for the system400. In some implementations, the storage device430is a computer-readable medium. In some implementations, the storage device430may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. The input/output device440provides input/output operations for the system400. In some implementations, the input/output device440includes a keyboard and/or pointing device. In some implementations, the input/output device440includes a display unit for displaying graphical user interfaces. The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier (e.g., in a machine-readable storage device, for execution by a programmable processor), and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer.
Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer can include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer can also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, for example, a LAN, a WAN, and the computers and networks forming the Internet. The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims. A number of implementations of the present disclosure have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other implementations are within the scope of the following claims.
30,675
11861416
While the invention is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description hereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. Any headings used herein are for organizational purposes only and are not meant to limit the scope of the description or the claims. As used herein, the word “may” is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e. meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. When used in the claims, the term “or” is used as an inclusive or and not as an exclusive or. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof. DETAILED DESCRIPTION OF EMBODIMENTS FIG.1illustrates an example system environment in which help-enabled locks may be employed to reduce the time taken to complete critical sections associated with operations on shared data objects, according to at least some embodiments. As shown, system100may comprise a computing environment110, within which a set120of data accessors (DAs)125(e.g., readers and/or writers) may run. In the depicted example scenario ofFIG.1, a plurality of data accessors including DAs125A-125E may be running concurrently (i.e., the lifetimes of at least some of the DAs may overlap at least partly). The computing environment may also comprise one or more shared data objects (SDOs) such as SDO130, which may be read and/or modified at various points in time by the data accessors. In various embodiments, the number of data accessors125may change over time; for example, there may be intervals during which no writes are being attempted or performed on a given SDO130, periods during which no reads are being attempted or performed, periods in which numerous readers are attempting to concurrently or near-concurrently read from a given SDO, and so on. Data accessors may be dynamically activated and/or deactivated in at least some embodiments, e.g., by forking new threads or processes at a computing device or terminating such threads or processes. Similarly, the number of shared data objects may change over time as well in various embodiments. A given data accessor (such as a thread of a multi-threaded program) may perform respective critical section operations (CSOs)170comprising read and/or write operations, as well as associated computations/processing, on numerous SDOs during its lifetime in the depicted embodiment, as well as other types of operations that are not part of critical sections. The computing environment110may comprise a single server or computing device in some embodiments (e.g., with one or more processing elements such as cores or CPUs), and multiple servers/computing devices in other embodiments. In at least some embodiments, the computing environment within which the data accessors125run and/or the shared data objects and associated metadata are stored may include one or more servers implementing a NUMA (non-uniform memory access) architecture. 
Individual ones of the SDOs130may be defined at any desired granularity in different embodiments—e.g., one SDO may comprise a 32-bit data structure, while another SDO may be a multi-megabyte data structure. In at least some embodiments, a category of locks referred to as help-enabled locks (HELs), and associated lock management and critical section workload sharing techniques referred to as help-enabled locking techniques, may be used to implement concurrency control with respect to SDOs130. In the embodiment depicted inFIG.1, for example, an HEL132may be used to protect access to SDO130. At a given point in time, in various embodiments an HEL132may be held/owned by at most one data accessor (the HEL holder), while zero or more other data accessors may be waiting for the HEL (e.g., they may have submitted requests to acquire the HEL, but may not yet have acquired the HEL because it is already held by some other DA). In the example scenario ofFIG.1, DA125E is a holder150of HEL132, while DAs125C,125A and125D are waiters152for HEL132. At least some of the work performed in a critical section associated with an HEL130may be divisible into sub-operations or partitions in some embodiments, such that multiple sub-operations may at least in principle be performed in parallel, or at least partially in parallel, by respective DAs. For example, in the depicted example ofFIG.1, a critical section task or operation170may be partitioned into twenty sub-operations S1-S20. According to at least some embodiments, an HEL holder150may initiate one or more help sessions while holding the HEL132, in effect enabling or allowing any DAs that happen to be waiting for the HEL to perform or execute/implement one or more sub-operations of the critical section for which the HEL was acquired. DAs125that do not hold the HEL132but nevertheless participate in the work of the critical section (or at least make themselves available for participation in the work of the critical section) may be referred to in various embodiments as helpers or helper DAs. In various embodiments, the HEL holder may also perform some of the sub-operations of the critical section. Details regarding how the holder starts and ends such help sessions, and regarding techniques that may be employed to ensure that different helper DAs do not end up attempting the same sub-operation in various embodiments, are provided below. In the example scenario shown inFIG.1, the HEL holder (DA125E) may only have to implement sub-operations S1, S3, S7, S9 and S10, while helpers may perform the remaining fifteen sub-operations (e.g., DA125C implements S2, S20, S17, S11 and S13; DA125A implements S4, S5 and S6; and so on). Note that in some scenarios, there may not necessarily happen to be any waiters152for a given HEL132during a time period in which that HEL is held, so the HEL holder150may end up performing the entire critical section operation in such cases in various embodiments. As one skilled in the art will appreciate in light of this disclosure, certain embodiments in which help-enabled locking techniques are implemented may be capable of achieving various advantages, including enabling substantially higher throughputs and lower response times for certain types of data access workloads by completing critical sections more quickly than if other types of locks were employed. 
Furthermore, the speedup of critical sections may be accomplished in various embodiments by enabling data accessors that would otherwise simply be waiting for a lock (and not doing anything useful) to perform part of the critical section operations. The overall wait time for locks may also be reduced in various embodiments. In addition, the help-enabled locking techniques described may be deployed in at least some embodiments (e.g., by augmenting existing lock implementations using dynamic libraries) without requiring application code to be modified, which is a significant benefit for long-running applications in production environments. A variety of use cases may benefit from the techniques, such as workloads in which the shared data objects comprise commonly used sets or sequences of elements (e.g., arrays, lists, hash tables, hash map based sets, tree based sets etc.) such that processing sub-operations can be performed on the elements at least partly in parallel or independently of one another. The exact speedup of the critical sections may of course vary depending on various factors in different embodiments, such as the extent to which the work of the critical section is easily subdivided into sub-operations that can be safely performed concurrently, the total number of concurrent or near-concurrent data accessors, the relative timing in which data accessors request the help-enabled lock, and so on. Example pseudo-code set 1 (EPS1) shown below indicates, at a high level, an example approach towards implementing help-enabled locking techniques which may be employed in some embodiments. A C++ style syntax is used in EPS1 by way of example; note that any appropriate programming language may be used in various embodiments. In EPS1, data accessors are assumed to be threads running within a single multi-CPU server, and the HelpEnabledLock structure referenced in line 1 corresponds to the HEL132shown inFIG.1. 
-----EPS1: Example pseudo-code set 1 for HEL algorithms-------
1:  HelpEnabledLock HEL;
2:  // The helper function
3:  // Applies a function foo( ) to a region of a set
4:  // The function is assumed to be multi-thread safe, and can therefore be executed by
5:  // multiple helper threads in parallel
6:  struct helperFunArgs {
7:    Iterator regionIters[ ];
8:    volatile int* nextRegionIndex;
9:    int numRegions; // number of regions of shared data object
10:  };
11:  void helperFun(helperFunArgs * args) {
12:    // Get the iterator for the next unprocessed region
13:    int regionIndex = fetchAndInc(args->nextRegionIndex);
14:    while (regionIndex < args->numRegions) {
15:      Iterator iter = args->regionIters[regionIndex];
16:      // apply foo( ) to each element of the region
17:      while (++iter != iter.end( )) {
18:        *iter = foo(*iter);
19:      } // end while
20:      regionIndex = fetchAndInc(args->nextRegionIndex);
21:    } // end while
22:  } // end helperFun
23:  // data accessor workflow begins here
24:  // data accessor acquires HEL
25:  HEL.lock( );
26:  // HEL is now held exclusively; holder/owner can initiate help session
27:  // First, owner sets helper function arguments
28:  helperFunArgs args;
29:  // subdivide the SDO into regions which can be processed by helpers
30:  args.regionIters = sharedDataObject.getIterators(partitionParams);
31:  // set starting region for helpers
32:  // (holder may reserve some regions for itself by setting a different nextIndex than -1)
33:  volatile int nextIndex = -1;
34:  args.numRegions = sharedDataObject.getRegionCount( );
35:  args.nextRegionIndex = &nextIndex;
36:  // holder initiates help session, providing helperFun and its arguments
37:  // after this point, multiple threads may be executing helperFun,
38:  // so the owner cannot assume it has exclusive access to the SDO
39:  HEL.askForHelp(helperFun, args);
40:  // help is not guaranteed; there may not be any waiters.
41:  // The lock owner may invoke helperFun itself to ensure that the work gets done;
42:  // note that helperFun only returns when all regions are being processed, or none
43:  // remain to be processed
44:  helperFun(&args);
45:  // wait for help session to be completed
46:  // when stopHelping( ) returns, the holder is guaranteed that no other threads are
47:  // executing helperFun
48:  HEL.stopHelping( );
49:  // help session is complete; execute any other work of the critical section if needed
50:  // and then release the HEL
51:  doRemainingCSWork( );
52:  HEL.unlock( );
----End EPS1 -----------------------------------

In embodiments in which logic similar to that shown in EPS1 is employed, the workflow of a data accessor begins with an attempt to acquire a help-enabled lock (line 25). After the lock is acquired, the holder of the lock sets up arguments of a helper function (lines 28-35) and then invokes an “askForHelp” function (line 39), in effect indicating that the holder/owner of HEL is willing to accept help in completing the work of the critical section protected by HEL from some number of waiters for HEL, if any such waiters happen to become available for sharing the work. Of course, in some scenarios it may be the case that no waiters exist during the critical section, in which case the lock owner may perform the entire critical section itself. In embodiments in which EPS1-like logic is employed, the lock holder may itself invoke the helper function (“helperFun”, line 44), using the arguments that were set earlier, and perform some or all of the sub-operations of the critical section.
The example critical section sub-operations in EPS1 comprise applying a function “foo” (line 18), within the helper function, to a number of elements (accessed using the “iter” variable) of a region or portion of a shared data object (“sharedDataObject”, introduced in line 30 of EPS1, corresponding to SDO 130 of FIG. 1). For example, in one scenario the shared data object may comprise an array or table of, say, 100000 records, which may be subdivided into 10 regions of 10000 records (elements) each. In at least some embodiments, an indication of an iterator (e.g., similar to “regionIters” of line 7 of EPS1) may be provided to helpers by the lock owner, to enable the helper to demarcate or identify the helper's sub-operation(s) of the critical section. In one embodiment, an indication of the function (similar to function “foo” of EPS1) to be applied to some (or all) elements of a portion of a shared data object in a sub-operation by a helper may also be passed to a helper as an argument. For example, a pointer to such a function, or the name of the function, may be provided as a parameter to the helpers in some implementations. In EPS1, an atomic fetch-and-increment (“fetchAndInc”) operation may be applied to the “nextRegionIndex” variable of the helper function arguments (e.g., in lines 13 and 20) by a data accessor (the HEL owner or a helper) to ensure that different data accessors work on distinct portions of the shared data object. Other synchronization approaches to avoid duplicating work (and/or to avoid corrupting the shared data object) by different data accessors working concurrently may be employed in different embodiments. In some embodiments, the lock owner may reserve some regions of the shared data object for itself, by setting a starting index value (“nextIndex” in line 33 of EPS1) that prevents any helpers from processing one or more regions. In various embodiments, the sub-operations of the critical section, which may potentially be performed concurrently by several data accessors, may be expected to either (a) be inherently data parallel (i.e., safe to execute in parallel with no synchronization) or (b) use explicit synchronization if they are not inherently data parallel. The helper function (lines 11-22 of EPS1) may be designed in such a way in at least some embodiments that a caller (the owner of HEL, or a waiter for HEL) would return from it only after all the sub-operations of the critical section are either completed (either by the current caller, or by some other caller), or have been taken up by some other data accessor. Thus, upon the return from the helper function invocation on line 44, the HEL owner would be guaranteed that all the sub-operations are either underway (e.g., by a helper that has not yet finished its sub-operation) or completed (e.g., by some combination of the owner and zero or more helpers). The lock owner may, in some embodiments, invoke the equivalent of a “stopHelping” function (line 48 of EPS1) to terminate a help session, which may involve waiting for the completion of sub-operations by any remaining helpers. A critical section protected by HEL may in some embodiments comprise several types of tasks or operations—some that can be cleanly divided into sub-operations that can be performed by helpers (if available), and some that cannot be divided and have to be performed by the lock owner/holder.
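As a concrete illustration of the region-claiming step described above, the following C++ sketch shows how the owner and any helpers might use an atomic fetch-and-increment on a shared index to claim distinct regions. The names Region, foo and processRegions are illustrative only, and unlike EPS1 (whose fetchAndInc is used with an initial index of -1), the counter here starts at 0 because std::atomic's fetch_add returns the previous value.

-----Illustrative sketch (not part of the disclosure)-------
#include <atomic>
#include <vector>

// Illustrative stand-ins for a partitioned shared data object and the per-element work.
using Region = std::vector<int>;
static int foo(int x) { return x + 1; }

// Executed by the lock holder and by any helpers: each call to fetch_add claims
// the next unprocessed region, so no region is ever processed by two accessors.
void processRegions(std::vector<Region>& regions, std::atomic<int>& nextRegion) {
    int idx = nextRegion.fetch_add(1);
    while (idx < static_cast<int>(regions.size())) {
        for (int& elem : regions[idx]) {
            elem = foo(elem);           // apply foo( ) to every element of the claimed region
        }
        idx = nextRegion.fetch_add(1);  // claim the next region, if any remain
    }
}
----End sketch-----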
After the sharable part of the critical section is complete (e.g., when the lock holder returns from “stopHelping” in EPS1), the remaining portion of the critical section work (if any) may be performed by the holder (line 51) and the HEL may be released (line 52 of EPS1). Note that although only a single help session is illustrated in EPS1, a help-enabled locking algorithm may allow multiple help sessions (involving parallel sub-operations being performed on the same data object, or on different data objects) within a given critical section in at least some embodiments, with the lock owner having exclusive access to any resources protected by the lock between such sessions. In at least one embodiment, a help-enabled lock132may be implemented using an embedded lock (e.g., EL138ofFIG.1). For example, an operating system or other execution framework in which SDOs130are accessed may support various types of baseline locking mechanisms or lock types, and an enhanced/augmented locking mechanism supporting help sessions of the kind introduced above may in effect be built on top of one or more of the baseline locking mechanisms. In an example scenario in which an object-oriented programming approach (e.g., similar to that of C++ or Java™) is employed, an existing lock may be embedded in a help-enabled lock class that provides wrappers for lock/unlock functions as well as the equivalent of the “askForHelp” function introduced above (to start help sessions) and the equivalent of the “stopHelping” function introduced above (to end help sessions). In such an example scenario, the embedded lock may only be required to support lock and unlock functions, with the helping-related features built on top of the embedded lock's own features. In at least some embodiments in which such embedded locks are employed, the helping functionality introduced above may be added to the existing lock mechanism, without changing the implementation of the existing lock mechanism, and/or without requiring recompilation of at least some of the code of the implementation. Example pseudo-code section EPS2 shown below, also expressed using C++ like syntax, demonstrates one approach involving the use of an embedded lock for implementing help sessions, which may be used in some embodiments. Note that any appropriate programming language may be used in various embodiments in which techniques similar to those of EPS2 are deployed. In EPS2, a C++ template class is used for the “HelperLock” class (of which HEL132ofFIG.1may represent one instance), with the embedded existing lock type (corresponding to EL138ofFIG.1) being a type argument (see, e.g., line 49 of EPS2). Data accessors are assumed to be threads running within a single multi-CPU server in EPS2, although similar approaches to that of EPS2 may be employed for other types of data accessors (e.g., independent processes running in a distributed computing environment) in at least some embodiments. In EPS2, the “HelperFunction” class corresponds to the helper function of EPS1. At a high level, the properties of an example implementation in which logic similar to that of EPS2 is employed may be summarized as follows with respect to at least some embodiments. A HelperLock object HL (which contains an embedded lock L) may be either in a locked state or in an unlocked state (initially, the state may be unlocked). A data accessor (such as a thread T) that was the most recent to lock or acquire HL is the lock owner/holder. 
The embedded lock L may also be in a locked or unlocked state, but L may be in an unlocked state even when HL is locked by some thread T. (This may be the case, for example, during a help session.) When a thread T calls HL.lock( ), an attempt to acquire the embedded lock L is made by calling L.lock( ) (line 88 of EPS2). When the call to L.lock( ) returns, there are two possibilities: (1) HL is unlocked, in which case HL.lock( ) returns, making T the lock owner; or (2) HL is locked, which means that T is in a help session. In this latter scenario, T becomes a helper (unless there are already enough helpers, as determined using the “myHelperId” variable in line 99), executes the helper function, and when done calls L.lock( ) again. Note that if the lock holder limits the maximum number of helpers (e.g., by setting numHelpersNeeded to a non-zero value in line 154), in some embodiments one or more data accessors that are available to act as helpers may nevertheless determine that they cannot act as helpers (i.e., that they cannot perform sub-operations of the critical section) because the number of current helpers has reached the limit. The comments included within EPS2 help explain various other aspects of the example implementation.

-----EPS2: Example pseudo-code set 2 for HEL using an embedded existing lock-------
1:   /**
2:    * HelperLock: adds helping functionality to an existing lock type
3:    * (denoted as the embedded lock type, captured by the lock_t template
4:    * argument)
5:    *
6:    */
7:   template <typename lock_t>
8:   class HelperLock {
9:   public:
10:    /**
11:     * HelperLock interface
12:     */
13:    void lock( );
14:    // The helper function and its arguments are captured in an object that is
15:    // derived from the following HelperFunction class, which only defines one
16:    // virtual method, Run( ), that executes the helper function. Any arguments
17:    // to the helper function and/or return values may be captured by the
18:    // subclass that implements the Run( ) method.
19:    //
20:    // In addition, the Run method gets an argument that provides a unique ID
21:    // that is assigned for each helper thread in a help session; the ID
22:    // will be in the range of 0 to the total number of helping threads, and can
23:    // be used to simplify and optimize synchronization in the helper function
24:    // between all threads executing in the help session. (A simple example
25:    // is where the helper function needs to return a value, and we would like
26:    // to avoid synchronizing concurrent stores of the return values by the
27:    // different helper threads.)
28:    //
29:    class HelperFunction {
30:    public:
31:      virtual void Run(int helperId) = 0;
32:    };
33:    // askForHelp: called by lock holder to start a help session in which waiting
34:    // threads may help with a given operation. Parameters include the helper function
35:    // (of a type that is a descendant of the HelperFunction class) describing the work
36:    // to perform, and an optional maxHelpers argument that allows the lock owner
37:    // to restrict the number of helper threads that are running concurrently
38:    // with it. If maxHelpers is not set, all available threads that are waiting for the
39:    // lock may call the helper function during the help session.
40:    //
41:    void askForHelp(HelperFunction* fun, int maxHelpers);
42:    // stopHelping: Ends the help session, waiting for all helper threads to be done and
43:    // resume waiting for the lock. After stopHelping returns, no other thread
44:    // may be running code that requires the embedded lock.
45:    //
46:    void stopHelping( );
47:    // Constructor
48:    //
49:    HelperLock(lock_t& lockToAugment) : embeddedLock(lockToAugment) {
50:    }
51:  private:
52:    /**
53:     * Data members
54:     */
55:    // Information about an ongoing help session.
56:    //
57:    // All members are protected by the embedded lock, except numHelperThreads
58:    // which is modified using atomic fetch-and-increment/decrement operations.
59:    //
60:    struct HelpSessionInfo {
61:      bool inHelpSession = false;
62:      HelperFunction* helpFun = NULL;
63:      // Info on current session
64:      int numHelpersNeeded = 0;
65:      volatile int numHelperThreads = 0;
66:    };
67:    lock_t& embeddedLock;
68:    HelpSessionInfo hsInfo;
69:    // A conditional variable helpersWaitCV is used to sleep/wakeup non-active
70:    // helpers. It is associated with an additional lock, waitingMutex, rather than the
71:    // embedded lock, because the embedded lock may be of any type, and may not
72:    // support conditional variables. We also add a counter to avoid a situation
73:    // where a non-active thread waits for a notification after the session that it
74:    // had entered has already been completed.
75:    //
76:    // numCompletedSessions is only modified when holding both the embedded lock
77:    // and the waiting mutex, and read when holding at least one of them.
78:    //
79:    pthread_mutex_t waitingMutex;
80:    volatile int numCompletedSessions = 0;
81:    pthread_cond_t helpersWaitCV; // conditional variable
82:  public:
83:    /**
84:     * Interface implementation
85:     */
86:    void lock( ) {
87:      while (true) {
88:        embeddedLock.lock( );
89:        // If not in a help session, it's a regular lock acquisition, return
90:        if (!hsInfo.inHelpSession) return;
91:        /**
92:         * In a help session, and I'm holding the embedded lock
93:         */
94:        bool holdingEmbeddedLock = true;
95:        int sessionId = numCompletedSessions;
96:        // are more helper threads needed?
97:        int myHelperId = fetchAndInc(&hsInfo.numHelperThreads);
98:        if (hsInfo.numHelpersNeeded == 0 ||
99:            myHelperId < hsInfo.numHelpersNeeded) {
100:         // I'm an active helper: call helper function with helper ID.
101:         //
102:         // Before the actual call, release the embedded lock to
103:         // let other threads become helpers, and to allow the lock
104:         // owner to terminate the help session. Also, capture a local
105:         // copy of the helper function pointer to avoid making it
106:         // volatile, as it will be read after releasing the embedded
107:         // lock.
108:         //
109:         HelperFunction *hfun = hsInfo.helpFun;
110:         embeddedLock.unlock( );
111:         holdingEmbeddedLock = false;
112:         hfun->Run(myHelperId);
113:         // Done helping, fall through to wait for the help session
114:         // to complete
115:       } else {
116:         // My help is not needed, release the embedded lock.
117:         embeddedLock.unlock( );
118:       }
119:       // Decrement numHelperThreads to allow lock owner to terminate
120:       // the help session.
121:       //
122:       fetch_and_dec(&hsInfo.numHelperThreads);
123:       /**
124:        * Done helping (or my help was not needed).
125:        * Wait for the help session to complete,
126:        * then loop back to re-acquire the embedded lock.
127:        */
128:       pthread_mutex_lock(&waitingMutex);
129:       // Check that we are still in the session we entered, to avoid
130:       // sleeping on the CV after the session completes
131:       // (and not being woken up); perform check in a loop to deal with
132:       // potential spurious wakeups.
133:       //
134:       // Note that numCompletedSessions is only read under
135:       // waitingMutex, held upon return from the wait call.
136:       //
137:       if (sessionId == numCompletedSessions) {
138:         // loop to deal with spurious wakeups
139:         while (sessionId == numCompletedSessions) {
140:           pthread_cond_wait(&helpersWaitCV,
141:                             &waitingMutex);
142:         } // end while
143:       } // end if
144:       pthread_mutex_unlock(&waitingMutex);
145:       // The help session we entered is done. Loop back.
146:     } // end while(true)
147:   } // end lock( )
148:   // askForHelp: HEL owner starts a help session
149:   void askForHelp(HelperFunction* fun, int maxHelpers) {
150:     // Initialize hsInfo and unlock the embedded lock
151:     //
152:     hsInfo.helpFun = fun;
153:     hsInfo.inHelpSession = true;
154:     hsInfo.numHelpersNeeded = maxHelpers;
155:     hsInfo.numHelperThreads = 0;
156:     embeddedLock.unlock( );
157:   } // end askForHelp
158:   // stopHelping: HEL owner terminates a help session
159:   void stopHelping( ) {
160:     // Acquire the embedded lock, preventing any other threads from joining
161:     // the help session.
162:     embeddedLock.lock( );
163:     hsInfo.inHelpSession = false; // Not really necessary until we release the
164:                                   // HEL, but it's cleaner to do it here.
165:     // No need to do anything if no one helped us (just an optimization)
166:     if (hsInfo.numHelperThreads == 0) return;
167:     /**
168:      * We had helpers. Update numCompletedSessions to indicate that the
169:      * current session is done, wake up any threads that may be blocking on
170:      * the CV, and wait for any helper threads that may still be executing
171:      * the helper function to finish.
172:      */
173:     // 1. Update numCompletedSessions: done while holding waitingMutex as a
174:     // thread that is about to block on the CV in this session has to read
175:     // numCompletedSessions atomically with its (potential) call to wait( ).
176:     //
177:     pthread_mutex_lock(&waitingMutex);
178:     numCompletedSessions++;
179:     pthread_mutex_unlock(&waitingMutex);
180:     // 2. Wakeup threads blocked in the session
181:     pthread_cond_broadcast(&helpersWaitCV);
182:     // 3. Wait for all helping threads to complete.
183:     //
184:     while (hsInfo.numHelperThreads != 0) ;
185:     // Additional notes:
186:     //
187:     // a) Waiting after the call to broadcast is an optimization, so
188:     // that the logic run by the blocked threads once they wake up can be
189:     // executed while other helping threads are still executing the helper function
190:     // (if the HEL owner finishes the last available sub-operation long
191:     // before some helper threads finish their sub-operations, that can make a
192:     // difference.)
193:     //
194:     // b) We could add another counter to distinguish active helper threads
195:     // (that may be running the helper function) from those that do not,
196:     // and wait only for the active ones. The others will notice that the
197:     // help session is done at some point later on and will not touch
198:     // any state that the HEL owner cares about.
199:   } // end stopHelping
200:   void unlock( ) {
201:     embeddedLock.unlock( ); // simply release embedded lock
202:   } // end unlock
203: };
----End EPS2 -----------------------------------

In embodiments in which an approach similar to EPS2 is employed, a “HelpSessionInfo” object (whose elements are defined in lines 60-66 and set in lines 152-155) may contain a pointer to the function (“helpFun”) that is to be applied by helpers during a help session. The lock holder may set a limit on the number of concurrent helpers in some embodiments, e.g., using the “numHelpersNeeded” variable. The equivalent of a “numCompletedSessions” variable may be used to indicate whether the current help session has been completed or not in some embodiments. A helper saves the “numCompletedSessions” value (line 95) before doing its sub-operations of the critical section, and determines that the session has ended if the “numCompletedSessions” value has changed (this is checked in lines 137-143); the lock owner modifies “numCompletedSessions” (lines 177-179) to end a help session. In the “askForHelp” function (lines 149-157 of EPS2), which corresponds to starting a help session in various embodiments, the HEL holder sets the help session information (“hsInfo”) arguments that will be read by helpers. The HEL holder then releases the embedded lock, enabling potential helpers to (a) acquire the embedded lock (line 88), (b) update the “numHelperThreads” variable (line 97) using an atomic operation (e.g., fetch-and-increment) to indicate that they are active helpers, (c) release the embedded lock (line 111) and (d) perform part of the critical section work. The release of the embedded lock prior to performing the sub-operation of the critical section may enable other helpers to also perform their sub-operations in various embodiments. The updating of “numHelperThreads” (or some similar variable/signal) may comprise providing an indication by a given helper that at least one sub-operation of the critical section is going to be implemented by the helper (in effect notifying other data accessors that the sub-operation has been claimed by the helper) in various embodiments. In EPS2, a broadcast primitive associated with a conditional variable is used (line 181) to notify one or more other data accessors that the help session has ended; in some embodiments, other approaches to signal the end of the help session may be used. In various embodiments, any of numerous variations of the basic logic illustrated in EPS2 may be employed to achieve the overall objectives of shortening critical sections by enabling waiting data accessors to perform some of the work of the critical section concurrently.
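To illustrate how application code might drive a help session through the interface shown in EPS2, the following C++ sketch defines a HelperFunction subclass and a critical section that uses it. The ScaleFunction class, the doubling operation, and the choice of std::mutex as the embedded lock are illustrative assumptions rather than part of EPS2, and the owner's own call to Run( ) mirrors EPS1's convention that help is not guaranteed.

-----Illustrative sketch (not part of the disclosure)-------
#include <atomic>
#include <mutex>
#include <vector>

// Assumes the HelperLock template (and its nested HelperFunction class) from EPS2
// is visible here; everything else below is illustrative.
using HL = HelperLock<std::mutex>;

struct ScaleFunction : public HL::HelperFunction {
    std::vector<std::vector<double>>* regions = nullptr;  // partitioned shared data object
    std::atomic<int> nextRegion{0};                       // next unclaimed region index

    void Run(int /*helperId*/) override {
        // Owner and helpers all execute this; fetch_add keeps their regions distinct.
        int idx = nextRegion.fetch_add(1);
        while (idx < static_cast<int>(regions->size())) {
            for (double& x : (*regions)[idx]) x *= 2.0;
            idx = nextRegion.fetch_add(1);
        }
    }
};

void scaleUnderHEL(HL& hel, std::vector<std::vector<double>>& regions) {
    hel.lock();                               // acquire the help-enabled lock
    ScaleFunction fun;
    fun.regions = &regions;
    hel.askForHelp(&fun, /*maxHelpers=*/0);   // 0: any number of waiters may help
    fun.Run(-1);                              // owner works too; helpers are optional
    hel.stopHelping();                        // returns once no helper is still in Run( )
    hel.unlock();                             // release the help-enabled lock
}
----End sketch-----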
The details of various aspects of the implementation (such as the use of a waiting mutex etc.) may differ in different embodiments. For example, in EPS2, the helper function is called once by each helper; in some embodiments a given helper may instead call the helper function repeatedly (e.g., after acquiring the embedded lock between successive calls to the helper function). One advantage of the latter approach would be to optimize the synchronization between the lock owner and the helpers, taking advantage of the fact that helpers are executing code under the embedded lock between successive invocations of the helper function. Also, in EPS2, synchronization between/among the helpers and the holder, as well as the logic to decide when to return from the helper function, is implemented in the helper function itself; in some embodiments, alternate approaches may be taken towards synchronization and return logic. In at least one embodiment, built-in helper functions for common use cases (such as parallelizing a “for” loop) may be provided as part of the HEL design. In another variation, in some embodiments, the “stopHelping” function may return values that may potentially be useful for the lock owner, such as the number of helpers which participated in the help session, the fraction of the work that was performed by helpers, how many NUMA nodes were involved in the help session, and so on. Such information may be used by the lock owner, for example, to determine whether it is worthwhile to begin another help session (e.g., for another part of the critical section). In one embodiment, the lock owner may indicate (e.g., via parameters) the specific types of information to be returned from the “stopHelping” function. The approach illustrated in the EPS2 example is generic, and may be applied to augment a variety of different lock types without, for example, re-implementing (or even recompiling) the code of the underlying lock types in various embodiments. In some embodiments, as in the approaches discussed above in the context of EPS1 and EPS2, at some stage (e.g., in operations corresponding to line 184 of EPS2) an HEL holder may wait until all the helpers have completed their sub-operations before proceeding with other work. In other embodiments, a somewhat different approach may be taken. In such an embodiment, if the primary lock holder or helpers discover that no parallelizable work of the critical section operation is left unassigned, that is, that there is no additional helping that can be taken up, they may return from the lock operation while residual helpers (helpers that have begun but not yet finished their sub-operations) are still running. This may be referred to as an “early return” optimization in some embodiments. Early return optimization may allow at least some additional work to be completed more quickly than if all helpers have to complete their sub-operations before any of the data accessors can proceed. Note that new help sessions may not be started while an existing session is in this type of “almost done” state in various embodiments. Note also that in some cases, this type of optimization may not necessarily be safe, e.g., if existing code is being converted naively to use help-enabled locks. Consider an example scenario where one data accessor DA1 obtains the HEL, and some of the waiting data accessors help with I/O that is to be performed under the lock.
Normally, when DA1 returns from the lock operation, the I/O started within the critical section would be expected to be finished, but if DA1 were permitted to return early, before all the helpers were done, this would not necessarily be the case. However, for some types of use cases (e.g., a scenario in which a hash table is being re-sized with the aid of one or more helpers), early return optimization may be safe and potentially beneficial. Even in embodiments in which these types of early returns are permitted, overlap between help sessions may be impermissible. In one embodiment, a parameter indicating whether early returns are permitted may be passed when a help-enabled lock is initialized. In other embodiments, a parameter indicating whether early returns are permitted may be passed as part of individual lock acquisition requests. FIG.2illustrates example timelines of completing a critical section with and without the use of a help-enabled lock, according to at least some embodiments. In the depicted example scenario, a critical section may comprise 12 sub-operations270(S1-S12), with each sub-operation expected to require approximately similar levels of effort or resources (e.g., some combination of CPU cycles, I/O operations, etc. at a given server or computing device). Timeline290shows two example of the total amount of time it may take to finish the sub-operations270. If only the lock holder performs the sub-operations S1-S12, critical section duration 205 may comprise approximately 12 units of time (as indicated by the notation T0+12t, where T0 is the starting time of the critical section), one unit for each of the sub-operations performed sequentially in the depicted example. In contrast, consider an alternative scenario in which three other data accessors (apart from the lock holder) happen to attempt to acquire a help-enabled lock (of the kind discussed above in the context ofFIG.1, EPS1 and/or EPS2) being used to protect the critical section, say within the first two time units. If such data accessors become helpers during a help session initiated by the lock owner, the work may be distributed among the helpers and the lock owner and completed within approximately four units of time as shown in duration 207 in the depicted embodiment. Depending on the sequence and arrival times of the other data accessors that are unable to acquire the lock, the work of the critical section may be distributed as follows. The lock owner may perform sub-operations275(e.g., S1, S4, S8 and S12). A first helper H1, which begins waiting for the lock shortly after the holder acquires it, may perform sub-operations276A (e.g., S2, S6 and S10); a second helper may perform sub-operations276B (e.g., S3, S7 and S11); and the third helper may perform sub-operations276C (e.g., S5 and S9). The total time taken may be reduced from 12 units in the lock-holder-only scenario to approximately 4 units (T0+4t) in the depicted 3-helper scenario. Of course, the total time may be reduced by different factors depending on the relative arrival times of the helpers; the scenario depicted inFIG.2is not intended to be limiting with regard to the potential benefits of using help-enabled locks. In at least some embodiments, as indicated earlier, a critical section may comprise several different types of operations, one or more of which may potentially be sped up by using respective help sessions of the kind introduced earlier. 
Other portions of the critical section activities may sometimes be harder to parallelize.FIG.3illustrates example scenario in which a critical section may comprise a plurality of help sessions, according to at least some embodiments. In the depicted example scenario, a critical section starts (indicated by label302) at time Tstart along timeline390. A first phase of the critical section may comprise a set of operations340A that may not be subdivided for distribution among lock waiters (e.g., because the operations are inherently single-threaded, because the overhead or complexity of subdividing the operations is too high relative to the potential benefit, and/or for other reasons) in the depicted embodiment. In a second phase of the critical section, a help session350A may be initiated by the lock holder. During this phase, up to N1 helper threads (if available) and the lock holder may collectively perform a set of sub-operations {Sa} of the critical section, e.g., applying one or more functions to respective portions of a shared data object SDO1. Help session350A may be followed by another phase of holder-only operations340B in the depicted example scenario. Then, a second help session350B may be initiated, in which a second set of sub-operations {Sb} may be distributed among up to N2 helpers, if available, in the depicted embodiment. The optimum or maximum number of helpers that may participate in a given help session (e.g., N1 or N2, in sessions350A and350B respectively) may depend on, for example, the nature of the shared data on which the work is to be performed, how easy it is to partition or iterate over the shared data, and so on. In some embodiments in which a critical section comprises multiple help sessions, the same shared data (e.g., SDO1) may be accessed during different help sessions; in other embodiments, different shared data objects (e.g., SDO1 in session350A, SDO2 in session350B) may be accessed/processed in different help sessions. Similarly, in some embodiments, the same helper function may be implemented during different help sessions of a given critical section, while in other embodiments, different helper functions may be applied during the different help sessions (either to the same shared data, or to different shared data objects). In the example depicted inFIG.3, the final phase of the critical section may comprise a third set of one or more holder-only operations340C, and the critical section may end (label312) at the conclusion of this phase. Note that the sequence illustrated inFIG.3is not intended to be limiting; a given critical section may comprise one or more help sessions, interspersed with any number (including zero) of holder-only operation phases of the kind shown inFIG.3in various embodiments. Note also that, as indicated earlier, the initiation of a help session by a lock holder in various embodiments does not necessarily imply that any helpers will participate in the help session. A lock owner may for example start a help session, and end up (if no helpers arrive or volunteer) having to complete all the sub-operations that could potentially have been implemented by helpers. In at least one embodiment, a waiter for a help-enabled lock may not necessarily participate in a help session; e.g., helping may be voluntary, and some data accessors may decide based on various factors or conditions not to participate in a given help session. 
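The structure of FIG. 3 can be sketched in terms of the same illustrative interface: a single critical section that interleaves holder-only phases with two help sessions. The helper-function objects, phase placeholders, and helper limits below are illustrative rather than elements of the disclosed embodiments, and the sketch builds on the HL alias introduced in the previous sketch.

-----Illustrative sketch (not part of the disclosure)-------
// A critical section with two help sessions (as in FIG. 3), assuming the EPS2 interface.
void criticalSectionWithTwoSessions(HL& hel,
                                    HL::HelperFunction& sessionFunA,  // sub-operations {Sa}
                                    HL::HelperFunction& sessionFunB,  // sub-operations {Sb}
                                    int n1, int n2) {
    hel.lock();
    // ... holder-only operations 340A would run here ...
    hel.askForHelp(&sessionFunA, n1);  // help session 350A: at most n1 helpers
    sessionFunA.Run(-1);               // owner participates; helpers are optional
    hel.stopHelping();                 // owner regains exclusive access
    // ... holder-only operations 340B would run here ...
    hel.askForHelp(&sessionFunB, n2);  // help session 350B: at most n2 helpers
    sessionFunB.Run(-1);
    hel.stopHelping();
    // ... holder-only operations 340C would run here ...
    hel.unlock();
}
----End sketch-----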
FIG. 4 is a flow diagram illustrating aspects of operations which may be performed by data accessors in a computing environment in which help-enabled locks are used, according to at least some embodiments. As shown in block 401, a data accessor DA1 may acquire a help-enabled lock HEL1 (similar to HEL 132 of FIG. 1), associated with a critical section CS1 and a shared data object SDO-a (on which, for example, one or more operations or tasks of CS1 may be performed) in some embodiments. At least one operation of CS1 may be partitionable or divisible relatively easily in various embodiments, e.g., the operation may be sub-divided into sub-operations that can safely be performed or run in parallel if helpers happen to become available to perform the sub-operations. The HEL1 holder, DA1, may set one or more helper function parameters or arguments in various embodiments (block 404), e.g., as part of the preparation for a help session in which other data accessors (that have attempted to acquire HEL1 but have not yet acquired it because it is held by DA1) may participate. The arguments/parameters may, for example, include the number of sub-operations that can potentially be performed in parallel, an iterator to be used by a helper to identify that helper's sub-operation(s), the work (e.g., function) to be performed in a given sub-operation, and so on in different embodiments. In at least some embodiments, DA1 may reserve or set aside some of the sub-operations for itself—that is, not all the sub-operations may necessarily be made available for potential helpers. DA1 may initiate a help session, enabling waiters on HEL1 to perform one or more sub-operations of CS1 (block 407) in the depicted embodiment. The specific actions that cause a help session to be initiated may vary in different embodiments. In some embodiments, for example, invoking a function similar to the askForHelp function introduced above in the context of EPS1 and/or EPS2 (which may include setting some of the helper function parameters corresponding to block 404) may constitute initiating the help session. In other embodiments, providing a signal using some other mechanism to data accessors that may be waiting for HEL1 (or may arrive later on while HEL1 is held by DA1) may constitute initiating the help session. After the help session is initiated, zero or more of the CS1 sub-operations may be performed by helper data accessors that were unable to acquire HEL1 because HEL1 was held by DA1 in the depicted embodiment (block 413). The number of active helpers (data accessors that actually manage to perform CS1 sub-operations) may depend on various factors in different embodiments, such as limits that may be set (e.g., by DA1) on the maximum number of helpers, the number of processors/cores/NUMA nodes available for the data accessors, the number of sub-operations into which the CS1 operation can be cleanly divided, the relative timing of the attempts by the other data accessors to acquire HEL1, and so on. Several helpers may perform their sub-operations over time periods that overlap at least in part with one another in some embodiments—e.g., at a given point in time, multiple helpers may be working on respective sub-operations. A given helper may perform multiple sub-operations in at least one embodiment. DA1 itself may perform zero or more of the CS1 sub-operations in various embodiments (block 410).
These sub-operations may, for example, comprise a set of sub-operations that DA1 had reserved for itself in some embodiments, and/or any sub-operations that were not taken up by helpers. Some or all of the sub-operations performed by DA1 may also potentially be performed at least partly in parallel with other sub-operations being performed by helpers. The overall benefit of the parallelization, with respect to shortening or speeding up the critical section CS1, may depend on various factors in different embodiments, such as the number of concurrent helpers, the number of sub-operations into which the parallelizable CS1 operation has been divided, and so on. In various embodiments, DA1 may eventually end the help session (block416), e.g., after it has determined that there are no remaining sub-operations that have (a) not yet been taken up by helpers or (b) been completed (either by helpers or by DA1 itself). DA1 may wait for any in-progress helpers to complete their respective portions of the work in such embodiments. If there are additional (non-parallelizable) portions of CS1 that remain, DA1 may perform them in the depicted embodiment (block419). Note that while in the example workflow illustrated inFIG.4, only a single help session is started by the HEL1 holder, in general a given critical section may include any number of help sessions (and any number of phases of non-parallelizable operations) in various embodiments, as discussed in the context ofFIG.3. After the operations of CS1 are completed, DA1 may release HEL1 in the depicted embodiment (block422). This may enable one of the waiting data accessors (if any) to acquire HEL1 and perform its own critical section (which in turn may comprise zero or more help sessions) in various embodiments. FIG.5illustrates example metadata associated with helper functions of help-enabled locks, according to at least some embodiments. Such metadata may be generated, for example, by a holder of a help-enabled lock (similar to HEL132ofFIG.1), and shared with potential helpers. The metadata502may for example include an indication of the number of sub-operations505which the HEL holder is willing to enable helpers to perform. The number of sub-operations may, for example, correspond to the number of portions/partitions into which a shared data object (SDO) protected using HEL1 is divided for the purposes of a help session. In at least some embodiments, the metadata501may include an iterator510, e.g., a mechanism which can be used by a helper to identify its portion of the work to be done in the critical section. Note that at least in one embodiment, respective sub-operations may not necessarily involve processing distinct portions of a shared data object; instead, for example, the respective sub-operations may involve performing different computations on the same data. In at least some embodiments, the metadata502may include one or more pointers515(similar in concept to the nextRegionIndex variable of EPS1 introduced above) to available partitions or sub-operations, which can be used by helpers to claim specific sub-operations. In various embodiments, such pointers may be modified by individual helpers, e.g., under the protection of a synchronization mechanism, to indicate that they have claimed the corresponding sub-operations. In at least some implementations, an atomic update operation such as a fetch-and-increment or fetch-and-update operation may be used to modify the pointer(s). 
In at least one embodiment, the metadata501may also include an indication520(such as the logical equivalent of a function pointer, or a function name) of the actual work that is to be done by helpers in their respective sub-operations. Note that at least in some embodiments, not all the helpers may perform the same functionality in their respective sub-operations—e.g., the work-to-be-done metadata may in effect comprise an array of pointers to different functions, with each function to be applied to a respective portion (or all) of a shared data object. Other helper function metadata element combinations may be used in some embodiments than those shown inFIG.5. A number of different data structures may be used to represent help-enabled locks similar to those discussed earlier, such as HEL132ofFIG.1, in various embodiments.FIG.6illustrates contents of an example data structure used to represent a help-enabled lock, according to at least some embodiments. A data structure for a help-enabled lock601may, for example, comprise an instance of an embedded lock605(as in the pseudo-code section EPS2 discussed above). Such an embedded lock605may use a pre-existing lock design of the computing environment in which the data accessors operate in some embodiments. The HEL design may be able to leverage the lock and unlock functionality of the embedded lock in some embodiments, e.g., as discussed above in the context of EPS2, in effect augmenting the embedded lock with help-enabled functionality, without requiring the code of the embedded lock to be modified or even recompiled. A help-session-information element610of the HEL601may include, for example, an indication (session-in-progress612) whether a help session is currently in progress or not, an indication of the maximum number of helpers needed (num-helpers-needed614) during the help session, a count616of helpers currently performing sub-operations of the critical section, and/or a work-to-be-done function pointer618in some embodiments. In some embodiments, the HEL data structure may include a session-completion-indicator615, used by the HEL holder to signal when a help session associated with the HEL is complete. In at least one embodiment, one or more additional synchronization-related primitives, such as a waiting mutex (mutual exclusion lock)610and or a conditional variable620(which may be used in conjunction with the waiting mutex) may be included as part of an HEL data structure. Such a mutex and/or conditional variable may be used, for example (as in EPS2) by the HEL holder to communicate with waiting data accessors regarding the completion of a help session. In at least one embodiment, an HEL data structure may comprise a different combination of elements than those shown inFIG.6. In one embodiment, for example, an embedded lock may not be included, a mutex may not be included, and/or a conditional variable may not be included. FIG.7is a flow diagram illustrating aspects of operations which may be performed to process an acquisition request for a help-enabled lock, according to at least some embodiments. In the depicted embodiment, a help enabled lock design in which a given instance of an HEL includes an embedded lock, similar to the approach illustrated in EPS2, may be used; the operations illustrated inFIG.7correspond approximately to the lock( ) function beginning on line 86 of EPS2. 
The processing of an acquisition request (e.g., the equivalent of HEL1 lock( )) of a given data accessor DA1 for an instance of a help-enabled lock HEL1 may begin in block701ofFIG.7. DA1 may acquire the embedded lock EL1 of HEL1, e.g., after waiting if needed (block704). After acquiring EL1, DA1 may attempt to determine whether a help session associated with a critical section protected by HEL1 is currently underway (block707). If such a help session is not in progress, this may indicate that DA1 has acquired HEL1 itself; that is, the acquisition of EL1 at a point of time when no help session is currently in progress may correspond to the acquisition of HEL1 in some embodiments. The processing of the acquisition request may be complete at this point (block710), and DA1 may proceed to implement its critical section (and may, at least potentially, initiate one or more help sessions to help speed up the critical section). If a help session HS1 is underway (as determined in operations corresponding to block707), DA1 may be a potential helper for the holder of HEL1 in the depicted embodiment. DA1 may, for example, save a current value of a session completion indicator and increment a helpers-in-progress variable in some embodiments (block713). The incrementing of the helpers-in-progress indicator may be performed in at least some implementations using an atomic operation such as a fetch-and-add operation or a fetch-and-increment operation supported by the computing environment's hardware and/or operating system. In embodiments in which the holder of HEL1 may impose a limit on the maximum number of helpers, DA1 may check whether more helpers are needed in the current help session HS1 (block716). If no more helpers are needed, DA1 may release EL1 (block719), decrement the helpers-in-progress count (block731), and wait for the help session to end (block734) before again attempting to acquire HEL1 (block701onwards) in the depicted embodiment. In at least some embodiments, a mutex associated with a conditional variable (for which a broadcast primitive may be used in some cases by the HEL1 holder to signal to waiting data accessors like DA1 when HS1 ends) may be used to check a session completion indicator. If a result of the test corresponding to block716indicates that more helpers are needed for HS1, DA1 may become an active helper in the depicted embodiment. As such, DA1 may obtain needed information for its portion or sub-operation of the critical section (block722), such as a pointer to a work-to-be-done function, from HS1's metadata, in various embodiments. DA1 may then release the embedded lock EL1, e.g., enabling other helpers to participate in the helping session HS1 (block725). As an active helper, DA1 may then proceed to perform its sub-operation of the session HS1 (block728). After completing its sub-operation, DA1 may decrement HS1's helpers-in-progress indicator (block731), e.g., using an atomic operation such as a fetch-and-decrement or fetch-and-add operation in various embodiments. DA1 may then wait for the current help session HS1 to end (block734), and again attempt to acquire HEL1 after the session ends (e.g., once again attempting to acquire HEL1 by issuing HEL1.lock( ), resulting in operations corresponding to block701onwards) in the depicted embodiment. FIG.8is a flow diagram illustrating aspects of operations which may be performed to terminate a help session associated with a help-enabled lock, according to at least some embodiments. 
In the depicted embodiment, a help enabled lock design in which a given instance of an HEL includes an embedded lock, similar to the approach illustrated in EPS2, may be used; the operations illustrated inFIG.8correspond approximately to the stopHelping( ) function beginning on line 159 of EPS2. As shown in block801ofFIG.8, the processing of a help session termination function for a helping session HS1 initiated by a data accessor DA1 which is the owner of a help-enabled lock HEL1 may begin. The embedded lock EL1 of HEL1 may be acquired by DA1, e.g., after waiting if needed (block804). DA1 may then set HS1's session-in-progress flag or indicator to "false" in the depicted embodiment, to signal to any potential new helpers that they cannot participate in HS1 (block807). If no helpers are in the process of performing sub-operations of the critical section, as determined in block810, no further work may be needed, DA1 may return from the function it entered in operations corresponding to block801, and the help session HS1 may be terminated (block813). If there are some helpers that have begun but not yet finished their sub-operations, as also detected in block810, a session completion indicator (e.g., a num-sessions-completed counter) may be updated, to signal to any waiting helpers that the session which they participated in has terminated (block816) in the depicted embodiment. In at least some embodiments, such updates may be performed after acquiring a mutex associated with HEL1. After updating the session completion indicator, in at least some embodiments DA1 may wait for any outstanding helpers to complete their sub-operations and decrement the num-helpers-in-progress indicator to zero (block819). After all the helpers have signaled that they are finished, the help session HS1 may be considered terminated. It is noted that in various embodiments, at least some operations other than those illustrated in the flow diagrams ofFIG.4,FIG.7and/orFIG.8may be performed to implement the help-enabled locking techniques described above. Some of the operations shown may not be implemented in some embodiments, may be implemented in a different order, or in parallel rather than sequentially. In various embodiments, implementations of the help-enabled locking techniques described above may be incorporated into dynamic locking libraries made available within various versions of operating systems (such as versions of Linux). In at least one embodiment, a set of interposition libraries (similar to the LD_PRELOAD libraries of some versions of Linux) that expose standard locking application programming interfaces (APIs) may be used for exposing the algorithms to applications. In an embodiment in which interposition libraries are used, the application code may not have to be modified or recompiled to take advantage of the capabilities of the algorithms described herein; instead, the algorithms may be deployed simply by changing an environment variable (e.g., the LD_PRELOAD environment variable). In at least some embodiments, a server that implements a portion or all of one or more of the technologies described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.FIG.9illustrates such a general-purpose computing device9000.
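Before turning to the hardware ofFIG.9, the termination path ofFIG.8just described can be sketched in the same style as the earlier fragments. The start_helping function shown first is hypothetical: session initiation is covered earlier in the document (aroundFIG.4) and is outside this excerpt, so its body is only a plausible counterpart to the termination logic. The broadcast issued when the session-in-progress flag is cleared is likewise a sketch-level simplification; the exact wake-up coordination belongs to EPS2, which is not reproduced here.

```python
def start_helping(hel, work_fn, shared_obj, num_helpers):
    # Hypothetical session-initiation step (described around FIG. 4, outside this
    # excerpt): the HEL holder, which already holds the embedded lock, publishes
    # the work metadata, marks a session as in progress, and releases the embedded
    # lock so that waiting data accessors can join as helpers.
    hel.session_meta.work_to_be_done = work_fn
    hel.session_meta.shared_object = shared_obj
    hel.session_meta.num_helpers_needed = num_helpers
    hel.session_in_progress = True
    hel.embedded_lock.release()

def stop_helping(hel):
    """Sketch of the termination path of FIG. 8 (blocks 801-819), run by the HEL holder."""
    hel.embedded_lock.acquire()                    # block 804: reacquire embedded lock
    with hel.session_cond:
        hel.session_in_progress = False            # block 807: no new helpers may join
        hel.session_cond.notify_all()              # sketch-level wake-up of waiting helpers
    if hel.helpers_in_progress == 0:               # block 810
        return                                     # block 813: nothing more to do; the holder
                                                   # keeps the embedded lock until HEL unlock
    with hel.session_cond:
        hel.sessions_completed += 1                # block 816: update completion indicator
        hel.session_cond.notify_all()              # broadcast to waiting helpers
        while hel.helpers_in_progress > 0:         # block 819: wait for unfinished helpers
            hel.session_cond.wait()
    # The help session is now considered terminated; the holder still owns the HEL.
```

Under this sketch, the holder keeps the embedded lock across stop_helping and would release it only when it later unlocks the HEL; that unlock path is assumed rather than taken from this excerpt.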
In the illustrated embodiment, computing device9000includes one or more processors9010coupled to a system memory9020(which may comprise both non-volatile and volatile memory modules) via an input/output (I/O) interface9030. Computing device9000further includes a network interface9040coupled to I/O interface9030. In various embodiments, computing device9000may be a uniprocessor system including one processor9010, or a multiprocessor system including several processors9010(e.g., two, four, eight, or another suitable number). Processors9010may be any suitable processors capable of executing instructions. For example, in various embodiments, processors9010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors9010may commonly, but not necessarily, implement the same ISA. In some implementations, graphics processing units (GPUs) may be used instead of, or in addition to, conventional processors. NUMA architectures may be used in some embodiments. System memory9020may be configured to store instructions and data accessible by processor(s)9010. In at least some embodiments, the system memory9020may comprise both volatile and non-volatile portions; in other embodiments, only volatile memory may be used. In various embodiments, the volatile portion of system memory9020may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM or any other type of memory. For the non-volatile portion of system memory (which may comprise one or more NVDIMMs, for example), in some embodiments flash-based memory devices, including NAND-flash devices, may be used. In at least some embodiments, the non-volatile portion of the system memory may include a power source, such as a supercapacitor or other power storage device (e.g., a battery). In various embodiments, memristor based resistive random access memory (ReRAM), three-dimensional NAND technologies, Ferroelectric RAM, magnetoresistive RAM (MRAM), or any of various types of phase change memory (PCM) may be used at least for the non-volatile portion of system memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory9020as code9025(which may for example comprise the code for help-enabled lock algorithms) and data9026(which may for example include the shared data objects whose accesses are protected using the help-enabled lock algorithms, locking related metadata and the like). In one embodiment, I/O interface9030may be configured to coordinate I/O traffic between processor9010, system memory9020, and any peripheral devices in the device, including network interface9040or other peripheral interfaces such as various types of persistent and/or volatile storage devices. In some embodiments, I/O interface9030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory9020) into a format suitable for use by another component (e.g., processor9010). In some embodiments, I/O interface9030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. 
In some embodiments, the function of I/O interface9030may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface9030, such as an interface to system memory9020, may be incorporated directly into processor9010. Network interface9040may be configured to allow data to be exchanged between computing device9000and other devices9060attached to a network or networks9050, such as other computer systems or devices as illustrated inFIG.1throughFIG.8, for example. In various embodiments, network interface9040may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface9040may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory9020may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIG.1throughFIG.8for implementing embodiments of the corresponding methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device9000via I/O interface9030. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device9000as system memory9020or another type of memory. In some embodiments, one or more computer-accessible storage media may comprise instructions that when executed on or across one or more processors implement the techniques described. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface9040. Portions or all of multiple computing devices such as that illustrated inFIG.9may be used to implement the described functionality in various embodiments; for example, software components running on a variety of different devices and servers may collaborate to provide the functionality. In some embodiments, portions of the described functionality may be implemented using storage devices, network devices, or special-purpose computer systems, in addition to or instead of being implemented using general-purpose computer systems. The term “computing device”, as used herein, refers to at least all these types of devices, and is not limited to these types of devices. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. 
SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link. FIG.10illustrates an example cloud computing environment in which help-enabled locking techniques may be employed in at least some embodiments. As shown, cloud computing environment1002may include cloud management/administration resources1022, software-as-a-service (SAAS) resources1030, platform-as-a-service (PAAS) resources1040and/or infrastructure-as-a-service (IAAS) resources1050. Individual ones of these subcomponents of the cloud computing environment1002may include a plurality of computing devices (e.g., devices similar to device9000shown inFIG.9) distributed among one or more data centers in the depicted embodiment, such as devices1032A,1032B,1042A,1042B,1052A, and1052B. A number of different types of network-accessible services, such as database services, customer-relationship management services, machine learning services and the like may be implemented using the resources of the cloud computing environment in various embodiments. In the depicted embodiment, clients or customers of the cloud computing environment1002may choose the mode in which they wish to utilize one or more of the network-accessible services offered. For example, in the IAAS mode, in some embodiments the cloud computing environment may manage virtualization, servers, storage and networking on behalf of the clients, but the clients may have to manage operating systems, middleware, data, runtimes, and applications. If, for example, a client wishes to use IAAS resources1050for some desired application for which locking techniques of the kind described earlier are used, the clients may identify one or more virtual machines implemented using computing devices1052(e.g.,1052A or1052B) as the platforms on which the applications are being run, and ensure that the appropriate lock management libraries/modules1044D are installed/available on those virtual machines. In the PAAS mode, clients may be responsible for managing a smaller subset of the software/hardware stack in various embodiments: e.g., while the clients may still be responsible for application and data management, the cloud environment may manage virtualization, servers, storage, network, operating systems as well as middleware. Lock management libraries/modules such as1044C may be pre-deployed to, and run at, at least some PAAS resources (e.g.,1042A,1042B etc.) for applications on various clients in different embodiments. In the SAAS mode, the cloud computing environment may offer applications as a pre-packaged service (including the underlying lock management components such as1034A or1034B), managing even more of the software/hardware stack in various embodiments—e.g., clients may not even have to explicitly manage applications or data. The administration resources1022may perform resource management-related operations (such as provisioning, network connectivity, ensuring fault tolerance and high availability, and the like) for all the different modes of cloud computing that may be supported in some embodiments. Clients may interact with various portions of the cloud computing environment using a variety of programmatic interfaces in different embodiments, such as a set of APIs (application programming interfaces), web-based consoles, command-line tools, graphical user interfaces and the like. 
Note that other modes of providing services, such as hybrid public-private clouds and the like, may also support the locking algorithms described earlier in at least some embodiments. The various methods as illustrated in the Figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the method steps may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
70,057
11861417
DESCRIPTION OF EMBODIMENTS The following describes embodiments of an operation assist system, an operation assist method, and an operation assist program according to the present application in detail based on the drawings. Note that the present invention is not limited by the embodiments described below. First Embodiment Configuration of First Embodiment First, a configuration of an operation assist system according to a first embodiment will be described usingFIG.1.FIG.1is a diagram showing an example of the configuration of the operation assist system according to the first embodiment. In the present embodiment, the operation assist system is realized by a terminal10that is operated by a user. As shown inFIG.1, the terminal10includes a dialogue interface unit11, a chatbot12, a first application13a,an OS (Operating System)13b,a sensor13c,and a second application14. The chatbot12includes a peripheral information acquisition unit121, a peripheral information accumulation unit122, a scenario control unit123, an application control unit124, and an operation assist definition information holding unit125. The operation assist definition information holding unit125may be provided outside the chatbot12. The dialogue interface unit11accepts input of information from a user and outputs information to the user. Here, the dialogue interface unit11displays a chat screen that includes an input field and an output field in a display of the terminal10or the like, for example. The dialogue interface unit11accepts input of text via an operation that is performed by the user on an input device such as a keyboard. The dialogue interface unit11may also accept input of voice from a microphone and convert the voice to text. Note that the input device referred to here includes not only a physical device but also a virtual keyboard or the like that is displayed in a touch panel display. The chatbot12is a program that automatically responds to input text using text or the like. The chatbot12can also execute an application included in the terminal10, following a request indicated by the input text. The terminal10includes one or more applications as the first application(s)13a.Each first application13aexecutes processing according to a user operation or sensing performed by a predetermined sensor. The terminal10includes one or more applications as the second application(s)14. Each second application14executes processing according to control performed by the chatbot12. Note that the second application(s)14may overlap with the first application(s)13a,and can execute processing according to a user operation or sensing performed by a predetermined sensor, similarly to the first application(s). Here, the peripheral information acquisition unit121of the chatbot12acquires at least any one of information relating to a first application13athat is running in the terminal, information relating to control of the terminal, and information that can be acquired from a sensor included in the terminal, as peripheral information. The peripheral information includes operations performed by the user with respect to the first application13a,information displayed by the first application13a,and the like. Furthermore, the peripheral information may also include information relating to control of the terminal, which can be acquired from the OS13bor the like, such as a date and time, a login name, information of applications running in the terminal, network connection information, and hardware information. 
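As a concrete picture of the data that flows between these units, the following is a small Python sketch of a peripheral-information record and accumulation unit. The class and method names are invented for illustration and are not part of the embodiment; they only reflect the idea that each acquired item is stored with an acquisition time, an item name, and a value.

```python
import datetime
from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class PeripheralInfoRecord:
    # One accumulated item: when it was acquired, which item it is, and its value.
    acquired_at: datetime.datetime
    name: str            # e.g. "running app", "case ID", "status"
    value: Any           # None stands in for the "null" value in the examples

class PeripheralInfoAccumulationUnit:
    """Illustrative stand-in for the peripheral information accumulation unit (122)."""
    def __init__(self) -> None:
        self._records: List[PeripheralInfoRecord] = []

    def store(self, name: str, value: Any) -> None:
        self._records.append(
            PeripheralInfoRecord(datetime.datetime.now(), name, value))

    def latest(self, name: str) -> Optional[Any]:
        # Return the most recently acquired value for an item, if any has been stored.
        for record in reversed(self._records):
            if record.name == name:
                return record.value
        return None
```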
Also, the peripheral information may also include acceleration, GPS information, and the like, which can be acquired from the sensor13cincluded in the terminal10. For example, whether or not a predetermined application is running is information that can be acquired from the OS, and is an example of information relating to control of the terminal. The peripheral information acquired by the peripheral information acquisition unit121is accumulated in the peripheral information accumulation unit122. When the peripheral information accumulated in the peripheral information accumulation unit122and information that is input to the dialogue interface unit11satisfy a predetermined condition, the scenario control unit123causes the dialogue interface unit11to output information that is associated with the condition in advance. The scenario control unit123identifies an application that is to be executed, by performing condition determination, and causes the dialogue interface unit11to output information relating to execution of the identified application. For example, the scenario control unit123causes the dialogue interface unit11to display text information that indicates candidates for the application to be executed. Also, the scenario control unit123acquires text information input by the user from the dialogue interface unit11, for example. Also, the scenario control unit123acquires peripheral information from the peripheral information accumulation unit122. Also, the scenario control unit123acquires, from the operation assist definition information holding unit125, determination conditions that are conditions for determining the application to be executed. Also, the scenario control unit123can instruct the application control unit124to execute a second application14according to a condition determination result or input from the user to the dialogue interface unit11. That is, even when the user does not perform an operation, the scenario control unit123can automatically instruct execution of the second application according to the condition determination result. Also, the scenario control unit123can acquire an execution result of the application from the application control unit124. In response to the instruction from the scenario control unit123, the application control unit124executes the second application14using peripheral information accumulated in the peripheral information accumulation unit122. Furthermore, the application control unit acquires an execution result of the second application14and passes the execution result on to the scenario control unit123. Also, the application control unit124acquires, from the operation assist definition information holding unit125, application control information that is information for executing the second application14. EXAMPLE 1-1 Here, processing that is performed by the operation assist system will be described using a specific example. First, an application identification phase for identifying an application to be executed will be described usingFIG.2.FIG.2is a diagram showing the application identification phase. First, the scenario control unit123acquires determination conditions for operation situations from the operation assist definition information holding unit125, and instructs the peripheral information acquisition unit121to start to acquire peripheral information that needs to be acquired to determine the operation situations. 
In response to the instruction from the scenario control unit123, the peripheral information acquisition unit121acquires peripheral information from the first application13aand the like, and stores the acquired peripheral information in the peripheral information accumulation unit122. Here, assume that no first application13ais being executed. The peripheral information acquisition unit121can confirm whether or not a predetermined application is running, by referring to a task management function of the OS13bor the like, for example. In this case, the terminal10outputs the following message “peripheral information: currently running app (none) is acquired”. Note that the message is output by the terminal10as a result of the scenario control unit123controlling the dialogue interface unit11. Here, as shown inFIG.2, peripheral information accumulated in the peripheral information accumulation unit122includes items such as “date and time”, “peripheral information”, and “value”. In the example shown inFIG.2, it is indicated that “running app” of the peripheral information acquired at “2019/01/01 10:00” was “null”. This means that no first application13awas being executed at the time point at which the collected information was acquired. Here, the scenario control unit123acquires the peripheral information from the peripheral information accumulation unit122. Then, the scenario control unit123determines an operation situation of the terminal10based on the acquired information. Here, assume that determination conditions for an operation situation “work start” are the following: “determination condition 1: user input=null, determination condition 2: peripheral information, running app=null, control app: attendance management app, mail app, case management app”. The above conditions indicate that when there is no input from the user to the dialogue interface unit11and no first application13ais being executed, the attendance management app, the mail app, and the case management app are presented as candidates. The scenario control unit123determines that the operation situation is “work start” and causes the dialogue interface unit11to display the attendance management app, the mail app, and the case management app as candidates. At this time, the user selects an application that the user wants to be executed. Then, the scenario control unit123instructs the application control unit124to execute the application selected by the user. The user can select the application by inputting text, pressing a button, or inputting voice, for example. Note that pressing means an operation such as clicking performed with a mouse or tapping a touch panel. The scenario control unit123determines response information that is to be given as a response to information input to the dialogue interface unit11, based on both the input information and peripheral information acquired by the peripheral information acquisition unit121, and causes the dialogue interface unit11to output the response information. In Example 1-1, the scenario control unit123narrows down applications to be executed based on the peripheral information, and identifies an application to be executed based on input from the user. An application execution phase will be described usingFIG.3, assuming that the attendance management app is selected.FIG.3is a diagram showing the application execution phase. 
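Before walking throughFIG.3, the identification phase just illustrated can be condensed into a small rule-evaluation sketch. The dictionary encoding of the "work start" determination conditions and the determine_situation function are invented for illustration; the embodiment's operation assist definition information may be held in any format.

```python
from typing import Any, Dict, Optional, Tuple

# Invented encoding of the determination conditions quoted above for the
# "work start" operation situation.
OPERATION_SITUATIONS = {
    "work start": {
        "user_input": None,                        # condition 1: user input = null
        "peripheral": {"running app": None},       # condition 2: running app = null
        "candidate_apps": ["attendance management app", "mail app",
                           "case management app"],
    },
}

def determine_situation(user_input: Optional[str],
                        peripheral: Dict[str, Any]) -> Optional[Tuple[str, list]]:
    """Return (situation, candidate apps) for the first rule whose conditions hold."""
    for situation, rule in OPERATION_SITUATIONS.items():
        if user_input != rule["user_input"]:
            continue
        if all(peripheral.get(name) == value
               for name, value in rule["peripheral"].items()):
            return situation, rule["candidate_apps"]
    return None

# With no user input and no running first application, the "work start"
# situation matches and its three candidate apps would be presented.
print(determine_situation(None, {"running app": None}))
```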
First, as shown inFIG.3, the application control unit124refers to the operation assist definition information holding unit125and finds that information necessary to execute the attendance management app is “employee ID” and “password”. At this time, the “employee ID” and the “password” have not been input by the user, and therefore, the terminal10displays a message that requests input of the “employee ID” and the “password” to supplement the information. The application control unit124executes the attendance management app14ausing an employee ID “123456789” and a password “abcdefghij” that are input by the user. Note that the attendance management app14ais one of the second applications14. As described above, in a case where peripheral information accumulated in the peripheral information accumulation unit122lacks information that is determined in advance as information necessary to execute the second application14, the scenario control unit123causes the dialogue interface unit11to output information that prompts input of the lacking information. Also, the peripheral information acquisition unit121further acquires information that is input to the dialogue interface unit11, as peripheral information. The information acquired as the peripheral information is used when the second application14is executed. EXAMPLE 1-2 Another example will be described usingFIGS.4and5.FIG.4is a diagram showing the application identification phase. First, in response to an instruction from the scenario control unit123, the peripheral information acquisition unit121acquires peripheral information from the first application13aand the like, and stores the acquired peripheral information in the peripheral information accumulation unit122. Here, the peripheral information acquisition unit121acquires, as the peripheral information, the fact that a case management app131and an order management app132are executed, and contents of input to these apps. The peripheral information acquisition unit121acquires input contents of items that are specified in the application control information in the operation assist definition information holding unit125. For example, the peripheral information acquisition unit121acquires, from the case management app131, input contents of items such as “case name”, “case ID”, “organization in charge”, “responsible person”, and “phone No.”. Also, the peripheral information acquisition unit121acquires, from the order management app132, input contents of items such as “product name”, “quantity”, “delivery date”, “status”, “company name”, and “status (of supplier information)”. Then, the peripheral information acquisition unit121accumulates item names and input contents in the peripheral information accumulation unit122. In this case, as shown inFIG.5, upon detecting an order information register button of the order management app132being pressed, the terminal10outputs a message indicating that the operation situation is “quote request creation” and a message for confirming whether or not to execute a quote creation app. Here, determination conditions for the operation situation “quote request creation” are the following: “determination condition 1: peripheral information, running app=order management app, determination condition 2: peripheral information, status=not ordered, determination condition 3: order information register button being pressed, control app: quote request creation app”. 
The above conditions indicate that when the order information register button of the order management app132is pressed in a state where the status is “not ordered”, the quote request creation app is executed. When an operation for executing the quote request creation app is performed by the user, the application control unit124executes the quote request creation app14bbased on the peripheral information to create a file of a quote request. Processing executed by the quote request creation app14bmay be executed by RPA. EXAMPLE 1-3 An example of a case where the sensor13cis used will be described. The sensor13ccan sense information regarding the user operating the terminal10. For example, the sensor13cdetects the line of sight of the user. At this time, the peripheral information acquisition unit121estimates a position in the screen of the terminal10that is seen by the user from a detection result of the sensor13c,and acquires an estimation result as peripheral information. Here, assume that an icon that corresponds to a second application14is disposed in the screen of the terminal10. At this time, if the position seen by the user, which is indicated by the peripheral information, is included in an area in which the icon corresponding to the second application14is disposed, the scenario control unit123determines that a predetermined condition is satisfied. Alternatively, the sensor13cmay also acquire biological information of the user. For example, the sensor13cacquires the pulse and the body temperature of the user. When the pulse and the body temperature of the user exceed predetermined threshold values, the scenario control unit123determines that determination conditions are satisfied. Processing of First Embodiment A flow of processing that is performed by the operation assist system will be described usingFIGS.6and7. First, processing performed in the application identification phase will be described usingFIG.6.FIG.6is a flowchart showing a flow of the processing performed in the application identification phase. First, the terminal10acquires determination conditions for operation situations from the operation assist definition information holding unit125, and passes the determination conditions on to the peripheral information acquisition unit121(step S101). Next, the terminal10acquires peripheral information necessary for the determination, stores the peripheral information in the peripheral information accumulation unit122, and passes the peripheral information on to the scenario control unit123and the application control unit124(step S102). Then, the terminal10passes the peripheral information on to the dialogue interface unit11(step S103) . Also, the terminal10displays the acquired peripheral information (a configuration is also possible in which the peripheral information is not displayed) (step S104). Here, the terminal10determines operation situations based on the peripheral information (step S105). If operation situations have not been narrowed down based on the acquired peripheral information and information input to the dialogue interface unit11outside this flow (No in step S106), the terminal10returns to step S102. On the other hand, if operation situations have been narrowed down based on the acquired peripheral information and information input to the dialogue interface unit11outside this flow (Yes in step S106), the terminal10presents external applications (second applications14) that are associated with the narrowed operation situations (step S107). 
Here, the terminal10accepts input from the user as to which of the applications is to be executed (step S108). If an external application to be executed is not selected through input from the user (No in step S109), the terminal10returns to step S102. Also, if the user has performed another operation ignoring the presented applications, the terminal10determines “No” in step S109. On the other hand, if an external application that is to be executed is selected through input from the user (Yes in step S109), the terminal10issues an instruction to execute cooperation with the external application selected by the user, to the application control unit124(step S110). Then, the terminal10proceeds to the application execution phase. FIG.7is a flowchart showing a flow of processing performed in the application execution phase. As shown inFIG.7, the terminal10first acquires application control information from the operation assist definition information holding unit125(step S151). Here, if all information for executing the external application has not been acquired in information passed from the peripheral information acquisition unit121(No in step S152), the terminal10acquires, from the application control unit124, information that indicates which information is necessary to execute the external application (step S153). Then, the terminal10accepts input of information necessary to execute the external application from the user (step S154) until all information for executing the external application is input (No in step S155). When all information for executing the external application is input (Yes in step S155), the terminal10passes information that is input via the dialogue interface on to the application control unit124(step S156). When it is determined in step S152that all information for executing the external application has been acquired in information passed from the peripheral information acquisition unit121(Yes in step S152), or when step S156is executed, the terminal10executes the external application and acquires an execution result (success/failure) (step S157). Then, the terminal10displays the execution result (success/failure) of the external application (step S158). After ending the processing of the application execution phase, the terminal10can return to the processing of the application identification phase. Effects of First Embodiment As described above, the peripheral information acquisition unit121acquires information relating to a first application13arunning in the terminal, information relating to control of the terminal, which can be acquired from the OS and the like, information that can be acquired from the sensor included in the terminal, or the like, as peripheral information. The peripheral information acquired by the peripheral information acquisition unit121is accumulated in the peripheral information accumulation unit122. The dialogue interface unit11accepts input of information from the user and outputs information to the user. When the peripheral information accumulated in the peripheral information accumulation unit122and information input to the dialogue interface unit11satisfy a predetermined condition, the scenario control unit123causes the dialogue interface unit11to output information relating to execution of a second application14that is associated with the condition in advance. As described above, the operation assist system according to the present embodiment automatically identifies an application to be executed, based on the peripheral information. 
Therefore, the operation assist system reduces user operations. In response to an instruction from the scenario control unit123, the application control unit124executes the second application14using peripheral information accumulated in the peripheral information accumulation unit122. Also, the scenario control unit123instructs the application control unit124to execute the second application14according to input from the user to the dialogue interface unit11. As described above, the operation assist system can execute an application using the peripheral information. Therefore, the operation assist system reduces operations for inputting information necessary to execute the application. In a case where the peripheral information accumulated in the peripheral information accumulation unit122lacks information that is determined in advance as information necessary to execute the second application14, the scenario control unit123causes the dialogue interface unit11to output information that prompts input of the lacking information. Also, the peripheral information acquisition unit121further acquires information that is input to the dialogue interface unit11, as peripheral information. Thus, the operation assist system can prompt the user to input information when information necessary for an application is absent. The scenario control unit123determines response information that is to be given as a response to information input to the dialogue interface unit11, based on both the input information and peripheral information acquired by the peripheral information acquisition unit121, and causes the dialogue interface unit11to output the response information. Thus, the operation assist system executes an application based on both the peripheral information and input from the user. Therefore, the operation assist system can execute an appropriate application according to operation situations of the user and reduce operations performed by the user. Second Embodiment Configuration of Second Embodiment The operation assist system may be configured to present and correct acquired peripheral information. An operation assist system according to a second embodiment includes a function for displaying peripheral information and a function for correcting the peripheral information in addition to the functions described in the first embodiment. As shown inFIG.1, the scenario control unit123provides peripheral information to the dialogue interface unit11. Furthermore, the scenario control unit123acquires correction information indicated by the user from the dialogue interface unit11. Also, the peripheral information acquisition unit121acquires the correction information regarding peripheral information from the scenario control unit123. For example, a case is conceivable in which, when inputting information regarding orders, the user wants to input information regarding an order that differs from an order for which peripheral information has already been acquired, and wants to correct the peripheral information. Also, it is conceivable that in a state where a plurality of windows are open for respective orders, if an operation situation is determined based on peripheral information acquired from a window that differs from a window intended by the user, the user wants to correct the peripheral information. 
Furthermore, a case is conceivable in which peripheral information is corrected to information regarding a past order in order to execute the cooperation processing for a past order for which it has not yet been executed. EXAMPLE 2-1 Here, processing for correcting peripheral information will be described giving a specific example usingFIG.8. First, similarly to Example 1-2 described above, the terminal10determines that the operation situation is "quote request creation", and outputs a message for confirming whether or not to execute the quote request creation app. At this time, if the user performs an operation indicating that the user wants correction, the terminal10displays candidates for an item to be corrected. In the example shown inFIG.8, the terminal10starts processing for correcting peripheral information in response to a message "correct peripheral information" being input by the user. Here, the terminal10displays candidates for peripheral information that can be corrected, such as "employee ID", "password", and "case ID". Then, the user presses a position at which an item that the user wants to correct is displayed. Assume that the user specifies "case ID" in the example shown inFIG.8. At this time, the terminal10displays a message that prompts input of a corrected value for the "case ID" specified by the user. Then, when the corrected value is input by the user, the terminal10corrects the peripheral information, and displays a message indicating completion of the input correction and the corrected value. In the example shown inFIG.8, the user specifies the item to be corrected by pressing the position at which "case ID" is displayed. On the other hand, as shown inFIG.9, text may be input to specify the item to be corrected. In the example shown inFIG.9, the user specifies the item to be corrected by inputting the text "case ID". EXAMPLE 2-2 In a case where an application is executed in a plurality of windows, for example, peripheral information that was acquired and accumulated with respect to a previously opened window may be overwritten as a result of the peripheral information acquisition unit121acquiring peripheral information from a window that is opened later. In such a case, the terminal10can perform processing for restoring the peripheral information in response to a user operation. FIG.10is a diagram showing the processing for restoring peripheral information. Assume that, as shown inFIG.10, after inputting case information of a case that has a case ID of "0123456", the user opened another window and input case information of a case that has a case ID of "2345678". Also, assume that in a case where the same application is executed in a plurality of windows, the peripheral information acquisition unit121acquires, as peripheral information, input contents regarding a window for which input was performed last. Here, the terminal10first displays case information of the case having the case ID of "2345678", as currently acquired peripheral information. Upon detecting the displayed case ID being pressed by the user, the terminal10displays a message that prompts input of a corrected value for the "case ID". Then, when the corrected value is input by the user, the terminal10corrects the peripheral information, and displays a message indicating completion of the input correction and the corrected value.
In a case where key information such as the “case ID” is corrected, the terminal10may also correct case information associated with the key information at the same time. For example, in the example shown inFIG.10, the terminal10can correct “case name” associated with the “case ID” at the same time. In this case, the terminal10executes the quote request creation app14busing the corrected “case ID” and “case name”. Processing of Second Embodiment A flow of the processing for correcting peripheral information will be described usingFIG.11.FIG.11is a flowchart showing a flow of processing performed by the operation assist system according to the second embodiment. First, the terminal10acquires determination conditions for operation situations from the operation assist definition information holding unit125, and passes the determination conditions on to the peripheral information acquisition unit121(step S201). Next, the terminal10acquires peripheral information necessary for the determination, stores the peripheral information in the peripheral information accumulation unit122, and passes the peripheral information on to the scenario control unit123and the application control unit124(step S202). Then, the terminal10passes the peripheral information on to the dialogue interface unit11(step S203). Also, the terminal10displays the acquired peripheral information (a configuration is also possible in which the peripheral information is not displayed) (step S204). Here, the terminal10determines operation situations based on the peripheral information (step S205). If operation situations have not been narrowed down based on the acquired peripheral information and information input to the dialogue interface unit11outside this flow (No in step S206), the terminal10returns to step S202. On the other hand, if operation situations have been narrowed down based on the acquired peripheral information and information input to the dialogue interface unit11outside this flow (Yes in step S206), the terminal10presents external applications (second applications14) that are associated with the narrowed operation situations (step S207). Here, input indicating that the user wants to correct the acquired peripheral information is accepted from the user (step S208). As described above, the user may press the item to be corrected or input the item in text. Alternatively, the terminal10may accept input of a correction command or prepare a dedicated interface for correction such as a button. The terminal10acquires peripheral information from the scenario control unit123and displays the current peripheral information for the user (step S209). Then, the terminal10passes a corrected value that is input by the user based on the displayed peripheral information on to the scenario control unit123, as input information (step S210). Furthermore, the terminal10passes the corrected value on to the peripheral information acquisition unit121(step S211). Then, the terminal10updates peripheral information accumulated in the peripheral information accumulation unit122based on the corrected value (step S212). Effects of Second Embodiment As described above, the scenario control unit123causes the dialogue interface unit11to output peripheral information acquired by the peripheral information acquisition unit121. As a result, the user can check the peripheral information and determine whether or not the peripheral information needs to be corrected. 
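The correction flow just described (steps S208 to S212, including correcting items associated with key information such as the "case ID") might look like the following sketch. The ASSOCIATED_ITEMS table, the correct_peripheral_info function, and the lookup callback are invented placeholders; in particular, lookup stands in for whatever source (for example, the case management app) the terminal would consult to refresh associated values.

```python
from typing import Callable, Dict, Optional

# Invented example: items whose values should be refreshed together with a key item.
ASSOCIATED_ITEMS: Dict[str, list] = {
    "case ID": ["case name"],
}

def correct_peripheral_info(peripheral: Dict[str, str],
                            item: str,
                            corrected_value: str,
                            lookup: Optional[Callable[[str, str, str], Optional[str]]] = None
                            ) -> Dict[str, str]:
    """Apply a user-supplied correction and propagate it to associated items.

    `lookup(key_item, key_value, assoc_item)` is an assumption of this sketch:
    it resolves the value of an associated item from the corrected key value.
    """
    peripheral[item] = corrected_value            # steps S210-S212: update the accumulation
    for assoc in ASSOCIATED_ITEMS.get(item, []):
        if lookup is not None:
            value = lookup(item, corrected_value, assoc)
            if value is not None:
                peripheral[assoc] = value         # e.g. refresh "case name" from "case ID"
    return peripheral
```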
Also, when an operation for correcting the peripheral information output by the dialogue interface unit11is performed by the user, the scenario control unit123corrects the peripheral information based on the user operation. Thus, the user can correct the peripheral information as necessary. Other Embodiments Some functions of the operation assist system may be realized by a server or the like. For example, a configuration is also possible in which, in the operation assist system, the dialogue interface unit11, the first application13a,and the second application14are included in the terminal, and the chatbot12is included in a server. Also, the dialogue interface unit11may be a known external tool such as slack (registered trademark). In this case, the scenario control unit123cooperates with the external tool by passing information on to the external tool using an API (Application Programming Interface) or the like. Also, particularly in a case where the first application is a Web application, the peripheral information acquisition unit121may be replaced with a UI extension function described in Reference Document 1 (JP 2017-72872A (Japanese Patent No. 6514084)). That is, the terminal10displays an extension component so as to be overlaid on a predetermined input item, and acquires a value that is input to the extension component as peripheral information. In this case, a portion of the processing performed in step S102shown inFIG.6is replaced with an information acquisition unit and an input detection unit described in Reference Document 1. Also, at this time, the acquired peripheral information may be stored in a web storage of a web browser or a local file so that the web storage or the local file functions as the peripheral information accumulation unit122. In this case, the operation assist system includes, for example: an information acquisition unit that acquires GUI component specifying information that indicates a displayed position and a displayed size of a GUI component that constitutes a first application running in a terminal; an extension component output generation unit that generates an extension component of the GUI component based on the acquired GUI component specifying information regarding the GUI component; an extension component display unit that displays the generated extension component so as to be overlaid on the GUI component in a screen; an input detection unit that detects input to the extension component; a peripheral information acquisition unit that acquires content of the input detected by the input detection unit as peripheral information; a peripheral information accumulation unit in which the peripheral information acquired by the peripheral information acquisition unit is accumulated; a dialogue interface unit that accepts input of information from a user and outputs information to the user; and a scenario control unit that causes the dialogue interface unit to output information relating to execution of a second application that is associated with a predetermined condition in advance, when the peripheral information accumulated in the peripheral information accumulation unit and information input to the dialogue interface unit satisfy the predetermined condition. 
The scenario control unit123may perform determination based on peripheral information that is a combination of two or more types of information out of information relating to the first application13a,information relating to control of the terminal, and information that can be acquired from a sensor included in the terminal. For example, the peripheral information acquisition unit121acquires, from the OS13b,information indicating whether or not the second application14is running, and acquires, from the sensor13c,a position in the screen of the terminal10that is seen by the user. If the second application14is not running and the position seen by the user is included in an area in which an icon of the second application14is disposed, the scenario control unit123determines that the predetermined condition is satisfied. System Configuration The constitutional elements of the illustrated device represent functional concepts, and the device does not necessarily have to be physically configured as illustrated. That is, specific manners of distribution and integration of the portions of the device are not limited to those illustrated, and all or some portions of the device may be functionally or physically distributed or integrated in suitable units according to various types of loads or conditions in which the device is used. Also, all or some portions of each processing function executed in the device may be realized using a CPU and a program that is analyzed and executed by the CPU, or realized as hardware using a wired logic. Also, out of the pieces of processing described in the present embodiment, all or some steps of a piece of processing that is described as being automatically executed may also be manually executed. Alternatively, all or some steps of a piece of processing that is described as being manually executed may also be automatically executed using a known method. The processing procedures, control procedures, specific names, and information including various types of data and parameters that are described above and shown in the drawings may be changed as appropriate unless otherwise stated. Program In one embodiment, the terminal10can be implemented by installing an operation assist program for executing the above-described operation assist processing as packaged software or online software on a desired computer. For example, it is possible to cause an information processing device to function as the terminal10by causing the information processing device to execute the operation assist program. The information processing device referred to here encompasses a desktop or notebook personal computer. The information processing device also encompasses mobile communication terminals such as a smartphone, a mobile phone, and a PHS (Personal Handyphone System), and slate terminals such as a PDA (Personal Digital Assistant). FIG.12is a diagram showing an example of a computer that executes the operation assist program. A computer1000includes a memory1010and a CPU1020, for example. The computer1000also includes a hard disk drive interface1030, a disk drive interface1040, a serial port interface1050, a video adapter1060, and a network interface1070. These units are connected via a bus1080. The memory1010includes a ROM (Read Only Memory)1011and a RAM1012. A boot program such as BIOS (BASIC Input Output System) is stored in the ROM1011, for example. The hard disk drive interface1030is connected to a hard disk drive1090. The disk drive interface1040is connected to a disk drive1100. 
An attachable and detachable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive1100, for example. The serial port interface1050is connected to a mouse1110and a keyboard1120, for example. The video adapter1060is connected to a display1130, for example. An OS1091, an application program1092, a program module1093, and program data1094are stored in the hard disk drive1090, for example. That is, a program that defines processing performed by the terminal10is implemented as the program module1093in which codes that can be executed by the computer are written. The program module1093is stored in the hard disk drive1090, for example. For example, the program module1093for executing processing similar to the functional configuration of the terminal10is stored in the hard disk drive1090. Note that the hard disk drive1090may be replaced with an SSD. Setting data that is used in the processing performed in the above-described embodiments is stored as the program data1094in the memory1010or the hard disk drive1090, for example. The CPU1020reads out the program module1093and the program data1094stored in the memory1010or the hard disk drive1090into the RAM1012as necessary and executes the processing in the above-described embodiments. Note that the program module1093and the program data1094do not necessarily have to be stored in the hard disk drive1090, and may also be stored in an attachable and detachable storage medium and read out by the CPU1020via the disk drive1100or the like, for example. Alternatively, the program module1093and the program data1094may also be stored in another computer that is connected via a network (LAN (Local Area Network), WAN (Wide Area Network), etc.). The program module1093and the program data1094may also be read out from the other computer by the CPU1020via the network interface1070.
REFERENCE SIGNS LIST
10 Terminal
11 Dialogue interface unit
12 Chatbot
13a First application
13b OS
13c Sensor
14 Second application
121 Peripheral information acquisition unit
122 Peripheral information accumulation unit
123 Scenario control unit
124 Application control unit
125 Operation assist definition information holding unit
40,171
11861418
DESCRIPTION OF THE EMBODIMENTS Consistent with disclosed embodiments, systems and methods to cluster data are disclosed. Embodiments consistent with the present disclosure may include using a plurality of embedding network layers to cluster data and using meta-clustering model to optimize clustering based on embedding network layer output. As explained above, disclosed systems and methods provide accuracy, efficiency, and cost advantages over conventional approaches to clustering data. Embodiments consistent with the present disclosure may include data (i.e., datasets). Datasets may comprise actual data reflecting real-world conditions, events, and/or measurements. In some embodiments, disclosed systems and methods may fully or partially involve synthetic data (e.g., anonymized actual data or fake data). Datasets may involve time series data, numeric data, text data, and/or image data. For example, datasets may include transaction data, financial data, demographic data, public data, government data, environmental data, traffic data, network data, transcripts of video data, genomic data, proteomic data, and/or other data. Datasets may have a plurality of dimensions, the dimensions corresponding to variables. For example, a dataset may include a time series of 3-dimensional spatial data. Datasets of the embodiments may have any number of dimensions. As an illustrative example, datasets of the embodiments may include time series data with dimensions corresponding to longitude, latitude, cancer incidence, population density, air quality, and water quality. Datasets of the embodiments may be in a variety of data formats including, but not limited to, PARQUET, AVRO, SQLITE, POSTGRESQL, MYSQL, ORACLE, HADOOP, CSV, JSON, PDF, JPG, BMP, and/or other data formats. Datasets of disclosed embodiments may have a respective data schema (i.e., structure), including a data type, key-value pair, label, metadata, field, relationship, view, index, package, procedure, function, trigger, sequence, synonym, link, directory, queue, or the like. Datasets of the embodiments may contain foreign keys, i.e., data elements that appear in multiple datasets and may be used to cross-reference data and determine relationships between datasets. Foreign keys may be unique (e.g., a personal identifier) or shared (e.g., a postal code). Datasets of the embodiments may be “clustered,” i.e., a group of datasets may share common features, such as overlapping data, shared statistical properties, etc. Clustered datasets may share hierarchical relationships (i.e., data lineage). Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings and disclosed herein. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. The disclosed embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosed embodiments. It is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the disclosed embodiments. Thus, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting. FIG.1depicts exemplary system100for clustering data, consistent with disclosed embodiments. As shown, system100may include a data-clustering system102, a model storage104, a dataset database106, a remote database108, and a client device110. 
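Before continuing with the components ofFIG.1, the idea summarized above of clustering data with a plurality of embedding network layers whose outputs feed a meta-clustering model can be illustrated with a toy sketch. Off-the-shelf scikit-learn transforms stand in for the embedding network layers, and a plain KMeans over their concatenated outputs stands in for the meta-clustering step; this is only an illustration of the general idea, not the embodiment's method300.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

def toy_meta_clustering(data: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster rows of `data` by combining several low-dimensional embeddings.

    PCA and a random projection stand in for the embedding network layers, and
    the final KMeans over the concatenated embeddings stands in for the
    meta-clustering step; none of this is the patent's method 300.
    """
    embedders = [
        PCA(n_components=2),
        GaussianRandomProjection(n_components=2, random_state=0),
    ]
    embeddings = [e.fit_transform(data) for e in embedders]  # one embedding per "layer"
    combined = np.hstack(embeddings)                          # input to the meta step
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(combined)

# Example with synthetic numeric data; the datasets described above could be
# reduced to such feature matrices after preprocessing.
labels = toy_meta_clustering(np.random.default_rng(0).normal(size=(100, 8)))
```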
Components of system100may be connected to each other via a network112. In some embodiments, aspects of system100may be implemented on one or more cloud services designed to generate (“spin-up”) one or more ephemeral container instances (e.g., AMAZON LAMBDA instances) in response to event triggers, assign one or more tasks to a container instance, and terminate (“spin-down”) a container instance upon completion of a task. By implementing methods using cloud services, disclosed systems may efficiently provision resources based on demand and provide security advantages because the ephemeral container instances may be closed and destroyed upon completion of a task. That is, the container instances do not permit access from outside using terminals or remote shell tools like SSH, RTP, FTP, or CURL, for example. Further, terminating container instances may include destroying data, thereby protecting sensitive data. Destroying data can provide security advantages because it may involve permanently deleting data (e.g., overwriting data) and associated file pointers. As will be appreciated by one skilled in the art, the components of system100can be arranged in various ways and implemented with any suitable combination of hardware, firmware, and/or software, as applicable. For example, as compared to the depiction inFIG.1, system100may include a larger or smaller number of data-clustering systems, model storages, dataset databases, remote databases, client devices and/or networks. In addition, system100may further include other components or devices not depicted that perform or assist in the performance of one or more processes, consistent with the disclosed embodiments. The exemplary components and arrangements shown inFIG.1are not intended to limit the disclosed embodiments. Data-clustering system102may include a computing device, a computer, a server, a server cluster, a plurality of server clusters, and/or a cloud service, consistent with disclosed embodiments. Data-clustering system102may include one or more memory units and one or more processors configured to perform operations consistent with disclosed embodiments. Data-clustering system102may include computing systems configured to generate, receive, retrieve, store, and/or provide data models and/or datasets, consistent with disclosed embodiments. Data-clustering system102may include computing systems configured to generate and train models, consistent with disclosed embodiments. Data-clustering system102may be configured to receive data from, retrieve data from, and/or transmit data to other components of system100and/or computing components outside system100(e.g., via network112). Data-clustering system102is disclosed in greater detail below (in reference toFIG.5). Model storage104may be hosted on one or more servers, one or more clusters of servers, or one or more cloud services. Model storage104may be connected to network112(connection not shown). In some embodiments, model storage104may be a component of data-clustering system102(not shown). Model storage104may include one or more databases configured to store data models (e.g., machine-learning models or statistical models) and descriptive information of data models. Model storage104may be configured to provide information regarding available data models to a user or another system. Databases may include cloud-based databases, cloud-based buckets, or on-premises databases. 
The information may include model information, such as the type and/or purpose of a model and any measures of classification error. Model storage104may include one or more databases configured to store indexed and clustered models for use by data-clustering system102. For example, model storage104may store models associated with generalized representations of those models (e.g., neural network architectures stored in TENSORFLOW or other standardized formats). Databases may include cloud-based databases (e.g., AMAZON WEB SERVICES RELATIONAL DATABASE SERVICE) or on-premises databases. Dataset database106may include one or more databases configured to store data for use by system100, consistent with disclosed embodiments. In some embodiments, dataset database may be configured to store datasets and/or one or more dataset indexes, consistent with disclosed embodiments. Dataset database106may include a cloud-based database (e.g., AMAZON WEB SERVICES RELATIONAL DATABASE SERVICE) or an on-premises database. Dataset database106may include datasets, model data (e.g., model parameters, training criteria, performance metrics, etc.), and/or other data, consistent with disclosed embodiments. Dataset database106may include data received from one or more components of system100and/or computing components outside system100(e.g., via network112). In some embodiments, dataset database106may be a component of data-clustering system102(not shown). Remote database108may include one or more databases configured to store data for use by system100, consistent with disclosed embodiments. Remote database108may be configured to store datasets and/or one or more dataset indexes, consistent with disclosed embodiments. Remote database108may include a cloud-based database (e.g., AMAZON WEB SERVICES RELATIONAL DATABASE SERVICE) or an on-premises database. Client device110may include one or more memory units and one or more processors configured to perform operations consistent with disclosed embodiments. In some embodiments, client device110may include hardware, software, and/or firmware modules. Client device110may be a user device. Client device110may include a mobile device, a tablet, a personal computer, a terminal, a kiosk, a server, a server cluster, a cloud service, a storage device, a specialized device configured to perform methods according to disclosed embodiments, or the like. At least one of data-clustering system102, model storage104, dataset database106, remote database108, or client device110may be connected to network112. Network112may be a public network or private network and may include, for example, a wired or wireless network, including, without limitation, a Local Area Network, a Wide Area Network, a Metropolitan Area Network, an IEEE 802.11 wireless network (e.g., “Wi-Fi”), a network of networks (e.g., the Internet), a land-line telephone network, or the like. Network112may be connected to other networks (not depicted inFIG.1) to connect the various system components to each other and/or to external systems or devices. In some embodiments, network112may be a secure network and require a password to access the network. FIG.3illustrates method300for clustering data using a meta-clustering model, consistent with disclosed embodiments. As compared to conventional approaches, method300may produce more accurate results with greater efficiency. 
As shown, method300may include using a meta-clustering model308to generate a final data cluster based on preliminary data clusters (i.e., preliminary clustered data) which were generated by a plurality of embedding network layers that implement a plurality of clustering methods. By learning from a plurality of individually-trained models and/or embedding network layers, meta-clustering model308may advantageously identify more accurate classifications and/or clusters where traditional metrics, such as confidence levels, may provide incomplete information. Clusters may include information relating to nodes of an embedding network layer. For example, a cluster may include a vector of weights associated with nodes of a layer. A cluster may be grouped by an aspect of a latent space generated by an embedding network layer based on a data sample. In some embodiments, meta-clustering model308may reduce the dimensionality of clustered data produced by embedding layers, leading to improved accuracy and efficiency. Meta-clustering model308may quickly identify an optimal number of data clusters. Accordingly, method300provides advantages by increasing accuracy, lowering costs, and reducing resource use when clustering data.FIG.3is provided for purposes of illustration only is not limiting on the embodiments. Referring toFIG.3in greater detail, method300may include using a plurality of embedding network layers304a,304b,304c,304d,and304nto classify and cluster data302. Method300may include generating a plurality of preliminary data clusters306a,306b,306c,306d,and306ncorresponding to embedding network layers304a,304b,304c,304d,and304n.Method300may include using meta-clustering model308to generate final data clusters310(i.e., final clustered data) based on preliminary data clusters. As one of skill in the art will appreciate, method300may include any number of embedding network layers, data, preliminary data clusters, meta-clustering models, and/or final data clusters, including more or fewer than those depicted inFIG.3. Data302may include any kind of data (e.g., text data, image data, numeric data, time series data, etc.) Data302may include multi-dimensional data. Data302may be organized according to any data schema. Data302may include a plurality of data samples (e.g., a plurality of image files, a plurality of video files, a plurality text files, a plurality of data columns, etc.). Data302may include a number of dimensions (e.g., two-dimensional data, three-dimensional data, four-dimensional data, etc.). Embedding network layers304a,304b,304c,304dand304nmay be configured to accept data as input and return a data classification and/or data clusters as output. As shown, embedding network layers304a,304b,304c,304dand304nmay be configured to generate a plurality of corresponding preliminary data clusters306a,306b,306c,306d,and306nbased on data302. Generating preliminary data clusters may include sampling data302. Generating preliminary clusters may include generating clusters based on node output of a layer (e.g., a vector of weights, activation function values, etc.). Embedding network layers304a,304b,304c,304dmay include any type of embedding network as described herein and/or any other machine learning model. Preliminary data clusters306a,306b,306c,306d,and306nmay include data clusters represented as a node-edge diagrams inFIG.3. As shown,FIG.3represents nodes as discs. 
A node may include data samples that share a classification (e.g., a tag) and the size of the disc may indicate a relative size the node (i.e., the relative amount of data that belongs to the node).FIG.3represents edges as lines between nodes. An edge may be based on a relationship between nodes. For example, edge data may be based on a similarity metric between data samples, on a hierarchical relationship (e.g., a data lineage, a parent-child relationship), and/or on any other relationship. The distance between nodes represent aspects of data relationships between the nodes (e.g., the strength of a relationship, the similarity of data, etc.). AlthoughFIG.3depicts node-edge diagrams, embodiments may include data clusters organized and/or represented according to any known classification method (e.g., a data table, a relational database, a tree diagram, or a vector diagram). In some embodiments, a node of a cluster may include a data sample grouped by an aspect of a latent space of a layer. In some embodiments, preliminary data clusters may have a number of dimensions (e.g., two-dimensions, three-dimensions, four-dimensions, etc.). A number of dimensions of preliminary data clusters may be the same as a number of dimensions of data302. In some embodiments, one or more layers of an embedding network may generate preliminary data clusters having the same number of dimensions. As shown by way of example, individual ones of preliminary data clusters306a,306b,306c,306d,and306nhave a corresponding number of clusters, k, whose value is 4, 3, 4, 5, and 3, respectively. Hence, the number of clusters generated by one embedding network layer may be the same as or different from another embedding network layer. In addition, individual ones of preliminary data clusters306a,306b,306c,306d,and306nmay generate node-edge relationships which may differ from or which may be the same as one another. For example, embedding network layers304band304nclassify data samples of data302in the same way as each other and generate the same edge relationships between data samples to generate identical preliminary data clusters306band306n.As another example, preliminary data clusters306a,306b,306c,and306dmay differ from each other because their respective embedding networks generate different classifications (nodes) and different edge relationships from each other. As compared to the illustration ofFIG.3, method300may include other preliminary data clusters which may be the same or different from each other. Meta-clustering model308may include a machine learning model. For example, meta-clustering model308may include a deep learning model, a neural network model, an RNN, a CNN, a random forest model, a Support Vector Machine (SVM) model, a Density-based spatial clustering of applications with noise (DBSCAN) model, a k-means clustering model, a distribution-based clustering model, a k-medoids model, and/or any other type of machine learning model. Meta-clustering model308may be trained to generate data clusters based on data clusters produced by embedding network layers. 
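By way of a non-limiting illustration of the preliminary clustering stage described above, the following Python sketch (assuming scikit-learn and NumPy, with randomly generated placeholder data standing in for data 302) uses several independently configured clustering models as analogs of embedding network layers 304a-304n, each producing its own preliminary cluster assignment:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))      # placeholder stand-in for data 302

# Each "layer" uses a different cluster count, mirroring preliminary data
# clusters 306a-306n, which may disagree on the number of clusters.
layer_cluster_counts = [4, 3, 4, 5, 3]
preliminary_clusters = []
for k in layer_cluster_counts:
    layer = KMeans(n_clusters=k, n_init=10, random_state=0)
    preliminary_clusters.append(layer.fit_predict(data))
# preliminary_clusters now holds one label vector per layer; these label
# vectors are the inputs consumed by the meta-clustering model.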
In some embodiments, meta-clustering model308may be configured to encode preliminary data clusters306a,306b,306c,306d,and306n.For example, meta-clustering model308may perform a principal component analysis (PCA), an independent component analysis (ICA), a non-negative matrix factorization method (NMF), a Factor Analysis (FA), and/or any other algorithm to reduce dimensionality of a latent variable generated by a model based on data samples of preliminary data clusters306a,306b,306c,306d,and306n.Encoding may include implementing an autoencoder (e.g., a variational autoencoder) model. By encoding preliminary data clusters, meta-clustering model308may reduce the complexity of the preliminary data clusters and more efficiently produce final data clusters310. In some embodiments, meta-clustering model308may be configured to generate a data map of data302based on preliminary data clusters306a,306b,306c,306d,and306n.In some embodiments, generating a data map may be unsupervised. In some embodiments, generating a data map may include tracking data samples in a plurality of preliminary data clusters and determining relationships between the data samples. For example, meta-clustering model308may learn to predict the frequency with which two or more data samples appear in a same preliminary data cluster and generate a data map based on the predictions. In some embodiments, meta-clustering model308may generate a data map based on encoded preliminary-data-clusters (e.g., based on principal components of the preliminary data clusters). A data map may include a plurality of data points in a latent space representing transitions of a data sample between the embeddings. An embedding layer may convert a data sample into a latent space, and a data map may include a visual representation of a data conversion into a latent space. A data map may be based on weights of an embedding layer. In some embodiments, generating a data map may be supervised. For example, generating a data map may include providing data samples to a user and receiving user feedback. Meta-clustering model308may identify a conflict between preliminary data clusters (e.g., embedding network layer304amay classify the same data sample differently from embedding network layer304b), and meta-clustering model308may request user feedback based on the conflict. In some embodiments, meta-clustering model308may determine a performance metric of one or more embedding network layers. For example, meta-clustering model may determine a performance metric of an embedding network layer based on an intra-cluster variance of preliminary data clusters generated by the embedding network layer. In some embodiments, generating a data map may be based on a performance metric. In some embodiments, meta-clustering model308may determine a number of clusters based on a data map and/or a performance metric. Determining a number of clusters may be based on relationships (e.g., edge relationships) between data clusters. In some embodiments, meta-clustering model308is trained to determine a number of clusters that optimizes a property of clustered data (e.g., trained to optimize a measure of variance of a cluster, a ratio of intra-cluster variance to inter-cluster variance, etc.). 
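One way to picture these meta-clustering steps is the following sketch, which builds a co-occurrence-based data map from the preliminary label vectors of the previous sketch, encodes it with PCA, and selects a cluster count with the silhouette method. This is an illustrative sketch using scikit-learn, not the claimed implementation; the candidate range of cluster counts and the number of principal components are arbitrary assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

labels = np.stack(preliminary_clusters)        # shape: (n_layers, n_samples)
n_layers, n_samples = labels.shape

# Data map: for each pair of samples, the fraction of layers in which the
# two samples were assigned to the same preliminary cluster.
data_map = np.zeros((n_samples, n_samples))
for layer_labels in labels:
    data_map += (layer_labels[:, None] == layer_labels[None, :]).astype(float)
data_map /= n_layers

# Encode the data map into a lower-dimensional latent representation (PCA
# stands in here for any of the encoding methods mentioned above).
encoded = PCA(n_components=10, random_state=0).fit_transform(data_map)

# Determine a number of clusters by maximizing the silhouette score.
scores = {}
for k in range(2, 10):
    trial = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(encoded)
    scores[k] = silhouette_score(encoded, trial)
best_k = max(scores, key=scores.get)

final_labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(encoded)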
Determining a number of data clusters may include implementing methods such as a k-means algorithm, a k-medoids algorithm, an elbow method, an X-means clustering method, an information criterion approach, a silhouette method, a cross-validation method, a method based on a kernel matrix, and/or any other methods of determining a number of clusters in data. In some embodiments, meta-clustering model308limits and/or reduces a number of layers of an embedding network to lead to greater processing efficiencies. In some embodiments, meta-clustering model308may generate final data clusters310. In some embodiments, meta-clustering model308may generate final data clusters based on a data map (e.g., the final data clusters310may be the same as the data map). In some embodiments, generating final data clusters310may include updating one or more embedding network layers by training the embedding network layers based on a number of clusters (e.g., a number of clusters determined based on a data map). In the example ofFIG.3, a final data cluster has a number of final data clusters, k, whose value is 7. In some embodiments, the number of final data clusters may be based on a relationship between clusters of preliminary data clusters. In some embodiments, a number of final data clusters, k, may be fixed at one greater than the maximum number of clusters in a plurality of preliminary data clusters. In some embodiments, generating final data clusters310may include generating updated data clusters using one or more updated embedding network layers. In some embodiments, final data clusters310may include an updated data cluster generated by an updated embedding network layer. In some embodiments, final data clusters310may include a number of dimensions that is greater than the number of dimensions of one or more of preliminary data clusters306a,306b,306c,and/or306d.In some embodiments, final data clusters310may include a number of dimensions equal to n times a number of dimensions of one or more preliminary data clusters, where n may be a number of embedding network layers. As an example, an embedding network may have 5 layers (n=5), data302and preliminary data clusters306a,306b,306c,and/or306dmay have three dimensions, and final data clusters may have 15 dimensions (i.e., five layers times three dimensions). In some embodiments, generating final data clusters310may include repeatedly updating one or more embedding network layers until a performance metric of the one or more embedding network layers is satisfied. During individual rounds of training of an embedding network layer, meta-clustering model308may determine a number of clusters and train the embedding network layer based on the determined number of clusters (e.g., by specifying the number of clusters as a model parameter of the embedding network layer). In this way, meta-clustering model308may be trained to accept one or more preliminary clusters, generate a data map, and quickly converge on an optimal solution by determining an optimal number of clusters. Accordingly, in subsequent implementations, a trained meta-clustering model308may quickly and efficiently generate accurate final data clusters310. FIG.4illustrates method400for clustering data using a meta-clustering model, consistent with disclosed embodiments. As described above in reference toFIG.3, method400may include using a plurality of embedding network layers304a,304b,304c,304d,and304nto classify and cluster data302. 
Method400may include encoding data302prior to classification, consistent with disclosed embodiments. Method400may include generating a plurality of preliminary data clusters306a,306b,306c,306d,and306ncorresponding to embedding network layers304a,304b,304c,304d,and304n.Method400may include using meta-clustering model308to generate final data clusters310(i.e., final clustered data) based on preliminary data clusters. As one of skill in the art will appreciate, method400may include any number of embedding network layers, data, preliminary data clusters, meta-clustering models, and/or final data clusters, including more or fewer than those depicted inFIG.4. Embedding network layers, data, preliminary data clusters, meta-clustering models, and/or final data clusters ofFIG.4may be configured to perform methods as described above in reference toFIG.3. In an embodiment of method400, embedding network layer outputs comprising clustered data may be passed as inputs to subsequent embedding network layers. For example, embedding network layer304amay generate preliminary data clusters306abased on data302. As shown, an embedding network layer304bmay generate preliminary data clusters306bbased on preliminary data clusters306a.Further, an embedding network layer304cmay generate preliminary data clusters306cbased on preliminary data clusters306b.An embedding network layer304dmay generate preliminary data clusters306dbased on preliminary data clusters306c.In turn, an embedding network layer304nmay generate preliminary data clusters306nbased on preliminary data clusters306d.Accordingly, in the method ofFIG.4, generating preliminary clustered data based on the received data may include passing an embedding network layer output comprising clustered data to subsequent embedding network layers. Method400may include using meta-model308to generate final data clusters310, updating one or more embedding network layers, and/or generating updated data clusters in a substantially similar manner as described in reference to method300(FIG.3) but with outputs of embedding network layers being passed as inputs to subsequent embedding network layers. For example, meta-model308may generate final data clusters310based on a data map and/or one or more preliminary data clusters306a,306b,306c,306dand306nin substantially the same manner in method400as in method300. FIG.5depicts exemplary data-clustering system102, consistent with disclosed embodiments. Data-clustering system102may include a computing device, a computer, a server, a server cluster, a plurality of clusters, and/or a cloud service, consistent with disclosed embodiments. As shown, data-clustering system102may include one or more processors510, one or more I/O devices520, and one or more memory units530. In some embodiments, some or all components of data-clustering system102may be hosted on a device, a computer, a server, a cluster of servers, or a cloud service. In some embodiments, data-clustering system102may be a scalable system configured to efficiently manage resources and enhance security by provisioning computing resources in response to triggering events and terminating resources after completing a task (e.g., a scalable cloud service that spins up and terminates container instances). FIG.5depicts an exemplary configuration of data-clustering system102. As will be appreciated by one skilled in the art, the components and arrangement of components included in data-clustering system102may vary. 
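As a concrete, non-limiting illustration of the chained arrangement of method 400 described above, the sketch below (scikit-learn assumed, with placeholder data) passes each layer's output forward: here the per-sample distances to a layer's cluster centers serve as the representation clustered by the next layer.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 3))               # placeholder stand-in for data 302

layer_cluster_counts = [4, 3, 4, 5, 3]
representation = data
chained_preliminary_clusters = []
for k in layer_cluster_counts:
    layer = KMeans(n_clusters=k, n_init=10, random_state=0).fit(representation)
    chained_preliminary_clusters.append(layer.labels_)
    # The next layer receives this layer's output: each sample is re-expressed
    # as its distances to the current layer's cluster centers.
    representation = layer.transform(representation)
# chained_preliminary_clusters holds one preliminary clustering per layer, with
# each layer operating on the previous layer's output rather than the raw data.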
For example, as compared to the depiction inFIG.5, data-clustering system102may include a larger or smaller number of processors, I/O devices, or memory units. In addition, data-clustering system102may further include other components or devices not depicted that perform or assist in the performance of one or more processes consistent with the disclosed embodiments. The components and arrangements shown inFIG.5are not intended to limit the disclosed embodiments, as the components used to implement the disclosed processes and features may vary. Processor510may comprise known computing processors, including a microprocessor. Processor510may constitute a single-core or multiple-core processor that executes parallel processes simultaneously. For example, processor510may be a single-core processor configured with virtual processing technologies. In some embodiments, processor510may use logical processors to simultaneously execute and control multiple processes. Processor510may implement virtual machine technologies, or other known technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. In another embodiment, processor510may include a multiple-core processor arrangement (e.g., dual core, quad core, etc.) configured to provide parallel processing functionalities to allow execution of multiple processes simultaneously. One of ordinary skill in the art would understand that other types of processor arrangements could be implemented that provide for the capabilities disclosed herein. The disclosed embodiments are not limited to any type of processor. Processor510may execute various instructions stored in memory530to perform various functions of the disclosed embodiments described in greater detail below. Processor510may be configured to execute functions written in one or more known programming languages. I/O devices520may include at least one of a display, an LED, a router, a touchscreen, a keyboard, a microphone, a speaker, a haptic device, a camera, a button, a dial, a switch, a knob, a transceiver, an input device, an output device, or another I/O device to perform methods of the disclosed embodiments. I/O devices520may be components of an interface522(e.g., a user interface). Interface522may be configured to manage interactions between system100and other systems using network112. In some aspects, interface522may be configured to publish data received from other components of system100. This data may be published in a publication and subscription framework (e.g., using APACHE KAFKA), through a network socket, in response to queries from other systems, or using other known methods. Data may be synthetic data, as described herein. As an additional example, interface522may be configured to provide information received from other components of system100regarding datasets. In various aspects, interface522may be configured to provide data or instructions received from other systems to components of system100. For example, interface522may be configured to receive instructions for generating data models (e.g., type of data model, data model parameters, training data indicators, training parameters, or the like) from another system and provide this information to programs535. 
As an additional example, interface522may be configured to receive data including sensitive data from another system (e.g., in a file, a message in a publication and subscription framework, a network socket, or the like) and provide that data to programs535or store that data in, for example, data531, model storage104, dataset database106, and/or remote database108. In some embodiments, interface522may include a user interface configured to receive user inputs and provide data to a user (e.g., a data manager). For example, interface522may include a display, a microphone, a speaker, a keyboard, a mouse, a track pad, a button, a dial, a knob, a printer, a light, an LED, a haptic feedback device, a touchscreen and/or other input or output devices. Memory530may be a volatile or non-volatile, magnetic, semiconductor, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium, consistent with disclosed embodiments. As shown, memory530may include data531, including one of at least one of encrypted data or unencrypted data. Consistent with disclosed embodiments, data531may include datasets, model data (e.g., model parameters, training criteria, performance metrics, etc.), and/or other data. Programs535may include one or more programs (e.g., modules, code, scripts, or functions) used to perform methods consistent with disclosed embodiments. Programs may include operating systems (not shown) that perform known operating system functions when executed by one or more processors. Disclosed embodiments may operate and function with computer systems running any type of operating system. Programs535may be written in one or more programming or scripting languages. One or more of such software sections or modules of memory530may be integrated into a computer system, non-transitory computer-readable media, or existing communications software. Programs535may also be implemented or replicated as firmware or circuit logic. Programs535may include a model optimizer536, an embedder537, a clusterer538, and/or other components (e.g., modules) not depicted to perform methods of the disclosed embodiments. In some embodiments, modules of programs535may be configured to generate (“spin up”) one or more ephemeral container instances (e.g., an AMAZON LAMBDA instance) to perform a task and/or to assign a task to a running (warm) container instance, consistent with disclosed embodiments. Modules of programs535may be configured to receive, retrieve, and/or generate models, consistent with disclosed embodiments. Modules of programs535may be configured to perform operations in coordination with one another. In some embodiments, programs535may be configured to conduct an authentication process, consistent with disclosed embodiments. Model optimizer536may include programs (e.g., scripts, functions, algorithms) to train, implement, store, receive, retrieve, and/or transmit one or more machine-learning models. 
Machine-learning models may include a neural network model, an attention network model, a generative adversarial model (GAN), a recurrent neural network (RNN) model, a deep learning model (e.g-., a long short-term memory (LSTM) model), a random forest model, a convolutional neural network (CNN) model, an RNN-CNN model, an LSTM-CNN model, a temporal-CNN model, a support vector machine (SVM) model, a Density-based spatial clustering of applications with noise (DBSCAN) model, a k-means clustering model, a distribution-based clustering model, a k-medoids model, a natural-language model, and/or another machine-learning model. Models may include an ensemble model (i.e., a model comprised of a plurality of models). In some embodiments, training of a model may terminate when a training criterion is satisfied. Training criterion may include a number of epochs, a training time, a performance metric (e.g., an estimate of accuracy in reproducing test data), or the like. Model optimizer536may be configured to adjust model parameters during training. Model parameters may include weights, coefficients, offsets, or the like. Training may be supervised or unsupervised. Model optimizer536may be configured to train machine learning models by optimizing model parameters and/or hyperparameters (i.e., hyperparameter tuning) using an optimization technique, consistent with disclosed embodiments. Hyperparameters may include training hyperparameters, which may affect how training of a model occurs, or architectural hyperparameters, which may affect the structure of a model. An optimization technique may include a grid search, a random search, a gaussian process, a Bayesian process, a Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a derivative-based search, a stochastic hill-climb, a neighborhood search, an adaptive random search, or the like. Model optimizer536may be configured to optimize statistical models using known optimization techniques. In some embodiments, model optimizer536may be configured to generate models based on instructions received from another component of system100and/or a computing component outside system100(e.g., via interface522, from client device110, etc.). For example, model optimizer536may be configured to receive a visual (e.g., graphical) depiction of a machine learning model and parse that graphical depiction into instructions for creating and training a corresponding neural network. Model optimizer536may be configured to select model training parameters. This selection can be based on model performance feedback received from another component of system100. Model optimizer536may be configured to provide trained models and descriptive information concerning the trained models to model storage104. Model optimizer536may be configured to train data models to generate synthetic data based on an input dataset (e.g., a dataset comprising actual data). For example, model optimizer536may be configured to train data models to generate synthetic data by identifying and replacing sensitive information in a dataset. In some embodiments, model optimizer536may be configured to train data models to generate synthetic data based on a data profile (e.g., a data schema and/or a statistical profile of a dataset). For example, model optimizer536may be configured to train data models to generate synthetic data to satisfy a performance criterion. Performance criteria may be based on a similarity metric representing a measure of similarity between a synthetic dataset and another dataset. 
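The hyperparameter-tuning behavior described above for model optimizer 536 might be sketched, purely illustratively, as a small random search over clustering hyperparameters that stops once a training criterion is satisfied. The silhouette score, the score threshold, and the trial budget below are assumptions, not values taken from the disclosure.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
data = rng.normal(size=(400, 3))               # placeholder training data

best_model, best_score = None, -1.0
score_threshold, max_trials = 0.5, 25          # assumed training criteria

for trial in range(max_trials):
    params = {
        "n_clusters": int(rng.integers(2, 9)),
        "init": str(rng.choice(["k-means++", "random"])),
        "n_init": 10,
        "random_state": int(rng.integers(0, 10_000)),
    }
    model = KMeans(**params).fit(data)
    score = silhouette_score(data, model.labels_)
    if score > best_score:
        best_model, best_score = model, score
    if best_score >= score_threshold:          # stop once the criterion is met
        break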
Embedder537may include programs (e.g., scripts, functions, algorithms) to encode data, to classify data, and/or to cluster data, consistent with disclosed embodiments. Embedder537may include any embedding network layers as described herein. Embedding network layers may comprise machine learning models configured to classify data. For example, an embedding network layer may include a natural language processing model, a binary classification model, a convolutional neural network model, a deep learning model, a Bidirectional Encoder Representations from Transformers (BERT) model, an Embeddings from Language Models (ELMo) representation model, or any other model configured to classify data. In some embodiments, embedder537may include programs to transform string data (e.g., character data or other non-numeric data) into numeric data (e.g., to transform letters, words, or other strings into numbers according to a table). Embedder537may be configured to perform methods of character encoding (e.g., one-hot encoding). In some embodiments, embedder537may be configured to receive, train, and/or implement a machine learning model configured for natural-language processing (i.e., a natural-language model). In some embodiments, embedder537may be configured to implement a natural-language model to encode string data as numeric data. For example, embedder537may transform words and/or phrases into numbers by applying a lexicon, a parser, and a grammar rule system. In some embodiments, embedder537may be configured to receive, train, and/or implement an autoencoder model or components of an autoencoder model (e.g., an encoder model or a decoder model). In some embodiments, embedder537may be configured to implement an autoencoder model to reduce the dimensionality of a dataset. Embedder537may be configured to tag classified and/or clustered data, consistent with disclosed embodiments. Embedder537may include programs configured to cluster data by analyzing properties of data and/or data models. For example, Embedder537may include or be configured to implement one or more data-profiling models. A data-profiling model may include machine-learning models and statistical models to determine a data schema and/or a statistical profile of a dataset (i.e., to profile a dataset), consistent with disclosed embodiments. A data-profiling model may include an RNN model, a CNN model, or other machine-learning model. In some embodiments, embedder537may include algorithms to determine a data type, key-value pairs, row-column data structure, statistical distributions of information such as keys or values, or other property of a data schema, and may be configured to return a statistical profile of a dataset (e.g., using a data-profiling model). In some embodiments, embedder537may be configured to implement univariate and multivariate statistical methods. Embedder537may include a regression model, a Bayesian model, a statistical model, a linear discriminant analysis model, or other classification model configured to determine one or more descriptive metrics of a dataset. For example, embedder537may include algorithms to determine an average, a mean, a standard deviation, a quantile, a quartile, a probability distribution function, a range, a moment, a variance, a covariance, a covariance matrix, a dimension and/or dimensional relationship (e.g., as produced by dimensional analysis such as length, time, mass, etc.), or any other descriptive metric of a dataset. 
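As a small, non-limiting example of the string-to-numeric encoding mentioned above for embedder 537, one-hot encoding with scikit-learn might look like the following (the color values are placeholder data):

import numpy as np
from sklearn.preprocessing import OneHotEncoder

string_column = np.array([["red"], ["green"], ["blue"], ["green"], ["red"]])

encoder = OneHotEncoder(handle_unknown="ignore")
numeric = encoder.fit_transform(string_column).toarray()
# numeric is a 5 x 3 array of 0/1 indicator columns, one per distinct string,
# suitable as numeric input to downstream embedding network layers.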
In some embodiments, embedder537may be configured to return a statistical profile of a dataset (e.g., using a data-profiling model or other model). A statistical profile may include a plurality of descriptive metrics. For example, the statistical profile may include an average, a mean, a standard deviation, a range, a moment, a variance, a covariance, a covariance matrix, a similarity metric, or any other statistical metric of the selected dataset. In some embodiments, embedder537may be configured to generate a similarity metric representing a measure of similarity between data in a dataset. A similarity metric may be based on a correlation, a covariance matrix, a variance, a frequency of overlapping values, or other measure of statistical similarity. In some embodiments, embedder537may be configured to classify data. Classifying data may include determining whether a data sample is related to another data sample. Classifying a dataset may include clustering datasets and generating information indicating whether a dataset belongs to a cluster of datasets. In some embodiments, classifying a dataset may include generating data describing a dataset (e.g., a dataset index), including metadata, an indicator of whether a data element includes actual data and/or synthetic data, a data schema, a statistical profile, a relationship between the test dataset and one or more reference datasets (e.g., node and edge data), and/or other descriptive information. Edge data may be based on a similarity metric. Edge data may indicate a similarity between datasets and/or a hierarchical relationship (e.g., a data lineage, a parent-child relationship). In some embodiments, classifying a dataset may include generating graphical data, such as a node diagram, a tree diagram, or a vector diagram of datasets. Classifying a dataset may include estimating a likelihood that a dataset relates to another dataset, the likelihood being based on the similarity metric. Embedder537may be configured to classify a dataset based on data-model output, consistent with disclosed embodiments. For example, embedder537may be configured to classify a dataset based on a statistical profile of a distribution of activation function values. In some embodiments, embedder537may be configured to classify a dataset based on at least one of an edge, a foreign key, a data schema, or a similarity metric, consistent with disclosed embodiments. In some embodiments, the similarity metric represents a statistical similarity between data-model output of a first dataset and a second dataset, consistent with disclosed embodiments. As another example, a data classification module may classify a dataset as a related dataset based on a determination that a similarity metric between a dataset and a previously classified dataset satisfies a criterion. Clusterer538may include programs to encode data, to classify data, and/or to cluster data based on output of data classification models and/or data clustering models (i.e., based on preliminary clustered data). Clusterer538may be configured to receive, generate, train, and/or implement a meta-clustering model, consistent with disclosed embodiments. A meta-clustering model may include a machine learning model. 
For example, a meta-clustering model may include a deep learning model, a neural network model, an RNN, a CNN, a random forest model, a Support Vector Machine (SVM) model, a Density-based spatial clustering of applications with noise (DBSCAN) model, a k-means clustering model, a distribution-based clustering model, a k-medoids model, and/or any other type of machine learning model. A meta-clustering model may be trained to generate data clusters based on preliminary data clusters produced by embedding network layers. In some embodiments, a meta-clustering model may be configured to encode data (e.g., using a principal component analysis). Encoding data may include a principal component analysis (PCA), an independent component analysis (ICA), a non-negative matrix factorization method (NMF), a Factor Analysis (FA), and/or any other algorithm to reduce dimensionality of latent variable generated by a model. In some embodiments, meta-clustering model may be configured to generate a data map of data based on preliminary data clusters generated by embedding network layers. Generating a data map may be supervised or unsupervised. Generating a data map may include tracking data samples in a plurality of preliminary data clusters and determining relationships between the data samples. In some embodiments, meta-clustering mode may be configured to generate a data map based on encoded data. A meta-clustering model may be configured to identify a conflict between preliminary data clusters. In some embodiments, a meta-clustering model may be configured to determine a performance metric of one or more embedding network layers. In some embodiments, generating a data map may be based on a performance metric. Meta-clustering model may be configured to determine a number of clusters based on a data map and/or a performance metric. Determining a number of clusters may be based on relationships (e.g., edge relationships) between data clusters. A meta-clustering model may be configured to determine a number of clusters by implementing methods such as a k-means algorithm, a k-medoids algorithm, an elbow method, an X-means clustering method, an information criterion approach, a silhouette method, a cross-validation method, a method based on a kernel matrix, and/or any other methods of determining a number of clusters in data. In some embodiments, a meta-clustering model may be configured to generate final data clusters. Generating final data clusters may be based on a data map. In some embodiments, generating final data clusters may include updating one or more embedding network layers by training the embedding network layers based on a number of clusters (e.g., a number of clusters determined based on a data map). In some embodiments, generating final data clusters may include generating updated data clusters using one or more updated embedding network layers. A final data cluster may include an updated data cluster generated by an updated embedding network layer. In some embodiments, a meta-clustering model may be configured to repeatedly update one or more embedding network layers until a performance metric of the one or more embedding network layers is satisfied (i.e., meta-clustering model may train an embedding network layer). 
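A minimal sketch of this repeated update loop, again assuming scikit-learn, placeholder data, and the silhouette score as a stand-in for the performance metric, might look like the following; in the disclosed embodiments the cluster count would instead come from the meta-clustering model's data map rather than the simple search used here.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
data = rng.normal(size=(400, 3))               # placeholder data
layer_seeds = [0, 1, 2, 3]                     # one seed per embedding-layer stand-in
performance_threshold, max_rounds = 0.45, 5    # assumed criterion and budget

for round_index in range(max_rounds):
    # Determine a cluster count (a simple silhouette search stands in for the
    # meta-clustering model's determination based on its data map).
    k = max(range(2, 8), key=lambda kk: silhouette_score(
        data, KMeans(n_clusters=kk, n_init=5, random_state=0).fit_predict(data)))
    # Retrain each layer with the determined cluster count and regenerate clusters.
    layer_labels = [
        KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(data)
        for seed in layer_seeds
    ]
    # Stop once the performance metric across layers satisfies the criterion.
    metric = float(np.mean([silhouette_score(data, lbls) for lbls in layer_labels]))
    if metric >= performance_threshold:
        break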
During individual rounds of training of an embedding network layer, a meta-clustering model may be configured to determine a number of clusters and train the embedding network layer based on the determined number of clusters (e.g., by specifying the number of clusters as a model parameter of the embedding network layer). FIG.6depicts exemplary process600for training an embedding network layer to cluster data, consistent with disclosed embodiments. In some embodiments, data-clustering system102may perform process600using programs535. One or more of model optimizer536, embedder537, clusterer538or other components of programs535may perform operations of process600, consistent with disclosed embodiments. It should be noted that other components of system100, including, for example, client device110may perform operations of one or more steps of process600. Consistent with disclosed embodiments, steps of process600may be performed on one or more cloud services using one or more ephemeral container instances (e.g., AMAZON LAMBDA). For example, at any of the steps of process600, data-clustering system102may generate (spin up) an ephemeral container instance to execute a task, assign a task to an already-running ephemeral container instance (warm container instance), or terminate a container instance upon completion of a task. As one of skill in the art will appreciate, steps of process600may be performed as part of an application interface (API) call. At step602, data-clustering system102may receive training data, consistent with disclosed embodiments. In some embodiments, step602may include receiving training data from data631, one or more client devices (e.g., client device110), dataset database106, remote database108, and/or a computing component outside system100. Step602may include retrieving training data from a data storage (e.g., from data531, dataset database106, and/or remote database108). Training data of step602may include any of the types of data previously described or any other type of dataset. Training data of step602may have a range of dimensions, formats, data schema, and/or statistical profiles. Training data of step602may include time series data. Training data may include clustered data (e.g., a preliminary data cluster). At step604, data-clustering system102may generate or receive an embedding network layer, consistent with disclosed embodiments. Retrieving an embedding network layer may be based on received data (e.g., based on a data profile of a received dataset). Retrieving an embedding network layer may include retrieving a model from data531, model storage104, remote database108, and/or another data storage. At step606, data-clustering system102may train an embedding network layer to classify training data, consistent with disclosed embodiments. Training an embedding network layer to classify data may include any method of model training (e.g., as described in reference to model optimizer536). Classifying data at step606may include generating tags and/or any other method of classifying data. Step606may include training an embedding network to classify data based on training data (e.g., as described in reference to method300,FIG.3) and/or based on an output of another embedding network layer (e.g., as described in reference to method400,FIG.4). At step608, data-clustering system102may train an embedding network layer to cluster training data (i.e., to generate data clusters), consistent with disclosed embodiments (e.g., as described in reference to method300,FIG.3). 
Step608may include training an embedding network to cluster data based on an output of another embedding network layer (e.g., as described in reference to method400,FIG.4). Training an embedding network layer to cluster data may include any method of model training (e.g., as described in reference to model optimizer536). Clustering data at step608may include generating tags, nodes, edges, and/or any other method of classifying data. Step608may include training an embedding network layer to generate preliminary data clusters (e.g., preliminary data clusters306a,306b,306c,306d,and/or306n). In some embodiments, step608includes performing step606(i.e., classifying and clustering training data may be overlapping processes), consistent with disclosed embodiments. In some embodiments, preliminary data clusters may have a number of dimensions equal to a number of dimensions of training data. FIG.7depicts exemplary process700for clustering data using embedding network layers, consistent with disclosed embodiments. Process700may be performed to generate a plurality of embedding network layers (e.g., as described in relation toFIG.3,4,8or9). In some embodiments, process700is directed by a meta-clustering model or other component of cluster538. In some embodiments, data-clustering system102may perform process700using programs535. One or more of model optimizer536, embedder537, clusterer538, and/or other components of programs535may perform operations of process700, consistent with disclosed embodiments. It should be noted that other components of system100, including, for example, client device110may perform operations of one or more steps of process700. Consistent with disclosed embodiments, steps of process700may be performed on one or more cloud services using one or more ephemeral container instances (e.g., AMAZON LAMBDA). For example, at any of the steps of process700, data-clustering system102may generate (spin up) an ephemeral container instance to execute a task, assign a task to an already-running ephemeral container instance (warm container instance), or terminate a container instance upon completion of a task. As one of skill in the art will appreciate, steps of process700may be performed as part of an application interface (API) call. At step702, data-clustering system102may receive data, consistent with disclosed embodiments. Data received at step702may include any type of data in any format, with any number of dimensions, as previously described. In some embodiments, data-clustering system102may receive training parameters or hyperparameters at step702. In some embodiments, data-clustering system102may receive an identifier of an embedding network layer or a selection criterion for selecting an embedding network layer at step702. At step704, data-clustering system102may add an embedding network layer, consistent with disclosed embodiments. In some embodiments, adding an embedding network layer may include adding a first embedding network layer to a plurality of embedding network layers (e.g., embedding network layers304a,304b,304c,304d,and304nas depicted inFIG.3). As previously described, an embedding network layer may include a machine learning model trained to classify and/or cluster data. In some embodiments, adding an embedding network layer includes selecting and retrieving an embedding network layer from a model storage based on an identifier or a selection criterion. 
At step706, data-clustering system102may generate clustered data using the added embedding network layer, consistent with disclosed embodiments. Generating clustered data may include performing any methods of data classification or data clustering, consistent with disclosed embodiments. In some embodiments, generating clustered data at step706includes training an added embedding-network (e.g., by performing steps of process600). In some embodiments, generating clustered data at step706includes implementing a trained, added embedding-network. Clustered data may include a number of dimensions which may be the same as a number of dimensions of received data. Step706may include generating clustered data using, for example, method300or method400(FIG.3,FIG.4). At step708, data-clustering system102may tag clustered data, consistent with disclosed embodiments. Tagging clustered data may include providing data samples to a user (e.g., via interface522or by transmitting data samples) and receiving data tags in response. In some embodiments, generating clustered data and tagging clustered data may be performed concurrently (i.e., steps706and708may be performed at the same time as part of a single process). At step710, data-clustering system102may determine a performance metric of one or more embedding network layers, consistent with disclosed embodiments. For example, a performance metric may be based on a measure of intra-cluster variance as compared to an inter-cluster variance in clustered data. A ratio of intra-cluster variance to inter-cluster variance may indicate how well an embedding network layer classifies data. A high ratio may indicate inaccurate data classification, while a low ratio may indicate accurate data classification. The performance metric at step710may be based on a plurality of individual performance metrics associated with individual embedding network layers (e.g., an average, a maximum of the performance metrics, etc.). A performance metric at step710may be based on a comparison of the number of clusters generated by a plurality of embedding network layers (e.g., a variance, a percent agreement, etc.). A high variance or low percent agreement may indicate inaccurate data classification, while a low variance or high percent agreement may indicate accurate data classification. A performance metric at step710may be based on a k-means algorithm, a k-medoids algorithm, an elbow method, an X-means clustering method, an information criterion approach, a silhouette method, a cross-validation method, a method based on a kernel matrix, and/or any other methods of determining a number of clusters in data. In some embodiments, a meta-clustering model determines a performance criterion at step710. In some embodiments, a performance criterion may be a threshold based on one or more performance metrics of embedding network layers. A threshold may be based on an average or any other statistical measure of one or more performance metrics of embedding network layers. For example, a performance criterion at step710may be based on a minimum performance metric (e.g., the performance criterion may include determining whether at least one embedding network layer meets a minimum performance metric). At step712, data-clustering system102may determine whether to add an embedding network layer, consistent with disclosed embodiments. Determining at step712may be based on a performance criterion (e.g., a performance criterion of step710). 
For example, if the performance criterion indicates disagreement or inaccurate classifications among the plurality of embedding network layers, data-clustering system102may determine to add an embedding layer. Conversely, if the performance criterion indicates agreement or accurate classification, data-clustering system102may determine not to add an embedding layer. In some embodiments, determining at step712may be based on a predetermined number of network layers. Determining at step712may be based on an input (e.g., a manual input received via interface522and/or an input received from client device110). Determining at step712may be based on data received at step702(e.g., a list of embedding network layer identifiers). As shown, data-clustering system102may repeat steps704through710if data-clustering system102determines to add another embedding network layer (i.e., if the determination at step712is “yes”). Alternatively, data-clustering system102may proceed to step714if data-clustering system102determines not to add another embedding network layer (i.e., if the determination at step712is “no”). At step714, data-clustering system102may provide clustered data and/or embedding network layers, consistent with disclosed embodiments. Providing clustered data may include storing data (e.g., in data531, dataset database106, and/or remote database108). Providing clustered data may include transmitting data to another component of system100(e.g., client device110) and/or a component outside system100. Providing clustered data may include displaying a visual representation of clustered data in an interface (e.g., interface522), such as a table, a graph, a node diagram, etc. Providing an embedding network layer may include storing an embedding network layer (e.g., in data531and/or model storage104). Providing an embedding network layer may include transmitting an embedding network layer to another component of system100(e.g., client device110) and/or a component outside system100. Providing embedding network layers may include displaying a visual representation of network layers in an interface (e.g., interface522), such as a table, a graph, etc. FIG.8depicts exemplary process800for training a meta-clustering model to cluster data, consistent with disclosed embodiments. In some embodiments, data-clustering system102may perform process800using programs535. One or more of model optimizer536, embedder537, clusterer538, and/or other components of programs535may perform operations of process800, consistent with disclosed embodiments. It should be noted that other components of system100, including, for example, client device110may perform operations of one or more steps of process800. Consistent with disclosed embodiments, steps of process800may be performed on one or more cloud services using one or more ephemeral container instances (e.g., AMAZON LAMBDA). For example, at any of the steps of process800, data-clustering system102may generate (spin up) an ephemeral container instance to execute a task, assign a task to an already-running ephemeral container instance (warm container instance), or terminate a container instance upon completion of a task. As one of skill in the art will appreciate, steps of process800may be performed as part of an application interface (API) call. At step802, data-clustering system102may receive clustered data from a plurality of embedding network layers, consistent with disclosed embodiments. 
Clustered data may include node-edge data and/or any other classified and/or clustered data. Clustered data received at step802may include preliminary clustered data and/or updated clustered data, as described herein. Clustered data may have a number of dimensions, consistent with disclosed embodiments. At step804, data-clustering system102may generate a meta-clustering model, consistent with disclosed embodiments. A meta-clustering model may include a deep learning model, a neural network model, an RNN, a CNN, a random forest model, a Support Vector Machine (SVM) model, a Density-based spatial clustering of applications with noise (DBSCAN) model, a k-means clustering model, a distribution-based clustering model, a k-medoids model, and/or any other type of machine learning model. Generating a meta-clustering model may include retrieving a model from a data storage (e.g., data531and/or model storage104), consistent with disclosed embodiments. Retrieving a model may be based on user input, data received at step802, and/or a search strategy. At step806, data-clustering system102may generate encoded data based on the clustered data, consistent with disclosed embodiments. Generating encoded data may include performing an encoding method. The encoding method may include a principal component analysis, an independent component analysis (ICA), a non-negative matrix factorization method (NMF), a Factor Analysis (FA), and/or any other algorithm to reduce dimensionality of a latent variable generated by a model. In some embodiments, a meta-clustering model generates encoded data at step806. At step808, data-clustering system102may generate a data map using a meta-clustering model, consistent with disclosed embodiments. The data map may be based on clustered data (e.g., preliminary data clusters) and/or on encoded data (e.g., principal components of the preliminary data clusters). In some embodiments, generating a data map may be unsupervised. In some embodiments, generating a data map may include tracking data samples in a plurality of data clusters and determining relationships between the data samples. In some embodiments, generating a data map may be supervised. For example, generating a data map may include providing data samples to a user and receiving user feedback. At step808, the meta-clustering model may identify a conflict between preliminary data clusters generated by different embedding network layers. Data-clustering system102may request user feedback based on a conflict. In some embodiments, a data map may include a representation of a data sample in a latent space comprised of a number of dimensions (e.g., a number of dimensions may be equal to a number of layers of an embedding network). In some embodiments, a dimension may correspond to a vector associated with neural nodes of an embedding network layer (e.g., a vector of weights, activation function values, etc.). At step810, data-clustering system102may determine whether a performance criterion is met, consistent with disclosed embodiments. In some embodiments, at step810, a meta-clustering model determines a performance metric and determines whether a performance criterion is met based on the performance metric. Data-clustering system102may determine a performance metric of one or more embedding network layers, consistent with disclosed embodiments. For example, a performance metric may be based on a measure of intra-cluster variance as compared to an inter-cluster variance in clustered data.
A ratio of intra-cluster variance to inter-cluster variance may indicate how well an embedding network layer classifies data. A high ratio may indicate inaccurate data classification, while a low ratio may indicate accurate data classification. The performance metric at step810may be based on a plurality of individual performance metrics associated with individual embedding network layers (e.g., an average, a maximum of the performance metrics, etc.). A performance metric at step810may be based on a comparison of the number of clusters generated by a plurality of embedding network layers (e.g., a variance, a percent agreement, etc.). A high variance or low percent agreement may indicate inaccurate data classification, while a low variance or high percent agreement may indicate accurate data classification. A performance criterion may include a threshold of a performance metric. In some embodiments, a meta-clustering model is trained to identify a performance criterion. As shown, in some embodiments, if the performance criterion is met (i.e., if the determination at step810is “yes”), step820follows step810. In some embodiments, if the performance criterion is not met (i.e., if the determination at step810is “no”), one or more of steps812through818follows step810. At step812, data-clustering system102may determine a number of clusters using a meta-clustering model, consistent with disclosed embodiments. In some embodiments, determining a number of clusters may be based on a data map and/or a performance metric. Determining a number of clusters may be based on relationships (e.g., edge relationships) between data clusters. In some embodiments, step812includes implementing a meta-clustering model trained to determine a number of clusters that optimizes a property of clustered data (e.g., trained to optimize a measure of variance of a cluster, a ratio of intra-cluster variance to inter-cluster variance, etc.). At step812, data-clustering system102may determine a number of clusters by implementing methods such as a k-means algorithm, a k-medoids algorithm, an elbow method, an X-means clustering method, an information criterion approach, a silhouette method, a cross-validation method, a method based on a kernel matrix, and/or any other methods of determining a number of clusters in data. At step814, data-clustering system102may generate one or more updated embedding network layers, consistent with disclosed embodiments. Step814may include generating an updated embedding network layer by training the embedding network layer based on a number of clusters. Step814may include performing steps of process600and/or process700. Step814may include adding an embedding network layer, consistent with disclosed embodiments. Step814may include generating one or more updated embedding network layers as described in reference to method300(FIG.3) and/or as described in reference to method400(FIG.4). At step816, data-clustering system102may generate updated cluster data, consistent with disclosed embodiments. Step816may include implementing one or more network embedding layers to generate updated clustered data, including implementing an updated embedding network layer. Step816may include generating updated cluster data as described in reference to method300(FIG.3) and/or as described in reference to method400(FIG.4). At step818, data-clustering system102may update a meta-clustering model, consistent with disclosed embodiments.
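As a hedged illustration of one of the methods listed for step812, the silhouette method can be used to select a candidate number of clusters. The use of scikit-learn and the specific function and parameter names below are assumptions for this sketch only.

```python
# Illustrative sketch only: choosing a number of clusters with the
# silhouette method, one of the approaches listed for step812.
# The use of scikit-learn here is an assumption for the example.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_number_of_clusters(samples: np.ndarray, k_min: int = 2, k_max: int = 10) -> int:
    """Return the candidate cluster count with the highest silhouette score."""
    best_k, best_score = k_min, -1.0
    for k in range(k_min, min(k_max, len(samples) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(samples)
        score = silhouette_score(samples, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

rng = np.random.default_rng(1)
samples = np.vstack([rng.normal(c, 0.2, (40, 3)) for c in (0.0, 3.0, 6.0)])
print(choose_number_of_clusters(samples))  # expected to report 3 clusters
```

The cluster count chosen in this way could then drive the retraining of embedding network layers described for steps814 and816.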
In some embodiments, updating the meta-clustering model at step818includes updating model parameters based on updated cluster data. In this way, a meta-clustering model may be trained to optimize data clusters based on a number of clusters and/or other parameters. Step818may include repeatedly updating one or more embedding network layers until a performance metric of the one or more embedding network layers is satisfied. As shown inFIG.8, step806and/or step808may follow step818. In some embodiments, data-clustering system102may repeat steps806,808,810,812,814,816, and/or818to train the meta-clustering model to determine a number of clusters based on the data map and a performance criterion, consistent with disclosed embodiments. At step820, data-clustering system102may generate final clustered data, consistent with disclosed embodiments. In some embodiments, the meta-clustering model may generate final data clusters based on a data map. In some embodiments, final clustered data may be the same as the data map. In some embodiments, generating final clustered data (i.e., final data clusters) may include an updated data cluster generated by an updated embedding network layer. In some embodiments, generating final clustered data may include selecting a data cluster generated by an embedding network layer. For example, data-clustering system102may select preliminary clustered data or updated clustered data based on a performance metric of an embedding network layer. Final clustered data may have a number of dimensions, which may be equal to a number of embedding layers multiplied by a number of dimensions of clustered data and/or encoded data. At step822, data-clustering system102may provide final clustered data, a data map, a number of clusters, and/or a meta-clustering model, consistent with disclosed embodiments. Providing final clustered data, a data map, a number of clusters, and/or a meta-clustering model may include storing data (e.g., in data531, model storage104, dataset database106, and/or remote database108). Providing final clustered data, a data map, a number of clusters, and/or a meta-clustering model may include transmitting data to another component of system100(e.g., client device110) and/or a component outside system100. Providing final clustered data, a data map, a number of clusters, and/or a meta-clustering model may include displaying a visual representation of final clustered data, a data map, a number of clusters, and/or a meta-clustering model in an interface (e.g., interface522), such as a table, a graph, a node diagram, etc. FIG.9depicts exemplary process900for clustering data using a meta-clustering model, consistent with disclosed embodiments. In some embodiments, data-clustering system102may perform process900using programs535. One or more of model optimizer536, embedder537, clusterer538, and/or other components of programs535may perform operations of process900, consistent with disclosed embodiments. It should be noted that other components of system100, including, for example, client device110, may perform operations of one or more steps of process900. Consistent with disclosed embodiments, steps of process900may be performed on one or more cloud services using one or more ephemeral container instances (e.g., AMAZON LAMBDA).
For example, at any of the steps of process900, data-clustering system102may generate (spin up) an ephemeral container instance to execute a task, assign a task to an already-running ephemeral container instance (warm container instance), or terminate a container instance upon completion of a task. As one of skill in the art will appreciate, steps of process900may be performed as part of an application programming interface (API) call. At step902, data-clustering system102may receive a clustering request, consistent with disclosed embodiments. A clustering request may include data (e.g., data to be clustered). A clustering request may include clustered data. A clustering request may include an identifier of an embedding network layer and/or a meta-clustering model. A clustering request may include tags or other classification data. Data received at step902may include any type of data with any number of dimensions, consistent with disclosed embodiments. At step904, data-clustering system102may generate preliminary clustered-data based on received data using a plurality of embedding network layers, consistent with disclosed embodiments. Preliminary clustered-data may have a number of dimensions. Generating preliminary clustered-data may include performing steps of process600and/or process700. Step904may include generating preliminary clustered-data as described in reference to method300(FIG.3) and/or as described in reference to method400(FIG.4). At step906, data-clustering system102may generate a data map using a meta-clustering model, consistent with disclosed embodiments. Generating a data map may include any of the methods of generating a data map previously described. Generating a data map may include encoding preliminary clustered data, consistent with disclosed embodiments. At step908, data-clustering system102may determine whether to request user input, consistent with disclosed embodiments. For example, data-clustering system102may determine to request user input to classify (e.g., tag) a data sample. Determining whether to request user input may be based on a predetermined command (e.g., a command to perform a supervised or unsupervised model training). As shown, data-clustering system102may perform step910if data-clustering system102determines not to request user input (i.e., if the determination at step908is “no”). Alternatively, data-clustering system102may perform step912and/or step914if data-clustering system102determines to request user input (i.e., if the determination at step908is “yes”). At step910, data-clustering system102may generate data sample tags, consistent with disclosed embodiments. In some embodiments, generating data sample tags may be based on preliminary data-clusters and/or a data map (e.g., data samples may be tagged (classified) based on learned classifications of the meta-clustering model, the learned classifications being based on preliminary data-clusters and/or a data map). Step910may include unsupervised data tagging (i.e., tagging without user input). At step912, data-clustering system102may transmit clustered data samples to client device110and/or display clustered data samples at interface522, consistent with disclosed embodiments. For example, data-clustering system102may transmit and/or display a data sample with a query for user input to identify a data classification category and/or a data tag associated with the data sample (e.g., to label an image as containing an object class such as a “hairless cat”).
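For illustration only, one way to picture the data map of step906 is as a per-sample vector of the cluster labels assigned by each embedding network layer, with disagreements between layers surfaced for user tagging as in steps908-914. The array layout, the two-layer example labelings, and the simple percent-agreement check below are assumptions for this sketch, not the disclosed data map itself.

```python
# Illustrative sketch only: one possible representation of the "data map"
# of step906, where each sample is described by the cluster label it
# receives from each embedding network layer, plus a simple percent-agreement
# check between layers. The two-layer labelings are made-up example data.
import numpy as np

# Preliminary cluster labels for ten samples from two embedding network layers.
layer_a = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0])
layer_b = np.array([0, 0, 1, 1, 2, 1, 0, 1, 2, 0])

# Data map: one row per sample, one column per embedding network layer.
data_map = np.column_stack([layer_a, layer_b])
print(data_map.shape)  # (10, 2)

# Percent agreement between the two layers' labelings (label ids are assumed
# to be aligned for simplicity; a real system might match them first).
agreement = float((layer_a == layer_b).mean())
print(f"percent agreement: {agreement:.0%}")

# Samples where the layers conflict could be surfaced to a user for tagging.
conflicts = np.flatnonzero(layer_a != layer_b)
print("conflicting sample indices:", conflicts.tolist())
```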
At step914, data-clustering system102may receive data sample tags, consistent with disclosed embodiments. Receiving data sample tags may be based on user input received from client device110and/or via interface522. Data sample tags received at step914may correspond to data samples transmitted and/or displayed at step912. At step916, data-clustering system102may determine a number of clusters using a meta-clustering model, consistent with disclosed embodiments. Determining a number of clusters may include performing any of the methods of determining a number of clusters as previously described. In some embodiments, step922follows step916. For example, if the number of clusters as determined by a meta-clustering model matches a number of clusters of a preliminary data cluster, step922may follow step916. As another example, if a performance criterion of the preliminary data clusters is met, step922may follow step916. At step918, data-clustering system102may generate one or more updated embedding network layers, consistent with disclosed embodiments. Step918may include implementing steps of process700. Step918may include repeatedly updating an embedding network layer until a performance criterion is met. Step918may include generating one or more updated embedding network layers as described in reference to method300(FIG.3) and/or as described in reference to method400(FIG.4). At step920, data-clustering system102may generate updated clustered-data using one or more updated embedding network layers, consistent with disclosed embodiments. Step920may include any of the methods of generating updated clustered-data previously described. Step920may include generating updated clustered-data as described in reference to method300(FIG.3) and/or as described in reference to method400(FIG.4). At step922, data-clustering system102may generate final clustered-data using a meta-clustering model, consistent with disclosed embodiments. Step922may include any of the methods of generating final clustered-data previously described. Final clustered-data may include a number of dimensions, consistent with disclosed embodiments. At step924, data-clustering system102may provide final clustered data, a data map, a number of clusters, and/or a meta-clustering model, consistent with disclosed embodiments. Providing final clustered data, a data map, a number of clusters, and/or a meta-clustering model may include any of the previously described methods of providing final clustered data, a data map, a number of clusters, and/or a meta-clustering model. FIG.10depicts an exemplary process to supervise data clustering by a meta-clustering model, consistent with disclosed embodiments. In some embodiments, client device110may perform steps of process1000. In some embodiments, client device110may be connected to data-clustering system102to perform steps of process1000. In some embodiments, client device110may be a component of data-clustering system102and perform steps of process1000. It should be noted that other components of system100, including, for example, data-clustering system102, may perform operations of one or more steps of process1000. At step1002, client device110may transmit a clustering request to data-clustering system102, consistent with disclosed embodiments. A clustering request may include data (e.g., data to be clustered). A clustering request may include clustered data. A clustering request may include an identifier of an embedding network layer and/or a meta-clustering model.
A clustering request may include tags or other classification data. Data transmitted at step1002may include any type of data with any number of dimensions, consistent with disclosed embodiments. At step1004, client device110may receive clustered data samples from data-clustering system102, consistent with disclosed embodiments. Clustered data samples may include embedding network layer output. Clustered data samples may include preliminary and/or final clustered data. At step1006, client device110may tag clustered data samples, consistent with disclosed embodiments. Tagging a clustered data sample may include providing text data, numeric data, and/or any other data associated with the clustered data samples. A tag may be associated with a category or class of data. At step1008, client device110may transmit tags, consistent with disclosed embodiments. Transmitting tags may include transmitting tags to data-clustering system102, dataset database106, and/or remote database108. At step1010, client device110may receive clustered data, a data map, a number of clusters, and/or a meta-clustering model, consistent with disclosed embodiments. Receiving clustered data, a data map, a number of clusters, and/or a meta-clustering model may include receiving data from data-clustering system102. Systems and methods disclosed herein involve unconventional improvements over conventional approaches to data clustering. Descriptions of the disclosed embodiments are not exhaustive and are not limited to the precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. Additionally, the disclosed embodiments are not limited to the examples discussed herein. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure may be implemented as hardware alone. Computer programs based on the written description and methods of this specification are within the skill of a software developer. The various functions, scripts, programs, or modules can be created using a variety of programming techniques. For example, programs, scripts, functions, program sections or program modules can be designed in or by means of languages, including JAVASCRIPT, C, C++, JAVA, PHP, PYTHON, RUBY, PERL, BASH, or other programming or scripting languages. One or more of such software sections or modules can be integrated into a computer system, non-transitory computer-readable media, or existing communications software. The programs, modules, or code can also be implemented or replicated as firmware or circuit logic. Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure.
The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
82,122
11861419
DETAILED DESCRIPTION Disclosed herein are systems, methods, and devices for improved information routing in a network computing environment. An embodiment of the disclosure is an asynchronous object manager configured to track the life cycle of different control plane information. The asynchronous object manager can be deployed in a network routing environment and may be included in a software stack for controlling operations of a networking device such as a switch or router. In a computer network environment, a networking device such as a switch or router may be used to transmit information from one location to a final destination. In an embodiment, a data package and a message may be generated at a first location such as a computer within a person's home. The data package and the message could be generated from the person interacting with a web browser and requesting information from or providing information to a remote server accessible over the Internet. In an example, the data package and the message could be information the person input into a form accessible on a webpage connected to the Internet. The data package and the message may need to be transmitted to the remote server that may be geographically located very far from the person's computer. It is very likely that there is no direct communication between the router at the person's home and the remote server. Therefore, the data package and the message must travel by “hopping” to different networking devices until reaching the final destination at the remote server. The router at the person's home must determine a route for transmitting the data package and the message through multiple different devices connected to the Internet until the data package and the message reach the final destination at the remote server. The processes of determining a best path from a first location to a final destination and forwarding data packages and messages to a next destination are significant functions performed by a networking device such as a switch or router. Disclosed herein are systems, methods, and devices for improving the operations of a networking device. An embodiment of the disclosure is encompassed in a software stack that operates on the routing chip hardware of a switch or router. One portion of the software stack is the asynchronous object manager as discussed herein. The asynchronous object manager enables numerous benefits in the network routing environment. First, because of the operations performed by the asynchronous object manager, producers of messages no longer need to speak with one another and can instead transmit messages to be organized by the asynchronous object manager. Further, the asynchronous object manager provides a means to express relationships and dependencies between different objects. Further, a state machine within the asynchronous object manager allows the asynchronous object manager to wait for all required objects to arrive before issuing an order to a different portion of the software stack. In an embodiment, the asynchronous object manager is software that sits on top of and manages routing chip hardware in a networking device such as a router or switch. Notably, the asynchronous object manager may be used in a router or switch without modification to the software language. In an embodiment, the asynchronous object manager in combination with other software stacks can be used to convert a switch to a router and vice versa.
The asynchronous object manager collects information from a computer network and digests the information in a form that can be programmed by routing chip hardware. The asynchronous object manager causes the package forwarding function to be performed by the routing chip hardware. The asynchronous object manager is the lowest layer of the software stack that operates on the networking device. The asynchronous object manager is responsible for interacting with the underlying routing chip hardware and for interacting with application program interfaces (APIs). The asynchronous object manager may work in conjunction with a data plane adaptation layer (DPAL). The DPAL may have multiple clients and may need to wait for different pieces of information from different clients. The DPAL cannot perform its functions unless the objects are arranged in a specific order. In an embodiment, the primary task of the asynchronous object manager is to reorder objects so the objects can be processed by the DPAL. The asynchronous object manager can be used outside the context of DPAL and may be applied as VPN software for building layered segments on top of a layer 3 network, among other applications. In an embodiment, the asynchronous object manager is a state machine. The asynchronous object manager is designed to build in certain states within the conditions of predefined parameters. The asynchronous object manager may be configured to rearrange messages that carry programming information rather than managing information packages. In an example, the asynchronous object manager is configured to tell the underlying routing chip hardware that, for example, interface A needs to be transferred to interface B, and so forth. In an embodiment, the asynchronous object manager interacts with the DPAL and provides the DPAL with a route for transferring messages from a first location to a final destination. In an example, the DPAL may receive messages over multiple communication channels. The DPAL can query the asynchronous object manager and request a route for transmitting a message. The asynchronous object manager creates the route, records the route, and provides the route to the DPAL. The asynchronous object manager is unique in that it expresses relationships between different objects. The state machine built inside the asynchronous object manager allows the asynchronous object manager to wait for all objects to arrive before issuing a route to the DPAL or allowing changes to be made to any objects. The asynchronous object manager eliminates the need for object producers to communicate with one another. For purposes of furthering an understanding of the disclosure, some explanation will be provided for numerous networking computing devices and protocols. A BGP instance is a device for routing information in a network. A BGP instance may take the form of a route reflector appliance. The BGP instance may run on a switch, router, or BGP speakers on a switch. At a high level, the BGP instance sends all the paths it has learnt for a prefix to the best path controller. The best path controller responds with a set of best paths from amongst those paths. The best path controller is permitted to modify the next-hop and attributes for any of the paths. Once the best paths are received, the BGP instance updates the local Routing Information Base (RIB) and advertises the best path out to its neighbors. A switch (which may alternatively be referred to as a switching hub, bridging hub, or MAC bridge) creates a network.
Most internal networks use switches to connect computers, printers, phones, cameras, lights, and servers in a building or campus. A switch serves as a controller that enables networked devices to talk to each other efficiently. Switches connect devices on a computer network by using packet switching to receive, process, and forward data to the destination device. A network switch is a multiport network bridge that uses hardware addresses to process and forward data at a data link layer (layer 2) of the Open Systems Interconnection (OSI) model. Some switches can also process data at the network layer (layer 3) by additionally incorporating routing functionality. Such switches are commonly known as layer-3 switches or multilayer switches. A router connects networks. Switches and routers perform similar functions, but each has its own distinct function to perform on a network. A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the Internet, such as a web page, email, or other form of information, is sent in the form of a data packet. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork (e.g., the Internet) until the packet reaches its destination node. Routers are connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Then, using information in the router's routing table or routing policy, the router directs the packet to the next network on its journey. A BGP speaker is a router enabled with the Border Gateway Protocol (BGP). A routing table or routing information base (RIB) is a data table stored in a router or a networked computer that lists the routes to particular network destinations. In some cases, a routing table includes metrics for the routes such as distance, weight, and so forth. The routing table includes information about the topology of the network immediately around the router on which it is stored. The construction of routing tables is the primary goal of routing protocols. Static routes are entries made in a routing table by non-automatic means and which are fixed rather than being the result of some network topology discovery procedure. A routing table may include at least three information fields, including a field for network ID, metric, and next hop. The network ID is the destination subnet. The metric is the routing metric of the path through which the packet is to be sent. The route will go in the direction of the gateway with the lowest metric. The next hop is the address of the next station to which the packet is to be sent on the way to its final destination. The routing table may further include quality of service associated with the route, links to filtering criteria lists associated with the route, an interface for an Ethernet card, and so forth. For purposes of illustrating the concept of a routing table, the routing table may be analogized to using a map for delivering a package. A routing table is similar to the use of a map for delivering a package to its final destination. When a node needs to send data to another node on a network, the node must first know where to send the data. If the node cannot directly connect to the destination node, the node must send the data to other nodes along a proper route to the destination node.
Most nodes do not try to figure out which routes might work. Instead, a node will send an IP packet to a gateway in the LAN, which then decides how to route the data to the correct destination. Each gateway will need to keep track of which way to deliver various packages of data, and for this it uses a routing table. A routing table is a database that keeps track of paths, like a map, and uses these paths to determine which way to forward traffic. Gateways can also share the contents of their routing table with other nodes requesting the information. For hop-by-hop routing, each routing table lists, for all reachable destinations, the address of the next device along the path to that destination, i.e. the next hop. Assuming the routing tables are consistent, the algorithm of relaying packets to their destination's next hop thus suffices to deliver data anywhere in a network. Hop-by-hop is a characteristic of an IP Internetwork Layer and the Open Systems Interconnection (OSI) model. The Open Systems Interconnection (OSI) model is a conceptual model that characterizes and standardizes the communication functions of a computing system without regard to its underlying internal structure and technology. The goal of the OSI model is the interoperability of diverse communication systems with standard communication protocols. The OSI model partitions a communication system into abstraction layers. A layer serves the layer above it and is served by the layer below. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that constitute the contents of that path. Two instances at the same layer are visualized as connected by a horizontal connection in that layer. Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI model, abstractly describe the functionality provided to an (N)-layer by an (N-1)-layer, wherein N is one of the layers of protocols operating in the local host. Route control is a type of network management that aims to improve Internet connectivity and reduce bandwidth cost and overall internetwork operations. Some route control services include a suite of hardware-based and software-based products and services that work together to improve overall Internet performance and finetune the use of available Internet bandwidth at minimal cost. Route control can be successful in scenarios where a network or autonomous system is sourcing Internet bandwidth from multiple providers. Route control can aid in the selection of the most optimal path for data transmission. Some network communication systems are large, enterprise-level networks with thousands of processing nodes. The thousands of processing nodes share bandwidth from multiple Internet Service Providers (ISPs) and can process significant Internet traffic. Such systems can be extremely complex and must be properly configured to result in acceptable Internet performance. If the systems are not properly configured for optimal data transmission, the speed of Internet access can decrease, and the system can experience high bandwidth consumption and traffic. To counteract this problem, a set of services may be implemented to remove or reduce these concerns. This set of services may be referred to as routing control. An embodiment of a routing control mechanism is composed of hardware and software. 
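For illustration only, the routing-table fields (network ID, metric, next hop) and the hop-by-hop forwarding model described above might be sketched as follows. The table entries, addresses, and tie-breaking rule are assumptions made for this example, not entries from any disclosed embodiment.

```python
# Illustrative sketch only: a minimal routing-table lookup showing the
# network ID / metric / next hop fields and hop-by-hop forwarding described
# above. Addresses and prefixes are made-up example values.
import ipaddress

# Each entry: destination subnet (network ID), routing metric, next hop.
routing_table = [
    {"network": ipaddress.ip_network("10.0.0.0/8"),  "metric": 10,  "next_hop": "192.168.1.1"},
    {"network": ipaddress.ip_network("10.1.0.0/16"), "metric": 5,   "next_hop": "192.168.1.2"},
    {"network": ipaddress.ip_network("0.0.0.0/0"),   "metric": 100, "next_hop": "192.168.1.254"},
]

def next_hop(destination: str) -> str:
    """Pick the longest-prefix match; break ties with the lowest metric."""
    addr = ipaddress.ip_address(destination)
    candidates = [e for e in routing_table if addr in e["network"]]
    best = max(candidates, key=lambda e: (e["network"].prefixlen, -e["metric"]))
    return best["next_hop"]

print(next_hop("10.1.2.3"))  # matches 10.1.0.0/16 -> 192.168.1.2
print(next_hop("8.8.8.8"))   # falls back to the default route -> 192.168.1.254
```

In a hop-by-hop network, each gateway along the way would perform a lookup of this general kind and hand the packet to its own next hop.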
The routing control mechanism monitors all outgoing traffic through its connection with an Internet Service Provider (ISP). The routing control mechanism aids in selecting the best path for efficient transmission of data. The routing control mechanism may calculate the performance and efficiency of all ISPs and select only those ISPs that have performed optimally in applicable areas. Route control devices can be configured according to defined parameters pertaining to cost, performance, and bandwidth. A known algorithm for determining the best path for the transmission of data is referred to as the Border Gateway Protocol (BGP). BGP is a path-vector protocol that provides routing information for autonomous systems on the Internet. When BGP is configured incorrectly, it can cause severe availability and security issues. Further, modified BGP route information can permit attackers to redirect large blocks of traffic so the traffic travels to certain routers before reaching its intended destination. The BGP best path algorithm can be implemented to determine the best path to install in an Internet Protocol (IP) routing table for traffic forwarding. BGP routers may be configured to receive multiple paths to the same destination. The BGP best path algorithm assigns a first valid path as the current best path. The BGP best path algorithm compares the best path with the next path in the list until it reaches the end of the list of valid paths. The list provides the rules that are used to determine the best path. For example, the list may include an indication that the path with the highest weight is preferred, the path with the highest local preference is preferred, the path that was locally originated by way of a network or aggregate BGP is preferred, the path with the shortest AS path is preferred, a path with the lowest multi-exit discriminator is preferred, and so forth. The BGP best path selection process can be customized. In the context of BGP routing, each routing domain is known as an autonomous system (AS). BGP assists in selecting a path through the Internet to connect two routing domains. BGP typically selects a route that traverses the least number of autonomous systems, referred to as the shortest AS path. In an embodiment, once BGP is enabled, a router will pull a list of Internet routes from BGP neighbors, which may be ISPs. BGP will then scrutinize the list to find routes with the shortest AS paths. These routes may be entered in the router's routing table. Generally, a router will choose the shortest path to an AS. BGP uses path attributes to determine how to route traffic to specific networks.
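As a hedged illustration of the best-path comparison just described, the sketch below compares candidate paths on a small subset of the BGP rules (highest weight, then highest local preference, then shortest AS path, then lowest multi-exit discriminator). Real BGP implementations apply more rules and details; the class and attribute names here are assumptions for the example.

```python
# Illustrative sketch only: a highly simplified best-path selection in the
# spirit of the BGP rules described above. Not a complete BGP implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Path:
    next_hop: str
    weight: int = 0
    local_pref: int = 100
    as_path: List[int] = field(default_factory=list)
    med: int = 0

def best_path(paths: List[Path]) -> Path:
    # Start with the first valid path and compare it against each remaining path.
    best = paths[0]
    for candidate in paths[1:]:
        # Higher weight and local preference win; shorter AS path and lower
        # MED win, hence the negated terms in the comparison tuple.
        if (candidate.weight, candidate.local_pref, -len(candidate.as_path), -candidate.med) > \
           (best.weight, best.local_pref, -len(best.as_path), -best.med):
            best = candidate
    return best

paths = [
    Path("192.0.2.1", weight=0, local_pref=100, as_path=[65001, 65002, 65003]),
    Path("192.0.2.2", weight=0, local_pref=200, as_path=[65001, 65004]),
]
print(best_path(paths).next_hop)  # 192.0.2.2 wins on higher local preference
```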
Before the structure, systems and methods for tracking the life cycle of objects in a network computing environment are disclosed and described, it is to be understood that this disclosure is not limited to the particular structures, configurations, process steps, and materials disclosed herein as such structures, configurations, process steps, and materials may vary somewhat. It is also to be understood that the terminology employed herein is used for the purpose of describing particular embodiments only and is not intended to be limiting since the scope of the disclosure will be limited only by the appended claims and equivalents thereof. In describing and claiming the subject matter of the disclosure, the following terminology will be used in accordance with the definitions set out below. It must be noted that, as used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. As used herein, the terms “comprising,” “including,” “containing,” “characterized by,” and grammatical equivalents thereof are inclusive or open-ended terms that do not exclude additional, unrecited elements or method steps. As used herein, the phrase “consisting of” and grammatical equivalents thereof exclude any element or step not specified in the claim. As used herein, the phrase “consisting essentially of” and grammatical equivalents thereof limit the scope of a claim to the specified materials or steps and those that do not materially affect the basic and novel characteristic or characteristics of the claimed disclosure. Referring now to the figures,FIG.1illustrates a schematic diagram of a system100for connecting devices to the Internet. The system100includes multiple local area network160connected by a switch106. Each of the multiple local area networks160can be connected to each other over the public Internet by way of a router162. In the example system100illustrated inFIG.1, there are two local area networks160. However, it should be appreciated that there may be many local area networks160connected to one another over the public Internet. Each local area network160includes multiple computing devices108connected to each other by way of a switch106. The multiple computing devices108may include, for example, desktop computers, laptops, printers, servers, and so forth. The local area network160can communicate with other networks over the public Internet by way of a router162. The router162connects multiple networks to each other. The router162is connected to an internet service provider102. The internet service provider102is connected to one or more network service providers104. The network service providers104are in communication with other local network service providers104as shown inFIG.1. The switch106connects devices in the local area network160by using packet switching to receive, process, and forward data to a destination device. The switch106can be configured to, for example, receive data from a computer that is destined for a printer. The switch106can receive the data, process the data, and send the data to the printer. The switch106may be a layer-1 switch, a layer-2 switch, a layer-3 switch, a layer-4 switch, a layer-7 switch, and so forth. A layer-1 network device transfers data but does not manage any of the traffic coming through it. An example of a layer-1 network device is an Ethernet hub. 
A layer-2 network device is a multiport device that uses hardware addresses to process and forward data at the data link layer (layer 2). A layer-3 switch can perform some or all of the functions normally performed by a router. However, some network switches are limited to supporting a single type of physical network, typically Ethernet, whereas a router may support different kinds of physical networks on different ports. The router162is a networking device that forwards data packets between computer networks. In the example system100shown inFIG.1, the routers162are forwarding data packets between local area networks160. However, the router162is not necessarily applied to forwarding data packets between local area networks160and may be used for forwarding data packets between wide area networks and so forth. The router162performs traffic direction functions on the Internet. The router162may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission. The router162can support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers162may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. The router162can provide connectivity within an enterprise, between enterprises and the Internet, or between internet service providers' networks as shown inFIG.1. Some routers162are configured to interconnect various internet service providers or may be used in large enterprise networks. Smaller routers162typically provide connectivity for home and office networks to the Internet. The router162shown inFIG.1may represent any suitable router for network transmissions such as an edge router, subscriber edge router, inter-provider border router, core router, internet backbone, port forwarding, voice/data/fax/video processing routers, and so forth. The internet service provider (ISP)102is an organization that provides services for accessing, using, or participating in the Internet. The ISP102may be organized in various forms, such as commercial, community-owned, non-profit, or privately owned. Internet services typically provided by ISPs102include Internet access, Internet transit, domain name registration, web hosting, Usenet service, and colocation. The ISPs102shown inFIG.1may represent any suitable ISPs such as hosting ISPs, transit ISPs, virtual ISPs, free ISPs, wireless ISPs, and so forth. The network service provider (NSP)104is an organization that provides bandwidth or network access by providing direct Internet backbone access to Internet service providers. Network service providers may provide access to network access points (NAPs). Network service providers104are sometimes referred to as backbone providers or Internet providers. Network service providers104may include telecommunication companies, data carriers, wireless communication providers, Internet service providers, and cable television operators offering high-speed Internet access. Network service providers104can also include information technology companies. It should be appreciated that the system100illustrated inFIG.1is exemplary only and that many different configurations and systems may be created for transmitting data between networks and computing devices.
Because there is a great deal of customizability in network formation, there is a desire to create greater customizability in determining the best path for transmitting data between computers or between networks. In light of the foregoing, disclosed herein are systems, methods, and devices for offloading best path computations to an external device to enable greater customizability in determining a best path algorithm that is well suited to a certain grouping of computers or a certain enterprise. FIG.2is a schematic diagram of an object structure200including three types of objects A, B, and C in an environment managed by an asynchronous object manager as discussed herein. The object instances of type A, B, and C are identified by keys. The object of type A is represented by key A1. The objects of type B are represented by the keys B1and B2. The objects of type C are represented by the keys C1and C2. The arrows represent dependencies between objects. The object of type A with key A1is dependent on both objects of type B with keys B1and B2. The object of type B with key B1depends on the object of type C with key C1. The object of type B with key B2depends on the object of type C with key C2. In an example implementation, it is assumed that objects of type A and B are produced by one producer and objects of type C are produced by another producer. The information in these objects can be programmed in the data plane in order of C, then B, then A. However, because the producers push these objects asynchronously, the objects might arrive out of order. The asynchronous object manager as discussed herein can be used to solve the ordering requirement illustrated in the above example implementation. In an embodiment, all objects are added to the asynchronous object manager in whichever order the objects are received. The asynchronous object manager detects whether the dependencies of a given object are resolved. In response to determining the dependencies are resolved, the asynchronous object manager calls back an application-provided callback and passes the object and its application state for further action. In the example implementation, the actual arrival sequence of the objects is B(B1), A(A1), B(B2), C(C1), and C(C2). This is illustrated inFIG.3. The asynchronous object manager reorders this sequence and initiates a callback on the application in a suitable order. The ordering of the objects may differ for different implementations. An example ordering of the objects is: C(C1), C(C2), B(B1), B(B2), A(A1). This is illustrated inFIG.4. Likewise, a deletion sequence from the producers may look like: C(C1), C(C2), B(B1), B(B2), A(A1). The asynchronous object manager would reorder the deletion sequence to be: A(A1), B(B1), B(B2), C(C1), C(C2). This is illustrated inFIG.5. The dependencies may be defined according to a unique key that identifies the object in the system space of the asynchronous object manager. The dependency may indicate the key of the parent object that the child object adopts. For example, a route object may have a field called “nexthopID” and the nexthopID is the key of the NextHop object. When a route object is added to the asynchronous object manager, a separate dependency list includes the nexthopID. This in turn uniquely identifies the NextHop object it depends upon. FIG.6is a schematic diagram of communications facilitated by a BGP instance610. In an embodiment, there is a datastore602local to the BGP instance that stores pertinent information for the system.
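For illustration of the reordering behavior described for FIGS.2-5, the toy sketch below adds objects in the out-of-order arrival sequence from the example and fires an application callback for an object only once all of its parents have been resolved. The class and method names are assumptions for this sketch, and, as noted above, the exact callback ordering may differ between implementations; this sketch produces one valid order.

```python
# Illustrative sketch only: a toy dependency-aware object manager in the
# spirit of the asynchronous object manager described above. Objects may be
# added in any order; an object's callback fires only once all objects it
# depends on have been resolved.
from collections import defaultdict

class ToyObjectManager:
    def __init__(self, callback):
        self.callback = callback
        self.resolved = set()             # keys whose callbacks have fired
        self.pending = {}                 # key -> list of unresolved parent keys
        self.waiters = defaultdict(list)  # parent key -> child keys waiting on it

    def add(self, key, depends_on=()):
        missing = [d for d in depends_on if d not in self.resolved]
        if missing:
            self.pending[key] = missing
            for parent in missing:
                self.waiters[parent].append(key)
        else:
            self._resolve(key)

    def _resolve(self, key):
        self.resolved.add(key)
        self.callback(key)
        # Ripple the effect to children that were waiting on this object.
        for child in self.waiters.pop(key, []):
            remaining = self.pending[child]
            remaining.remove(key)
            if not remaining:
                del self.pending[child]
                self._resolve(child)

order = []
manager = ToyObjectManager(order.append)
# Arrival sequence from the example above: B(B1), A(A1), B(B2), C(C1), C(C2).
manager.add("B1", depends_on=["C1"])
manager.add("A1", depends_on=["B1", "B2"])
manager.add("B2", depends_on=["C2"])
manager.add("C1")
manager.add("C2")
print(order)  # one valid order: ['C1', 'B1', 'C2', 'B2', 'A1']
```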
The datastore602ofFIG.6may be a database storing best path information for one or more routers or switches. The datastore602may further store system state information such as CPU utilization, temperature, fan speed, and state information for peripherals such as LEDs or other devices. The datastore602may store a variety of information that may be useful to a monitoring agent. The information in the datastore602can be streamed out to another controller or device that could want such information. The datastore602may include a database index and may include multiple hosts. Each of the multiple hosts may include a processor and cache memory. In the example embodiment shown inFIG.6, the datastore602includes a database host1, a database host2, a database host3, and so on through database host n. The datastore602is in communication with a producer604, a producer-consumer606, and a consumer608. The producer604is a process that produces information that is consumed by a consumer606. The forwarding information base (FIB) (see710) produces routes and next hops. The next hop produced by the FIB has a dependency on the interface produced by an interface manager. FIG.7is a schematic diagram of a networking device702. The networking device702may be a switch106or a router162. The networking device702includes one or more of hardware704, an asynchronous object manager706, a data plane adaptation layer (DPAL)708, a forwarding information base (FIB)710, a routing information base (RIB)712, a configuration agent, and a Border Gateway Protocol (BGP)716. It should be appreciated that the networking device702may have additional components that are not illustrated herein. The software portions of the networking device702may be included in the software stack414shown inFIG.4. This software stack414works in conjunction with hardware to perform operations of the networking device702such as a switch or router. In an embodiment, the software portions of the networking device702are not located locally within the BGP instance but are instead offloaded to cloud storage. The software stack414may be offloaded to cloud storage and copied locally on the networking device702. In an embodiment, the asynchronous object manager706provides one or more application program interfaces (APIs) to track the life cycle of different control plane information (represented as objects in, for example,FIGS.2-5). The asynchronous object manager706further tracks dependencies between objects. The asynchronous object manager706is located in the lowest layer to help sequence and order information asynchronously flowing down from different producers. In an embodiment, the asynchronous object manager706implements a state machine for reordering objects. In an embodiment, if objects from different producers do not arrive in the required order, the state machine holds back the creation or updates of the dependent objects until the parent objects are created. Likewise, the state machine holds back deletion of an object until its dependents have been updated or deleted. The asynchronous object manager706organizes the objects using a data structure resembling a graph and can recursively ripple the effect of creating, updating, and/or deleting an object up or down the dependency graph. SeeFIGS.2-5. The asynchronous object manager706includes a framework comprising a declarative language. The declarative language expresses the dependencies between different kinds of objects. The declarative language can be used to define the schemas for different object types.
Further, the declarative language can be used along with attributes that make up the object's key and other attributes to identify the object's dependencies. In an embodiment, after the schemas are defined, a code generator can be run to generate code for the different objects. The generated code covers one or more of: programming language constructs to represent the objects, APIs to add or delete an object, auto-generation of code to extract keys and dependent keys, or object state observation APIs. Further, in an embodiment, the framework of the asynchronous object manager706enables a multi-threaded architecture at the lowest layer by allowing multiple threads to program the data plane. The work distribution can be done in numerous different ways. One method includes causing each thread to take care of data plane programming for one or more data plane devices (commonly ASIC chips). This method is straightforward because work from producers is evenly distributed across one or more worker threads. Further, each worker thread has an instance of the asynchronous object manager706framework to hold information from the producers and to program the assigned devices. Another method includes causing each thread to perform programming for a selected set of data plane tables on the same device. This method is more involved and requires mapping the worker threads to one or more feature objects. A distributor thread may be responsible for distributing the feature objects to the respective worker thread. Here, relationships between the objects may span across threads. The asynchronous object manager706may require extended capabilities to keep track of object dependencies across threads. Synchronization is achieved by message passing between threads. The asynchronous object manager706may ensure that an object is deleted only once and that there are no objects in the other threads that depend on that object. The forwarding information base (FIB)710may alternatively be referred to as a forwarding table. The FIB is configured to identify the proper output network interface to which an input interface should forward an object. The FIB is a dynamic table that maps media access control (MAC) addresses to ports. The routing information base (RIB)712may alternatively be referred to as a routing table. The RIB is a data table stored in the networking device702that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with the routes. The RIB712includes information about the topology of the network surrounding the networking device702. The construction of routing tables is the primary goal of routing protocols such as the Border Gateway Protocol (BGP)716. FIG.8is a schematic diagram of system800for communication between a node804of a device810and controller logic812stored in a cloud network828. The controller logic812may include logic for operations of the asynchronous object manager. The device810may be a switch106or a router162as discussed herein. The node804includes a configuration agent806and a monitoring/telemetry agent808. The device810includes a datastore802. The datastore802may be stored locally with the node804, may be stored in the device810and made accessible to multiple nodes, may be offloaded to cloud storage, or may be stored externally and made accessible to multiple devices. The configuration agent806receives instructions in the form of controller logic812.
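For illustration only, one possible shape for the declarative schema definitions described above is sketched below, in which each object type declares the attributes forming its key and the attributes identifying its parent dependencies (e.g., a route adopting the key of its NextHop via a nexthopID field). The dictionary format, type names, and helper function are assumptions for this sketch; the disclosed framework may use a different declarative language and a code generator to produce the equivalent constructs.

```python
# Illustrative sketch only: one possible shape for declarative object-type
# schemas with key attributes and dependency attributes. Names and format
# are assumptions for the example.
SCHEMAS = {
    "Interface": {
        "key": ["interfaceID"],
        "depends_on": {},                                # no parents
    },
    "NextHop": {
        "key": ["nexthopID"],
        "depends_on": {"interfaceID": "Interface"},      # produced by an interface manager
    },
    "Route": {
        "key": ["prefix"],
        "depends_on": {"nexthopID": "NextHop"},          # a route adopts its next hop's key
    },
}

def dependency_keys(object_type: str, attributes: dict) -> dict:
    """Extract the parent keys an object instance depends on, per its schema."""
    schema = SCHEMAS[object_type]
    return {parent_type: attributes[attr]
            for attr, parent_type in schema["depends_on"].items()}

route = {"prefix": "10.1.0.0/16", "nexthopID": "NH7"}
print(dependency_keys("Route", route))  # {'NextHop': 'NH7'}
```

A code generator of the kind described above could consume definitions like these to emit the object constructs, add/delete APIs, and key-extraction code automatically.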
Returning toFIG.8, the controller logic812is stored in cloud-based storage on a cloud network and is made accessible to the device810over a network. The configuration agent806provides instructions to the monitoring/telemetry agent808. The monitoring/telemetry agent808receives information from the datastore802. The datastore802may include information for multiple applications such as application 1, application 2, and up through application N as shown. FIG.9is a schematic diagram of an architecture900of a multiple node datastore. The example architecture900includes two nodes shown as discrete blocks Node1 and NodeN. It should be appreciated that there may be any number of nodes suitable to different embodiments and implementations of the disclosure. Node1 includes a duplicator agent910in communication with the internal fabric926that provides communication with the replicator agent904of NodeN. The configuration agent906of Node1 is in communication with the cloud928network, which includes the controller logic912for the datastore902. Each of the nodes includes a copy of the datastore902. The information in the datastore902may be accessed and used by multiple applications such as application1, application2, and up through applicationN. The monitoring and telemetry agent908of NodeN is in communication with the datastore902and the cloud928network. The configuration agent906of Node1 can be in communication with the monitoring and telemetry agent908of NodeN by way of the cloud928network. FIG.10is a schematic block diagram of a method1000for asynchronously receiving and reordering data to be transmitted with a networking device. The method1000can be performed by an asynchronous object manager706as discussed herein or any suitable computing device. The method1000begins and a computing device asynchronously receives at1002a plurality of objects from one or more producers. The objects may include, for example, routes, next hops, equal-cost multipath groups, interfaces, ACLs, ACEs, QoS classes, sub-interfaces, virtual local area networks, and so forth. The method1000continues and a computing device identifies at1004one or more dependencies between two or more of the plurality of objects. The method1000continues and a computing device reorders at1006the plurality of objects according to the one or more dependencies. The method1000continues and a computing device determines at1008whether the one or more dependencies are resolved. The method1000includes, in response to determining the one or more dependencies are resolved, calling back at1010an application and providing one or more of the plurality of objects to the application. FIG.11is a schematic block diagram of a method1100for improving operations of a networking device. The method1100can be performed by an asynchronous object manager706as discussed herein or any suitable computing device. The method1100begins and a computing device stores at1102a state for a plurality of routes known to a networking device. The method1100includes receiving at1104an indication that a first route is offline. The method1100continues and a computing device identifies at1106a first interface link associated with the first route. The method1100continues and a computing device identifies at1108a replacement interface link to be associated with the first route. This identification may be performed by performing the Border Gateway Protocol (BGP) algorithm for determining a best path between two locations.
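As a hedged illustration of steps1102-1108 of method1100, the sketch below stores candidate interface links per route and selects a replacement when the current link goes offline. The data structures, the metric-based ranking rule, and the names used are assumptions for this example; in the embodiments above the replacement would be chosen by a BGP best-path computation.

```python
# Illustrative sketch only: the route-failover flow of steps 1102-1108 in
# miniature. Data and ranking rule are made-up example values.
route_state = {
    # prefix: list of (interface link, metric); lower metric preferred
    "10.1.0.0/16": [("eth0", 5), ("eth1", 20)],
    "10.2.0.0/16": [("eth1", 10), ("eth2", 15)],
}
current_link = {"10.1.0.0/16": "eth0", "10.2.0.0/16": "eth1"}

def replacement_link(prefix: str, offline_link: str) -> str:
    """Return the lowest-metric candidate link that is not the offline link."""
    candidates = [(link, metric) for link, metric in route_state[prefix]
                  if link != offline_link]
    if not candidates:
        raise RuntimeError(f"no replacement link available for {prefix}")
    return min(candidates, key=lambda pair: pair[1])[0]

# Step 1104: an indication arrives that the link for 10.1.0.0/16 is offline.
offline = current_link["10.1.0.0/16"]                 # step 1106: first interface link
new_link = replacement_link("10.1.0.0/16", offline)   # step 1108: replacement link
print(f"reroute 10.1.0.0/16 from {offline} to {new_link}")  # eth0 -> eth1
```

The replacement link chosen in this way would then be communicated to the routing chip hardware, as described next.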
The method1100continues and a computing device provides at1110an indication to routing chip hardware of the networking device that the first route should be processed with the replacement interphase link rather than the first interphase link. Referring now toFIG.12, a block diagram of an example computing device1200is illustrated. Computing device1200may be used to perform various procedures, such as those discussed herein. In one embodiment, the computing device1200can function to perform the functions of the asynchronous object manager and can execute one or more application programs. Computing device1200can be any of a wide variety of computing devices, such as a desktop computer, in-dash computer, vehicle control system, a notebook computer, a server computer, a handheld computer, tablet computer and the like. Computing device1200includes one or more processor(s)1202, one or more memory device(s)1204, one or more interface(s)1206, one or more mass storage device(s)1208, one or more Input/output (I/O) device(s)1202, and a display device1230all of which are coupled to a bus1212. Processor(s)1202include one or more processors or controllers that execute instructions stored in memory device(s)1204and/or mass storage device(s)1208. Processor(s)1202may also include various types of computer-readable media, such as cache memory. Memory device(s)1204include various computer-readable media, such as volatile memory (e.g., random access memory (RAM)1214) and/or nonvolatile memory (e.g., read-only memory (ROM)1216). Memory device(s)1204may also include rewritable ROM, such as Flash memory. Mass storage device(s)1208include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown inFIG.12, a particular mass storage device is a hard disk drive1224. Various drives may also be included in mass storage device(s)1208to enable reading from and/or writing to the various computer readable media. Mass storage device(s)1208include removable media1226and/or non-removable media. Input/output (I/O) device(s)1202include various devices that allow data and/or other information to be input to or retrieved from computing device1200. Example I/O device(s)1202include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, and the like. Display device1230includes any type of device capable of displaying information to one or more users of computing device1200. Examples of display device1230include a monitor, display terminal, video projection device, and the like. Interface(s)1206include various interfaces that allow computing device1200to interact with other systems, devices, or computing environments. Example interface(s)1206may include any number of different network interfaces1220, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface1218and peripheral device interface1222. The interface(s)1206may also include one or more user interface elements1218. The interface(s)1206may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, or any suitable user interface now known to those of ordinary skill in the field, or later discovered), keyboards, and the like. 
Bus1212allows processor(s)1202, memory device(s)1204, interface(s)1206, mass storage device(s)1208, and I/O device(s)1202to communicate with one another, as well as other devices or components coupled to bus1212. Bus1212represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE bus, USB bus, and so forth. For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device1200and are executed by processor(s)1202. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure. Further, although specific implementations of the disclosure have been described and illustrated, the disclosure is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the disclosure is to be defined by the claims appended hereto, if any, any future claims submitted here and in different applications, and their equivalents. EXAMPLES The following examples pertain to further embodiments. Example 1 is a system. The system includes routing chip hardware and an asynchronous object manager in communication with the routing chip hardware. The asynchronous object manager is configurable to execute instructions stored in non-transitory computer readable storage media. The instructions include asynchronously receiving a plurality of objects from one or more producers. The instructions include identifying one or more dependencies between two or more of the plurality of objects. The instructions include reordering the plurality of objects according to the one or more dependencies. The instructions include determining whether the one or more dependencies is resolved. The instructions include, in response to determining the one or more dependencies is resolved, calling back an application and providing one or more of the plurality of objects to the application. Example 2 is a system as in Example 1, wherein the asynchronous object manager comprises a state machine. Example 3 is a system as in any of Examples 1-2, wherein the asynchronous object manager is the bottom most layer of a software stack for managing operations of a networking device. Example 4 is a system as in any of Examples 1-3, wherein the one or more producers comprise one or more of an application, a process, a thread, or a function. Example 5 is a system as in any of Examples 1-4, wherein the instructions further comprise providing a message to the routing chip hardware indicating that a first route needs to be processed through a first interphase link. 
Example 6 is a system as in any of Examples 1-5, further comprising a Data Plan Adaptation Layer (DPAL) in communication with the asynchronous object manager and the routing chip hardware, and wherein the instructions for the asynchronous object manager further comprise: receiving a message from the DPAL to create a route for a message to be transmitted from a first location to a final destination; creating the route for the message; and providing the route to the DPAL. Example 7 is a system as in any of Examples 1-6, wherein the instructions further comprise: storing a state for a plurality of routes known to the asynchronous object manager; receiving an indication that a first route is offline; identifying a first interphase link associated with the first route; identifying a replacement interphase link to be associated with the first route; and providing an indication to the routing chip hardware that the first route should be processed with the replacement interphase link rather than the first interphase link. Example 8 is a system as in any of Examples 1-7, wherein the asynchronous object manager provides a means for a first producer to provide a message to the asynchronous object manager in lieu of providing a message directly to a second producer of a next hop. Example 9 is a system as in any of Examples 1-8, wherein the asynchronous object manager is compatible for operating on a switch or a router. Example 10 is a system as in any of Examples 1-9, wherein the instructions further comprise: receiving a deletion sequence for the plurality of objects from the one or more producers; and reordering the deletion sequence according to the one or more dependencies. Example 11 is one or more processors configurable to execute instructions stored in non-transitory computer readable storage media. The instructions include asynchronously receiving a plurality of objects from one or more producers. The instructions include identifying one or more dependencies between two or more of the plurality of objects. The instructions include reordering the plurality of objects according to the one or more dependencies. The instructions include determining whether the one or more dependencies is resolved. The instructions include, in response to determining the one or more dependencies is resolved, calling back an application and providing one or more of the plurality of objects to the application. Example 12 is one or more processors as in Example 11, wherein the instructions further comprise providing a message to the routing chip hardware indicating that a first route needs to be processed through a first interphase link. Example 13 is one or more processors as in any of Examples 11-12, wherein the instructions further comprise: storing a state for a plurality of routes known to the asynchronous object manager; receiving an indication that a first route is offline; identifying a first interphase link associated with the first route; identifying a replacement interphase link to be associated with the first route; and providing an indication to the routing chip hardware that the first route should be processed with the replacement interphase link rather than the first interphase link. Example 14 is one or more processors as in any of Examples 11-13, wherein the instructions further comprise providing a means for a first producer to provide a message to the asynchronous object manager in lieu of providing a message directly to a second producer of a next hop. 
Example 15 is one or more processors as in any of Examples 11-14, wherein the instructions for the one or more processors are compatible for operating on a switch or a router. Example 16 is a method. The method includes asynchronously receiving a plurality of objects from one or more producers. The method includes identifying one or more dependencies between two or more of the plurality of objects. The method includes reordering the plurality of objects according to the one or more dependencies. The method includes determining whether the one or more dependencies is resolved. The method includes, in response to determining the one or more dependencies is resolved, calling back an application and providing one or more of the plurality of objects to the application. Example 17 is a method as in Example 16, further comprising providing a message to the routing chip hardware indicating that a first route needs to be processed through a first interphase link. Example 18 is a method as in any of Examples 16-17, wherein the asynchronous object matter comprises a state machine and is located at the bottom most layer of a software stack for managing operations of a networking device. Example 19 is a method as in any of Examples 16-18, further comprising: storing a state for a plurality of routes known to the asynchronous object manager; receiving an indication that a first route is offline; identifying a first interphase link associated with the first route; identifying a replacement interphase link to be associated with the first route; and providing an indication to the routing chip hardware that the first route should be processed with the replacement interphase link rather than the first interphase link. Example 20 is a method as in any of Examples 16-19, further comprising: receiving a deletion sequence for the plurality of objects from the one or more producers; and reordering the deletion sequence according to the one or more dependencies. It is to be understood that any features of the above-described arrangements, examples, and embodiments may be combined in a single embodiment comprising a combination of features taken from any of the disclosed arrangements, examples, and embodiments. It will be appreciated that various features disclosed herein provide significant advantages and advancements in the art. The following claims are exemplary of some of those features. In the foregoing Detailed Description of the Disclosure, various features of the disclosure are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. It is to be understood that the above-described arrangements are only illustrative of the application of the principles of the disclosure. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the spirit and scope of the disclosure and the appended claims are intended to cover such modifications and arrangements. 
Thus, while the disclosure has been shown in the drawings and described above with particularity and detail, it will be apparent to those of ordinary skill in the art that numerous modifications, including, but not limited to, variations in size, materials, shape, form, function and manner of operation, assembly and use may be made without departing from the principles and concepts set forth herein. Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure. Further, although specific implementations of the disclosure have been described and illustrated, the disclosure is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the disclosure is to be defined by the claims appended hereto, any future claims submitted here and in different applications, and their equivalents.
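The dependency-aware reordering and callback behavior recited in method1000and the examples above can be illustrated with a minimal sketch. This is an illustrative approximation rather than the patented implementation; the class name, method names, and the shape of the submitted objects are all hypothetical.

// Minimal sketch (not the patented implementation) of dependency-aware
// ordering with completion callbacks, in the spirit of method 1000 above.
// All identifiers (AsyncObjectManager, programToHardware, key fields) are
// hypothetical.
class AsyncObjectManager {
  constructor(programToHardware) {
    this.programToHardware = programToHardware; // programs one object into the data plane
    this.resolved = new Set();                  // keys already programmed
    this.waiting = new Map();                   // dependency key -> entries waiting on it
  }

  // Objects arrive asynchronously from producers, possibly out of order.
  submit(obj, onResolved) {
    const missing = obj.dependencyKeys.filter((k) => !this.resolved.has(k));
    if (missing.length === 0) {
      this.programNow(obj, onResolved);
      return;
    }
    // Hold the object until every dependency has been programmed.
    const entry = { obj, onResolved, missing: new Set(missing) };
    for (const key of missing) {
      if (!this.waiting.has(key)) this.waiting.set(key, []);
      this.waiting.get(key).push(entry);
    }
  }

  programNow(obj, onResolved) {
    this.programToHardware(obj);      // e.g., write a route or next hop to the ASIC
    this.resolved.add(obj.key);
    if (onResolved) onResolved(obj);  // call back the producing application
    // Release any objects that were blocked on this key.
    for (const entry of this.waiting.get(obj.key) ?? []) {
      entry.missing.delete(obj.key);
      if (entry.missing.size === 0) this.programNow(entry.obj, entry.onResolved);
    }
    this.waiting.delete(obj.key);
  }
}

// Example: a route that depends on a next hop can be submitted first; it is
// programmed only after the next hop it references arrives.
const mgr = new AsyncObjectManager((o) => console.log("programmed", o.key));
mgr.submit({ key: "route:10.0.0.0/8", dependencyKeys: ["nexthop:A"] }, () => {});
mgr.submit({ key: "nexthop:A", dependencyKeys: [] }, () => {});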
53,838
11861420
DESCRIPTION OF EMBODIMENTS A method and apparatus for concurrency control in an asynchronous event-loop based program environment is described. The flow of events into a program implemented with an asynchronous event-loop is controlled, and/or the flow of outgoing messages from the program is controlled. For example, the program may be a piece of JavaScript and may be implemented in an isolated execution environment such as an isolate of the V8 JavaScript engine. When the program is executing a storage operation, no events are delivered to the program except for storage completion events. Any other event is deferred until the program is no longer executing code and the program is not waiting for a storage operation to complete. To control outgoing messages from the program, when a storage write operation is in progress, any new outgoing network messages are prevented from being sent until the write operation has completed (e.g., confirmed to be written to disk). If the write operation fails, the outgoing network messages are discarded and replaced with errors. An input event may be an incoming request (e.g., an HTTP/S request), a response (e.g., an incoming HTTP/S response received from a previous outgoing request), an internal event such as a scheduled job, a timer event (e.g., a JavaScript timer event such as setTimeout( ) or setInterval( )), a cache API operation event, a key value store read/write event, a TCP I/O event or other network event, a keyboard input event, a mouse input event, etc. For instance, consider a program that initiates a read operation from storage and an HTTP request to a remote server, and the HTTP response is received before the storage read completes. The HTTP response is prevented from being delivered to the object worker until the read result is delivered first. If the result of the read operation initiates another read operation, the HTTP response remains blocked until the second read completes, and so on. The HTTP response is delivered to the object worker only once that object worker has no storage operations (e.g., storage requests or storage writes) in-flight and it is not executing code in response to another storage operation event. Thus, the input event may be controlled so that an asynchronous storage operation can be performed without inadvertently allowing a concurrent operation on the single-threaded event loop to run in the meantime that may change the program state in unexpected ways. Controlling outgoing messages allows the program to continue executing concurrently with a storage write without running the risk of data loss after confirmation (by preventing other parties from being falsely informed that the data was stored). To the program, it appears as if the write operation finishes relatively instantly even though the actual write operation may not be completed (or may never complete) and the object worker can continue to execute code. However, outgoing network messages are prevented from being sent until the write operation is complete. Thus, the program can assume the storage write operation succeeded and continue executing the code. If the storage operation fails, then no outgoing message is delivered and an error message is delivered in its place. Thus, in the rare event that a write operation fails, a premature confirmation of a successful write operation is not received by remote parties. This means that although the write is assumed to be confirmed, no other entity will receive that confirmation until the write is confirmed. 
In the meantime, the program can execute other code concurrently that it would otherwise have had to wait to run for the confirmation that the storage write completed. In an embodiment, an in-memory caching layer is used. The in-memory caching layer may cache data directly in memory in the process where the program runs. When a read operation requests a key that is in the cache, the operation returns the value from the cache. The value may be returned without context-switching out of the thread and isolate where the program is hosted. If the key is not in the cache, then a storage request is needed. A storage operation writes to the in-memory caching layer. The output control described herein prevents the premature confirmation of writes to any external entity. Write operations may be coalesced (even if they are ‘await’ed) such that the output control waits only for O(1) network round trips of latency, not O(n). In an embodiment, the code may be written to bypass the controlling of the events with specific syntax that indicates that the controlling of events will not occur. In an embodiment, data of the program is separated into one or more units referred herein as objects, where a single object is owned by a single instantiation of a piece of code that can read and/or modify the object while the single piece of code is executing. Other entities that wish to read and/or modify the object communicate with the single instantiation of the piece of code that owns the object. As referred herein, an object worker includes a combination of the single instantiation of a piece of code and the object that belongs to the single instantiation of the piece of code. Each instance of an object worker has its own private and persistent data that the object worker can read and/or modify and which no other object worker can directly access. Thus, the single instantiation of the piece of code solely controls reading and/or writing access to the object in which it controls. The piece of code can be, for example, a piece of JavaScript or other interpreted language, a WebAssembly (WASM) compiled piece of code, or other compiled code. In an embodiment, the piece of code is written against standard Web Platform APIs such as the W3C standard ServiceWorker API for receiving HTTP requests. For purposes of this description, each piece of code is referred to as an object worker script, and each single instantiation of the piece of code is referred to as an instantiated object worker script. The object of an object worker may be persistently located in storage (e.g., object storage). An object worker locks the data such that it is the sole owner of the data while it is being executed. Other entities that wish to interact with the data send messages to the object worker that owns the data. The object worker may be a program based on a single-threaded event loop. FIG.1illustrates an exemplary embodiment of a cloud computing platform that executes a system for concurrency control in an asynchronous event-loop based program environment in a distributed cloud computing network. The system100includes the client devices110A-L, the compute servers120A-N, the data store160, the origin server180, the control server185, and the third-party device190. Each client device is a computing device (e.g., laptop, workstation, smartphone, mobile phone, tablet, gaming system, set top box, wearable device, Internet of Things (IoT) device, etc.) that can transmit and/or receive network traffic. 
Each client device may execute a client network application such as a web browser, native application, or other application that can access network resources (e.g., web pages, images, word processing documents, PDF files, movie files, music files, or other computer files). The compute servers120A-N are part of the distributed cloud computing network105. The compute servers120A-N are geographically distributed (e.g., in different locations throughout the world). There may be hundreds or more compute servers120. Each compute server120may include one or more physical servers that are part of the same PoP. Although not illustrated inFIG.1, the compute servers120A-N may be part of PoPs that may include other physical servers (e.g., one or more compute servers, one or more control servers, one or more DNS servers (e.g., one or more authoritative name servers, one or more proxy DNS servers), and one or more other pieces of network equipment such as router(s), switch(es), and/or hub(s)). Each PoP (and each compute server) may be part of a different data center and/or colocation site. Although not illustrated inFIG.1, there are other physical devices between the compute servers120A-N such as routers, switches, etc. Each compute server may execute a program implemented with an asynchronous event-loop. An example of such a program is the object worker150. As described above, each object worker includes a combination of an instantiation of a piece of code and an object that belongs to the instantiation of the piece of code. Each instance of an object worker has its own private and persistent data that the object worker can read and/or modify and which no other object worker can directly access. The piece of code can be, for example, a piece of JavaScript or other interpreted language, a WebAssembly (WASM) compiled piece of code, or other compiled code. In an embodiment, the piece of code is written against standard Web Platform APIs such as compliant with the W3C standard ServiceWorker API for receiving HTTP requests. An object worker locks the data such that it is the sole owner of the data while it is being executed. Other entities that wish to interact with the data send messages to the object worker that owns the data. In an embodiment, each instantiated object worker script is run in an isolated execution environment, such as run in an isolate of the V8 JavaScript engine. For instance, in the example ofFIG.1, the object worker150may execute in an isolated execution environment, such as run in an isolate of the V8 JavaScript engine. The isolated execution environment can be run within a single process. In an embodiment, the instantiated object worker scripts are not executed using a virtual machine or a container. In an embodiment, a particular object worker script is loaded and executed on-demand (when and only if it is needed) at a particular compute server of the distributed cloud computing network. The origin server180, which may be owned or operated directly or indirectly by a customer of the cloud computing platform, is a computing device on which a network resource resides and/or originates (e.g., web pages, images, word processing documents, PDF files movie files, music files, or other computer files). In an embodiment, the origin server180is not required such that a compute server can respond to a request without querying an origin server. 
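To make the object worker script concept above concrete, the following is a minimal, hypothetical sketch of such a script: a class that owns its private object state and handles HTTP requests through a fetch handler. The class name, constructor signature, and storage API are illustrative assumptions loosely modeled on Workers-style runtimes (which provide Request, Response, and URL), not the actual script format required by the platform described here.

// Hypothetical object worker script sketch, for illustration only. The
// single instantiation of this class owns its private object state; other
// entities interact with that state by sending requests to it.
export class Counter {
  constructor(state) {
    this.storage = state.storage; // private, persistent storage owned by this instance
  }

  // All reads/writes of the object go through this one instance.
  async fetch(request) {
    let value = (await this.storage.get("counter")) ?? 0;
    if (new URL(request.url).pathname === "/increment") {
      value += 1;
      await this.storage.put("counter", value);
    }
    return new Response(JSON.stringify({ counter: value }), {
      headers: { "content-type": "application/json" },
    });
  }
}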
The control server185is operated by the cloud computing platform and provides a set of tools and interfaces for a customer to, among other things, configure object workers to be run in the cloud computing platform. The third-party device190is a computing device (e.g., laptop, workstation, smartphone, mobile phone, tablet, etc.) that is used by third parties such as a customer, among other things, interact with the control server185. For instance, the control server185may allow the customer to indicate how the data is to be split into one or more units. The customer can split the data into units that tend to be accessed by the same client or sets of clients. This allows the object to naturally migrate to near where the client(s) are accessing the data thereby providing fast, low-latency access. The following are examples of how the data can be split. If the customer is providing a collaborative document editing system, each document of the system may be a separate object. If the customer is providing an online gaming service, each game session may be a separate object. For an online email service, each user's mailbox may be a separate object. For a calendar service, each user's calendar may be a separate object. For a team chat product, each channel may be a separate object. The control server185may allow the customer to upload one or more object worker scripts and specify when the object worker script(s) are to be run. For instance, the customer may associate a rule that indicates when an object worker script is to be run. By way of example, the control server185may allow the customer to configure a URL matching pattern that indicates the URL(s) for which the object worker script is to run. The control server185may allow the customer to delete and update previously uploaded object worker script(s). In an embodiment, the control server185deploys each object worker script to each of the compute servers120A-N automatically (without the customer selecting which of the compute servers120A-N in which to deploy the object worker script). In another embodiment, the control server185allows the customer to indicate which of the compute servers120A-N are to be deployed to a particular worker script. The control server185creates an identifier for each unique object worker script. In an embodiment, the identifier is created by hashing the content of the object worker script (e.g., using a cryptographic hash function such as SHA-256), where two scripts with identical content will have the same identifier even if uploaded by different customers and even if applied to different zones. FIG.2illustrates a system for concurrency control in an asynchronous event-loop based program environment according to an embodiment.FIG.2shows an example of the system where the program is an object worker150. However, the program could be any type of program implemented with an asynchronous event-loop. The object worker150includes an object worker script instance (“worker instance”)165that is an instantiated object worker script and the object170. The object170is private and persistent data that only the worker instance165can read and/or modify and which no other object worker can directly access. Thus, the worker instance165controls reading and/or writing to the object170. Other entities that wish to interact with the data send messages to the object worker that owns the data. The object170may be persistently located in storage (e.g., object storage) remote to the compute server120A. 
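The content-derived script identifier mentioned above (hashing the object worker script so that identical content yields an identical identifier) can be illustrated with ordinary library calls. The snippet below is a plain Node.js sketch, not the control server's actual implementation.

// Illustrative only: derive a script identifier by hashing the script's
// content with SHA-256, so byte-identical scripts map to the same identifier
// regardless of which customer uploaded them or which zone they apply to.
const { createHash } = require("crypto");

function scriptIdentifier(scriptSource) {
  return createHash("sha256").update(scriptSource).digest("hex");
}

console.log(scriptIdentifier('export default { async fetch() { /* ... */ } }'));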
The object worker150is associated with the input gate205and the output gate210. The input gate205is a piece of code that controls the flow of events into a program, such as the object worker150. The input gate205may control input events so that an asynchronous storage operation can be performed without inadvertently allowing a concurrent operation on the single-threaded event loop to run in the meantime that may change the program state in unexpected ways. The input gate is different from a traditional file lock. For example, the input gate does not enforce mutual-exclusive access to a resource like a file lock would. An input event may be an incoming request (e.g., an HTTP/S request), a response (e.g., an incoming HTTP/S response received from a previous outgoing request), an internal event such as a scheduled job, a timer event (e.g., a JavaScript timer event such as setTimeout( ) or setInterval( ), a cache API operation event, a key value store read/write event, a TCP I/O event, or other network event, a keyboard input event, a mouse input event, etc. For instance, the event242is received at the input gate205. To control events into the object worker150, the input gate205determines whether to delay the delivery of events at operation244. For instance, the input gate205may prevent the delivery of events to the worker instance165when the worker instance165is executing a storage operation, except for storage completion events. Any other event is deferred until the worker instance165is no longer executing code and is not waiting for any storage operation to complete. The storage completion events do not block each other. Thus, the object worker may execute multiple storage operations executing concurrently. In an embodiment, each storage operation of the code executed by the worker instance165is registered with the input gate205. Thus, the storage operation(s) to be initiated by the worker instance165are registered with the input gate205at operation240. The input gate205is notified when the storage operations are complete. The input gate205tracks all pending storage operations in the storage operation state215. If there is a pending storage operation as indicated in the storage operation state215, the input gate205delays sending the event to the worker instance165. For instance, the input gate205queues the event in the event queue220. Thus, the input gate205tracks the pending storage operations and events that are waiting to be delivered to the object worker150. When a storage operation resolves, the event(s) that are queued (if any) associated with that storage operation are delivered to the object worker150. In an embodiment, the input gate205is notified of each storage operation completion. For instance, each storage operation may hold a reference to a lock object. While a lock exists, the input gate205does not deliver events to the object worker (e.g., requests, responses, etc.). When the storage operation completes, it stops referencing the lock object. When the lock object's last reference is destroyed, the input gate205is notified. Thus, the input gate205can control race conditions. As previously described, it is possible, even with single-threaded programs, to have certain race conditions.FIG.3illustrates a race condition that can be prevented with the input gate205. The code inFIG.3shows two requests that may be received at approximately the same time by an object worker (before the use of the input gate205). 
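FIG.3 itself is not reproduced here, but a counter function of the kind being discussed would plausibly look like the sketch below. This is an assumed reconstruction for illustration only; the storage API shown (this.storage.get/put) follows the calls quoted later in this description, and the actual code of FIG.3 may differ.

// Plausible reconstruction, for illustration only, of the kind of function
// discussed with respect to FIG. 3.
async function getUniqueNumber() {
  let val = (await this.storage.get("counter")) ?? 0; // read current counter (0 if unset)
  await this.storage.put("counter", val + 1);         // increment and persist
  return val;                                         // hand out the previous value
}

Without an input gate, two requests that both execute the get before either executes the put read the same counter value, which is exactly the interleaving described next.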
If each request calls the function 'getUniqueNumber( )', then the two calls may become interleaved. Each time one of the requests performs an 'await', execution may switch to the other call. An example of this is shown inFIG.3. At a time T1, request1begins executing the getUniqueNumber( ) function, and before it is finished, at a time T3, request2begins executing the getUniqueNumber( ) function. As shown inFIG.3, the call for request1calls 'get("counter")' at time T2and the call for request2calls 'get("counter")' at time T3before either of them calls 'put("counter", val+1)'. This means that both calls return the same value. However, use of the input gate can prevent concurrency for storage operations. As described above, while a storage operation is executing, no events are delivered to the object worker except for storage completion events. Any other event is deferred until such a time as the object worker is no longer executing code and is no longer waiting for any storage operations. An example of this is shown inFIG.4. The functions inFIG.4are the same as inFIG.3. As shown inFIG.4, at a time T3, request2is received but delivery of the request is blocked because request1is waiting for storage. When the 'get("counter")' returns for request1, the 'put("counter", val+1)' is called. The delivery of request2continues to be blocked because request1continues to wait for a storage operation (in this case, a "put" storage operation). The result is that these two calls return unique numbers as expected. The input gate does not preclude making multiple concurrent requests to storage. For instance, the following piece of code has a 'get( )' and 'put( )' storage operation executing concurrently.

let promise1 = this.storage.get("foo");
let promise2 = this.storage.put("bar", 123);
await promise1;
frob();
await promise2;

The 'get( )' and 'put( )' storage operations execute concurrently. Also, the call to 'frob( )' may execute before the 'put( )' has completed, but strictly after the 'get( )' completes because that promise is awaited. However, no other event, such as receiving a new request, can happen in the meantime. The input gate protects not just against concurrent incoming requests. For instance, the input gate protects against concurrent responses to outgoing requests. For example, the following piece of code launches two 'fetch( )' calls concurrently. After each returns, getUniqueNumber is invoked.

async function task1() {
  await fetch("https://example.com/api1");
  return await this.getUniqueNumber();
}
async function task2() {
  await fetch("https://example.com/api2");
  return await this.getUniqueNumber();
}
let promise1 = task1();
let promise2 = task2();
let val1 = await promise1;
let val2 = await promise2;

These two 'fetch( )' calls do not interfere with each other. The completion of a 'fetch( )' is an event subject to the control of the input gate. When the first of the two fetches returns, the function 'getUniqueNumber( )' is called which performs two storage operations. If the second 'fetch( )' also returns while these storage operations are outstanding, the return of the second 'fetch( )' will be deferred until after these storage operations are performed. FIGS.5and6are flow diagrams that illustrate exemplary operations for controlling the flow of events into a program according to an embodiment. The operations ofFIGS.5and6are described with reference to the exemplary embodiment ofFIG.2. 
However, the operations ofFIGS.5and6can be performed by different embodiments than that ofFIG.2, and the exemplary embodiment ofFIG.2can perform different operations than that ofFIGS.5and6. At operation510, the input gate205receives an event for a program that is implemented with an asynchronous event loop, such as the object worker150. The event may be an HTTP/S request originating from a client or another program, an incoming HTTP/S response received from a previous outgoing request, or an internal event triggered by an internal operation of the compute server. Next, at operation515, the input gate205determines whether the event is a storage completion event. If the event is a storage completion event, then flow moves to operation530where the input gate205delivers the event to the program. If the event is not a storage completion event, then flow moves to operation520. At operation520, the input gate205determines whether there is a pending storage operation. For instance, the input gate205accesses the storage operation state215to determine whether there is a pending storage operation. In an embodiment, each storage operation of the program is registered with the input gate205. If there is a pending storage operation, then flow moves to operation525where the event is queued in the event queue220. If there is not a pending storage operation, then flow moves to operation530where the input gate205delivers the event to the program for processing. The pending storage operations typically complete without error. At operation610, the input gate205determines that the pending storage operations have completed. In an embodiment, the input gate205is notified of each storage operation completion. For instance, each storage operation may hold a reference to a lock object. While a lock exists, the input gate205does not deliver events to the program (e.g., requests, responses, etc.). When the storage operation completes, it stops referencing the lock object. When the lock object's last reference is destroyed, the input gate205is notified. Next, at operation615, the input gate205delivers the queued event(s) to the program one at a time. By way of example, if the first event that is released from the queue begins a new storage operation, the input gate205will prevent any of the other events that were on the queue from being dequeued until that storage operation has completed. Although the input gate205was described with respect to storage operations, the input gate can be used to make any asynchronous operation appear as if it were a synchronous operation from the perspective of the program whose events are controlled by the input gate. Such asynchronous operation may include an outgoing fetch, an outbound network request, writing data to disk, etc. Referring back toFIG.2, the output gate210is a piece of code that controls the flow of messages (e.g., outgoing messages252) out of the program such as the object worker150. The output gate210is either part of the object worker150or is associated with the object worker150through which all outgoing messages must pass. An outgoing message may be any output including an HTTP/S request, an HTTP/S response, audio, video, etc. The outgoing message is destined to a destination external to the object worker150. The output gate210defers the transmission of any new outgoing network messages until a pending storage write operation has completed except for outgoing network messages that are storage write operations. 
If the write fails, the outgoing network messages are discarded and replaced with errors and the object worker150is shut down and restarted. This allows the object worker150to continue executing concurrently with a storage write without running the risk of data loss after confirmation (by preventing other parties from being falsely informed that the data was stored). To the object worker150, it appears as if the write operation finishes relatively instantly even though the actual write operation may not be completed (or even complete) and the object worker can continue to execute code. However, outgoing network messages are prevented from being sent until the write operation is complete. Thus, the object worker can assume the storage write operation succeeded and continue executing the code. If the storage operation fails, then no outgoing message is delivered and an error message is in place. Thus, in the rare event that a write operation fails, a premature confirmation of a successful write operation is not received by remote parties. This means that although the write is assumed to be confirmed, no other entity will receive that confirmation until the write is confirmed. In the meantime, the object worker can execute other code concurrently that it would otherwise have had to wait to run for the confirmation that the storage write completed. Thus, the output gate allows the application to continue execution in parallel with the write being synched to disk without the risk of prematurely confirming a failed write to remote parties. The output gate210is notified of pending write operations250and is notified of completed write operations251. For instance, the write operation may provide a promise that will resolve when the storage operation is complete. The output gate210tracks the state of the storage writes. In an embodiment, the worker instance165notifies the output gate210of each pending write operation. In another embodiment, the worker instance165batches a set of two or more pending write operations and notifies the output gate of the batch of writes. The output gate210queues outgoing messages254in the outgoing message queue230while a write operation is pending as indicated in the storage write state225. When the write operation has completed, then the queued message can be sent. The output gate210applies to outgoing requests that include responses (e.g., HTTP/S responses sent to a client) and/or outgoing requests (e.g., using a ‘fetch( )’ call). These outgoing requests are delayed from being sent until all writes are confirmed. In an embodiment, if a new write operation is received after an outgoing message is queued, the existing queued message(s) do not need to wait for the new write operation to complete before being transmitted. To say it another way, an outgoing message that is queued does not need to wait for any new write operations to complete. However, any new outgoing message that is received after a write operation is pending will be queued. In an embodiment, an in-memory caching layer is used. The in-memory caching layer may cache data directly in memory in the process where the object worker runs. When a read operation requests a key that is in the cache, the operation returns the value from the cache. The value may be returned without context-switching out of the thread and isolate where the object is hosted. If the key is not in the cache, then a storage request is needed. A storage operation writes to the in-memory caching layer. 
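The output gate behavior described above can be summarized in a short sketch: track the storage write state, queue outgoing messages while writes are pending, flush the queue when the writes are confirmed, and replace the queued messages with errors when a write fails. The class below is a simplified, hypothetical illustration; it does not model per-message tracking of which writes were pending when a message was queued, nor restarting the object worker on failure, and all names are assumptions.

// Simplified, hypothetical output gate sketch: outgoing messages are queued
// while storage writes are pending, flushed when the pending writes complete,
// and replaced with errors if any write fails.
class OutputGate {
  constructor(transmit) {
    this.transmit = transmit;  // actually sends a message on the network
    this.pendingWrites = 0;    // storage write state
    this.queue = [];           // outgoing message queue
    this.failed = false;
  }

  // The program notifies the gate before and after each storage write.
  writeStarted() { this.pendingWrites += 1; }

  writeFinished(ok) {
    this.pendingWrites -= 1;
    if (!ok) this.failed = true;
    if (this.pendingWrites === 0) this.drain();
  }

  // All outgoing messages pass through the gate.
  send(message) {
    if (this.pendingWrites > 0) {
      this.queue.push(message);  // hold until the pending writes are confirmed
    } else {
      this.transmit(message);
    }
  }

  drain() {
    const queued = this.queue;
    this.queue = [];
    for (const message of queued) {
      if (this.failed) {
        this.transmit({ error: "storage write failed" }); // replace with an error
      } else {
        this.transmit(message);
      }
    }
  }
}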
The output control described herein prevents the premature confirmation of writes to any external entity. Write operations may be coalesced (even if they are ‘await’ ed) such that the output control waits only for O(1) network round trips of latency, not O(n). FIGS.7and8are flow diagrams that illustrate exemplary operations for controlling the flow of messages out of a program according to an embodiment. The operations ofFIGS.7and8are described with reference to the exemplary embodiment ofFIG.2. However, the operations ofFIGS.7and8can be performed by different embodiments than that ofFIG.2, and the exemplary embodiment ofFIG.2can perform different operations than that ofFIGS.7and8. At operation710, the program detects a write operation. The write operation will cause the output gate210to lock any new outgoing messages until the write operation successfully completes. Thus, at operation715, the program notifies the output gate210of the pending write operation. The notification of the pending write operation causes the output gate210to delay the sending of any outgoing message received while the write operation is pending. At operation720, the program determines whether the write operation successfully completes. Most write operations successfully complete. If the write operation successfully completes, then operation725is performed where the program notifies the output gate210that the write operation is complete. In the rare event that the write operation does not complete successfully, then operation730is performed where the program notifies the output gate210that the write operation has failed. Then, at operation735, the program is restarted. At operation810, the output gate210receives an outgoing message from the program. The outgoing message can be an outgoing request (e.g., using a ‘fetch( )’ call) or a response that is for a client. Next, at operation812, the output gate210determines whether the outgoing message is a storage write operation. If the message is a storage write operation, then flow moves to operation825where the outgoing message is transmitted. If the outgoing message is not a storage write operation, then flow moves to operation815. At operation815, the output gate210determines whether there is a storage write in progress. The output gate210may access the storage write state225to determine whether there is a storage write in progress. For instance, the output gate210may receive a notification from the worker instance165regarding a pending write operation that is tracked in the storage write state225. If there is not a pending write operation in progress, then flow moves to operation825and the outgoing message is transmitted. If there is a pending write operation in progress, then flow moves to operation820. At operation820, the output gate210queues the outgoing message in the outgoing message queue230. Next, at operation830, the output gate210determines whether it has received a notification that write(s) in progress when the outgoing message was queued have completed. For instance, the output gate210may receive a notification from the program regarding the completion of a pending write operation that is tracked in the storage write state225. If the write(s) in progress when the outgoing message was queued have completed, then flow moves to operation835where those outgoing queued message(s) are sent. 
If those write(s) in progress have not completed, then flow moves to operation840where the output gate210determines whether it has received a notification that one of those write(s) has failed. If it has, then flow moves to operation845where all outgoing queued message(s) are discarded. If it has not, then flow moves back to operation830. In an embodiment, the code may be written to bypass the controlling of the events with specific syntax that indicates that the controlling of events will not occur. FIG.9illustrates a block diagram for an exemplary data processing system900that may be used in some embodiments. One or more such data processing systems900may be utilized to implement the embodiments and operations described with respect to the compute servers and/or client devices. Data processing system900includes a processing system920(e.g., one or more processors and connected system components such as multiple connected chips). The data processing system900is an electronic device that stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media910(e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals—such as carrier waves, infrared signals), which is coupled to the processing system920. For example, the depicted machine-readable storage media910may store program code930that, when executed by the processor(s)920, causes the data processing system900to execute the object worker150, and/or any of the operations described herein. The data processing system900also includes one or more network interfaces940(e.g., wired and/or wireless interfaces) that allow the data processing system900to transmit data and receive data from other computing devices, typically across one or more networks (e.g., Local Area Networks (LANs), the Internet, etc.). The data processing system900may also include one or more input or output ("I/O") components950such as a mouse, keypad, keyboard, a touch panel or a multi-touch input panel, camera, frame grabber, optical scanner, an audio input/output subsystem (which may include a microphone and/or a speaker), other known I/O devices or a combination of such I/O devices. Additional components, not shown, may also be part of the system900, and, in certain embodiments, fewer components than that shown inFIG.9may be utilized. One or more buses may be used to interconnect the various components shown inFIG.9. The techniques shown in the figures can be implemented using code and data stored and executed on one or more computing devices (e.g., client devices, servers, etc.). Such computing devices store and communicate (internally and/or with other computing devices over a network) code and data using machine-readable media, such as machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and machine-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals—such as carrier waves, infrared signals, digital signals, etc.). 
In addition, such computing devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices, user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given computing device typically stores code and/or data for execution on the set of one or more processors of that computing device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. In the preceding description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In the preceding description and the claims, the terms “coupled” and “connected,” along with their derivatives, may be used. These terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other. While the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
38,039
11861421
DETAILED DESCRIPTION This disclosure relates to techniques for communicatively coupling services and/or applications in a serverless computing environment. For example, a computing device can configure a pipe to integrate two services in an event-driven architecture. The pipe can represent a virtual communication channel for transmitting data between services and/or applications. To configure a pipe, a developer can provide input via a user interface indicating a source of an event, a target of the event, and filter information describing how to modify the event for sending to the target. The pipe may also be configured to enrich or transform an event to modify how a service processes an event, control timing of event transmissions using the pipe, define an event structure for an event, and/or batch events. By configuring a pipe as described herein, services and applications can be communicatively coupled independent of (i.e., without) requiring a human to provide computer-readable instructions to integrate the services and applications. Pipe(s) enable an application associated with the developer to connect various services provided by the serverless computing environment while controlling what type of data is generated, stored, or transmitted. An event-driven architecture can include services that send or receive an event which can be thought of as change in a resource state (e.g., change in a dataset). For example, the event can represent a notification (e.g., an item was placed in a shopping cart, an order was shipped, a new entry was added to a dataset, a new file was added to a directory, etc.) and/or a state (e.g., item information, price information, metadata, etc.). In various examples, a pipe can be configured to send, receive, or exchange events (or portions thereof) between an event source to an event destination. After configuring the pipe, events can be routed, filtered, transformed, batched, and/or buffered automatically and without receiving user provided code. In some examples, the pipe can facilitate the transmission of events that match a predetermined event structure. Using pipes to integrate or control event transmissions between services as described herein decouples the services thereby improving scaling and deployment of services over time. Generally, a pipe can represent functionality of a control plane and a data plane for transmitting an event synchronously or asynchronously between two entities of a serverless computing environment. The pipe can be thought of as a virtual communication channel for exchanging event data associated with an event. One end of the pipe is a source of an event and the other end of the pipe is a destination (or target/consumer) for the event. In some examples, the source or the destination can include an application, a service, or another pipe. By way of example and not limitation, multiple pipes in the serverless computing environment can transmit event data from a single source to multiple different destination services, and different portions of the event data can be sent to the different destination services depending upon how each pipe is configured. For instance, an event can represent a notification that an order has shipped, and a first pipe can facilitate sharing the event with a first destination service (e.g., an order service) and a second pipe can facilitate sharing the event with a second destination service (e.g., an analytics service). 
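A pipe of the kind described above can be thought of as a small declarative record binding an event source, an optional filter and transformation, and a target. The sketch below shows two hypothetical pipe definitions for the order-shipped example; the field names are illustrative assumptions, not a specific provider's API.

// Hypothetical pipe definitions, for illustration only: one event source
// fanned out to two destinations, each pipe carrying a different slice of
// the event data.
const orderShippedToOrders = {
  name: "order-shipped-to-order-service",
  source: "queue:order-events",             // where events are read from
  filter: { detailType: ["OrderShipped"] }, // only forward matching events
  transform: (event) => ({                  // shape the payload for the target
    orderId: event.detail.orderId,
    shippedAt: event.time,
  }),
  target: "service:order-service",
};

const orderShippedToAnalytics = {
  name: "order-shipped-to-analytics",
  source: "queue:order-events",
  filter: { detailType: ["OrderShipped"] },
  transform: (event) => ({ region: event.detail.region, sku: event.detail.sku }),
  target: "service:analytics-service",
};

Because each pipe forwards only the fields its destination needs, the order service and the analytics service remain decoupled from the producer and from each other.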
To configure a pipe, a user (e.g., a developer of an application or service) can provide input (e.g., via an API) to a computing device indicating an event source (e.g., a service, an application, or another pipe that is a source of an event) and an event destination (e.g., a service, an application, or another pipe that is a target of the event). The user can also specify a filter and/or a transformation to apply to the event such as a size of a payload or a type of data for transferring using the pipe. In this way, a subset of the event data associated with an event can be transmitted over the pipe. In some examples, the user can indicate a type of computing resource for processing event data by a source service or a target service of the pipe. For instance, the user may also or instead provide an input indicating that a service processes an event (or event data) using a specific API, kernel, compute service, or data format. In some examples, configuring the pipe can include a user providing an input indicating whether to associate the pipe with a queue or buffer. The user can, for example, interact with one or more input controls of a user interface output for display on a display device to indicate whether to buffer events associated with the pipe. In some examples, a buffer component of the serverless computing environment can implement a buffer, such as a queue (e.g., a first-in first-out queue), between the event source and the event destination to capture, store, or archive event data. In some examples, the queue can store event data associated with events to selectively pause, resume, or replay processing of event data through the pipe without losing event data (e.g., the event, if not transmitted or “pushed” for a period of time may be deleted). Replaying event data from the buffer can enable a developer to test functionality of an application (e.g., identify and fix bugs in code). Implementing the buffer can further enable a developer to pause operation of an application (to perform maintenance for example) while still capturing events that would otherwise be lost. In some examples, responsive to receiving an indication to resume sending of events using the pipe (e.g., maintenance is complete and the application is back online) events stored in the buffer can begin from a time that the pipe was paused, or from a current time. Additional details for associating the pipe with a buffer or queue are discussed throughout this disclosure including in relation toFIG.2. In various examples, the integration techniques described herein can include defining an event structure (also referred to herein as a schema) for an event and using the event structure to generate, store, filter, batch, buffer, or transform events associated with various services or applications in the serverless computing environment. For example, the event structure can define types of data to include as an event or computational resources to apply to the event. In some examples, the event structure can represent a data format that includes fields for storing values that represent characteristics of the event (e.g., an event name, a time, a version, an account, a source, a destination, metadata, a pipe name, a batch size, a batch interval, and/or a data format, just to name a few). The event structure can also identify how to manage events that differ from the event structure (e.g., forward to the buffer, modify a compute resource, and so on). 
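One non-limiting way to picture the event structure described above is as a small record of named fields together with a rule for handling non-conforming events. The following illustrative Python sketch assumes particular field names (e.g., required_fields, on_mismatch) purely for explanation; the disclosure does not require this representation.

from dataclasses import dataclass

@dataclass
class EventStructure:
    name: str
    version: int
    source: str
    destination: str
    pipe_name: str
    batch_size: int = 1
    batch_interval_ms: int = 0
    data_format: str = "json"
    required_fields: tuple = ("detail-type", "time", "account")
    on_mismatch: str = "forward-to-buffer"  # how to manage non-conforming events

    def matches(self, event: dict) -> bool:
        # An event conforms to the structure when every required field is present.
        return all(name in event for name in self.required_fields)

structure = EventStructure(
    name="order.shipped",
    version=2,
    source="order-stream",
    destination="analytics-service",
    pipe_name="orders-to-analytics",
)
print(structure.matches({"detail-type": "order.shipped", "time": "2023-01-01T00:00:00Z", "account": "42"}))  # True
print(structure.matches({"detail-type": "order.shipped"}))  # False -> handle per on_mismatch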
In some examples, the pipe can be configured to transmit events having a same event structure as the defined event structure and to refrain from transmitting events having a different event structure from the defined event structure over the pipe. In various examples, a database can store data associated with each event structure including storing multiple versions of the event structure as the event structure is updated over time. By defining an event structure for an event, only relevant event data is exchanged between a source and a target of the pipe thereby improving the quality or usefulness of events and enabling more efficient use of resources (e.g., network bandwidth). In some examples, defining the event structure can include receiving input from a computing device associated with a developer of an application or service. For instance, the developer can interact with one or more controls of a user interface to indicate a preference for archiving (or not archiving) events associated with an event source and/or an event destination of a pipe. In some examples, the developer can indicate a particular data format or API to generate, transmit, or otherwise process an event. The event structure can include fields that store values to indicate the input, or preferences, provided by the developer at a first time, and can be used by one or more components of the serverless computing environment to detect events representing a change in a field associated with the event structure. The event structure may also, or instead, indicate whether a version of the event structure is forward compatible with a future version or backward compatible with a previous version. In various examples, the event structure can represent a data file format (e.g., a JavaScript Object Notation (JSON) file, binary data, string data, etc.), a Java language, a data serialization language, a human-readable data serialization language, etc. Additional details for using an event structure in association with one or more pipes are discussed throughout this disclosure including in relation toFIG.2. In some examples, the serverless computing environment can implement a pipe component to configure a pipe with optional batching of events associated with an event source and/or an event destination. In some examples, a batching component can determine a batch size and/or a batch interval that maximizes throughput of events with a minimal number of transmission errors between the source and the destination of the pipe. For example, a machine learned model can receive historical batch size and batch interval data as well as previous transmission error data associated with a pipe as input, and determine an output indicating a batch size and/or a batch interval for efficient processing of events by both the source and the destination of the pipe. In other words, the machine learned model can determine batching characteristics with consideration to differences in a batch size or batch interval of the source and a batch size or batch interval of the destination. In various examples, batching can be responsive to verifying one or more batching rules associated with a pipe (stored as part of the event structure or otherwise associated with an event). In some examples, the pipe can expose a batch API to cause batching of events associated with the source or the destination in accordance with the batch size or the batch interval (output by the machine learned model or in association with one or more batching rules). 
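Merely for illustrative purposes, the following Python sketch shows one way a batching recommendation could be derived from historical observations; a trivial error-rate heuristic stands in for the machine learned model, and the thresholds, field names, and values are assumptions rather than part of the disclosure.

def recommend_batching(history, max_error_rate=0.01):
    """Return the largest historical batch size whose error rate stayed below
    max_error_rate, together with its interval."""
    acceptable = [
        h for h in history
        if h["errors"] / max(h["events_sent"], 1) <= max_error_rate
    ]
    if not acceptable:
        return {"batch_size": 1, "batch_interval_ms": 0}  # fall back to unbatched sends
    best = max(acceptable, key=lambda h: h["batch_size"])
    return {"batch_size": best["batch_size"],
            "batch_interval_ms": best["batch_interval_ms"]}

# Historical observations of (batch size, batch interval, transmission errors).
history = [
    {"batch_size": 10,  "batch_interval_ms": 100, "events_sent": 10_000, "errors": 3},
    {"batch_size": 100, "batch_interval_ms": 250, "events_sent": 10_000, "errors": 40},
    {"batch_size": 500, "batch_interval_ms": 500, "events_sent": 10_000, "errors": 900},
]
print(recommend_batching(history))  # {'batch_size': 100, 'batch_interval_ms': 250}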
As described herein, a source or a destination of a pipe can comprise a variety of different services and/or applications. For example, the source or the destination can represent a Software as a Service (SaaS), a payment service, an order service, a fulfillment service, a forecasting service, a security service, a database service, or a compute service, just to name a few. In various examples, a pipe component can generate a pipe to communicatively couple a source and a destination having different event processing capabilities (e.g., the source and the destination are associated with different APIs, data formats, etc.). The pipe can, for instance, determine an API, a programming language, or other processing information usable for the destination to process the event data (e.g., send a message acknowledging receipt of the event from the source). In some examples, the pipe can invoke an API to cause an event in a first data format (e.g., JSON, XML, etc.) or language (e.g., Python, Java, C #, Go, Ruby, etc.) to transform to a second data format or language. In such examples, the transformation associated with the pipe can be performed independent of a developer providing computer-readable instructions to the source (e.g., without the developer writing code that enables communication between the source and the destination). Using pipes to communicate events among services enables a developer to provide input on what functionality to include in a pipe without being required to send computer-readable instructions (e.g., integration code) to the source or the destination. An event-driven architecture can represent a cloud platform that provides or hosts various types of services (also referred to as backend service). For instance, backend services may include business-application services, financial-institution services, healthcare services, and so forth. Client devices often interact or access these backend services over a network, such as the Internet, using API calls that define an operation or interaction that the client device is requesting be performed. For example, an application or agent may be running locally on a client device, and upon receiving input indicating an interaction to be performed with the backend service, the local agent may send a REST API call over the Internet to an API gateway of the service provider network that is hosting the backend service. The API gateway generally routes the API call to business logic that invokes the API call and performs the requested operation on the backend service. Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. FIG.1illustrates a system-architecture diagram of an example environment100in which a service provider network102communicates with one or more client devices104and configures a pipe106to exchange data between services108of the service provider network102. For example, the pipe106can communicatively connect a first entity110and a second entity112in a serverless computing environment114. 
In some examples, a user116of the client device(s)104can interact with the service108of the service provider network102and/or provide input indicating desirable functionality for consideration when configuring the pipe106. In some examples, the service provider network102may comprise clusters of managed servers stored in data centers located across geographic areas. The service provider network102may be a distributed network through which users (often customers) may interact via the client device104to manage or otherwise interact with services108provided by the service provider network102. The service provider network102may be managed by a service provider, and may provide various types of services108, such as an on-demand computing service, a message-queuing service, a managed-database service, a software-execution service, application-hosting services, business-application services, financial-institution services, and/or other services. The services108may be a collection of computing resources configured to instantiate VM instances, containers, network functions, etc., and to provide other types of computing resources on demand. Other applications for the services108may be to support database applications, electronic commerce applications, business applications and/or other applications. The services108may represent a managed message queuing service that enables users to send, store, and receive messages between software components at any volume without losing messages or requiring that other services108be available. The services108described above, and any other services, may be provided in one particular implementation by one or more data centers operated by the service provider. As known to those skilled in the art, data centers are facilities utilized to house and operate computing resources, such as computer systems and associated components. Data centers also typically include redundant and backup power, communications, cooling, and security systems. The data centers might be located in geographically disparate regions, and might also be connected to various other facilities, such as co-location facilities, and various wide area networks (“WANs”), such as the Internet. The computing resources associated with the services108can be provisioned and de-provisioned as needed in an automated fashion. For example, the service provider network102might be configured to instantiate a new instance of a computing resource, such as a VM instance, in response to an increase in demand for a network service or other condition. Other types of computing resources might also be provisioned and de-provisioned in a similar manner. Services108in the service provider network102might also provide functionality for automatically scaling and/or de-scaling the computing resources based upon demand for the resources and/or other factors. In some examples, a user(s)116may interact with services108using the client device(s)104. Generally, the client device(s)104may be any type of computing device capable of connecting to the service provider network102via a suitable data communications network118such as, but not limited to, a laptop or desktop computer, a tablet computing device, a server computer, or a mobile telephone. Administrative users employed by the operator of the service provider network102, such as administrators managing the operation of the service provider network102, might also connect with, manage, and utilize resources provided by the service provider network102in a similar fashion. 
According to the techniques described herein, users116of the service provider network102may subscribe for an account with the service provider network102to utilize the computing infrastructure (e.g., computing resources in data centers) supporting the services108(e.g., memory, processing power, auto-scaling, networking and content delivery, etc.) provided for and managed by the service provider network102. The service provider operating the service provider network102may charge a fee for utilization of the computing resources to a subscriber that have computing resources provisioned to support and use the services108. Generally, the user(s)116may interact with the client device(s)104to receive or employ a service108. The users116may be one or more of individual users, groups of users, organizations, businesses, or other entities that interact with the service provider network102via respective client devices104. The client devices104may be any type of computing device capable of connecting to the service provider network102via a suitable data communications network118such as, but not limited to, a laptop or desktop computer, a tablet computing device, a server computer, or a mobile telephone. Additionally, the client devices104may have various components, algorithms, software, client applications, and so forth, to perform authentication methods with identity service providers. For instance, the client devices104may have a software client for communicating using various network protocols, including cryptographic network protocols, such as FTPS protocol STP protocol, Web Authentication (WebAuthn) protocol, Universal 2ndFactor (U2F) protocol, Universal Authentication Framework (UAF) protocol, and/or any other authentication protocol. To utilize the services108or to otherwise provide input to the service provider network102, a user116may access a local agent120running on the client device104, where the local agent120can represent software that is associated with the services108. The local agent120may also, or instead, represent a user interface having one or more controls (or input controls) for the user116to provide input usable by a pipe component122to generate, update, or otherwise manage the pipe106. For instance, the user116can be a developer that provides input to one or more controls of the local agent120to indicate a source of an event, a target of an event, and whether to filter, modify, archive, buffer, and/or batch an event(s) at the source or the target. In some examples, the pipe component122can generate or configure the pipe106based at least in part on receiving data from the client device104indicating preferences of the developer. In some examples, the serverless computing environment114can represent an event-driven architecture and the pipe106can represent a point to point integration of an event source and an event destination (or target). The pipe106can serve as a virtual communication channel, a control plane, and/or a data plane between the first entity110(e.g., a first service, first application, or first Software as a Service (SaaS) application) and the second entity112(e.g., a second service, second application, or second Software as a Service (SaaS) application). The first entity110can send an event to the second entity112, and the pipe106can apply one or more rules to filter, batch, queue, archive, or otherwise modify the event. 
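By way of example and not limitation, the developer input described above might yield configuration data resembling the following illustrative Python sketch, which a hypothetical create_pipe helper then uses to produce a pipe record. All names and values shown are assumptions made for illustration only and do not represent a required format.

# Illustrative sketch (names assumed): the kind of configuration data a
# developer's input might produce, which a pipe component could then use to
# create a pipe between an event source and an event destination.
configuration_data = {
    "pipe_name": "orders-to-fulfillment",
    "source": "order-service",             # first entity (event source)
    "destination": "fulfillment-service",  # second entity (event destination)
    "filter": {"detail-type": ["order.shipped"]},  # forward only these events
    "archive_events": True,
    "buffer": {"enabled": True, "type": "fifo-queue"},
    "batch": {"enabled": False},
}

def create_pipe(config):
    # Stand-in for a pipe component: validate the minimum required input
    # and return a record describing the configured pipe.
    for key in ("pipe_name", "source", "destination"):
        if key not in config:
            raise ValueError(f"missing required configuration field: {key}")
    return {"state": "CREATED", **config}

pipe = create_pipe(configuration_data)
print(pipe["state"], pipe["pipe_name"])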
In some embodiments, the pipe106need not include a filter and may instead facilitate passing, transmitting, copying, etc., the event (or data associated with the event) from the first entity110to the second entity112. In this embodiment, the event/data may not be modified when being passed through (or via) the pipe106. In some examples, the first entity110can be configured to send an event to the pipe106, and the pipe106can be configured to transmit the event to the second entity112without the first entity110having explicit knowledge of the second entity112(e.g., not storing an address or information associated with the destination). The pipe106can, for example, cause the event to be sent from the first entity110to the second entity112based on the pipe106being configured with the destination to automatically transfer the event. In some examples, the first entity110and/or the second entity112can push an event using the pipe106without the first entity110and/or the second entity112knowing an event source or an event destination. In some examples, the pipe106can pull an event from the first entity110(e.g., the source) based at least in part on an API exposed by the first entity110. For example, the pipe106can pull data from the first entity110without the first entity110being “aware of” the pipe106(e.g., without the first entity110receiving data indicating a pipe name for sending data). Thus, events can be pushed to and/or pulled from an entity based on how the pipe is configured and independent of a first end of the pipe106being explicitly aware of the pipe and/or the second end of the pipe (e.g., the entity may not store source and/or destination information to route the event but instead be instructed to send the event to the pipe). In various examples, the first entity110and/or the second entity112can be a third-party entity (e.g., a third-party service, a third-party application, a third-party SaaS, etc.). The pipe106can, in some examples, include a third-party entity as a source and/or a destination. Further, the pipe106can be associated with a third-party entity between the source and the destination in some examples (e.g., data can be exchanged between the source or the destination and another service or application invoked by an API associated with the event). In some examples, the client device104may send an API call over the network(s)118to an API gateway of the service provider network102. The API gateway may send the API call to the pipe component122for invocation to interact with the pipe106. In some instances, the pipe component122may access a database (not shown) that stores an event structure for the pipe106. For instance, the pipe component122may query the database to identify an event structure usable for generating or transmitting events in the pipe106. Generally, the database may store event structures for events associated with multiple pipes. Additional details for associating the pipe with an event structure can be found throughout this disclosure. By way of example and not limitation, a developer can determine an application architecture for providing an application over the Internet using various services available from a service provider network. The developer can be responsible for designing the application architecture, maintaining operation of the application, and directing how services (e.g., microservices in a serverless computing environment) communicate with one another. 
Because services and applications can be associated with different data formats, not all services and applications can communicate without the developer writing code, or computer-readable instructions to integrate or enable data transmissions among services. In particular, the developer can configure the application to make use of various services provided by the service provider network. Using the techniques described herein, the developer can provide input to a user interface to convey what type of data is generated, stored, or transmitted between the services. For example, one or more pipe(s) enable services or applications to exchange data that would otherwise require the developer to write code. In examples when the developer manages an application selling a product, the pipe can be configured to capture information related to the sale of the product as events (e.g., a sale price, a customer name, a shipping address, etc.). The developer can organize how events are transferred among the services by providing information suitable for the pipe component122to determine a pipe data plane for transmitting events with consideration to how each service can store, batch, buffer, filter, and/or process the event. The developer can, for example, specify implementing a data streaming service along with a batching service and an order fulfilment service and a pipe can be configured between two services to ensure that each respective service sends or receives changes to certain types of data. FIG.2illustrates a diagram200of an example service provider network implementing an example pipe component to configure a pipe. For example, the service provider network102can implement the pipe component122to configure the pipe106. As shown inFIG.2, the pipe component122comprises a filter component202, a data format component204, a buffer component206, a batch component208, and an event structure component210. Though depicted inFIG.2as separate components of the pipe component122, the functionality associated with the filter component202, the data format component204, the buffer component206, the batch component208, and/or the event structure component210can be included in a different component of the service provider network102or the pipe106. In some instances, the components described herein may comprise a pluggable component, such as a virtual machine, a container, a serverless function, etc., that is capable of being implemented in any service provider network102and/or in conjunction with any API gateway. Generally, the pipe component122can employ one or more of: the filter component202, the data format component204, the buffer component206, the batch component208, and the event structure component210to generate and/or update features of the pipe106. For instance, the pipe component122can initially configure a pipe or make updates to the pipe configuration using one or more of the aforementioned components. In various examples, the pipe component122can cause functionality associated with one or more of the filter component202, the data format component204, the buffer component206, the batch component208, and the event structure component210to be performed at the first entity110, the second entity112, or another service (e.g., a compute service, streaming service, and so on) between the first entity110(e.g., the source) and the second entity112(e.g., the destination). In some examples, the source and/or the destination of a pipe can include another pipe. 
Thus, the pipe106can act as a virtual communication channel for exchanging event data (or filtered portions thereof) between the source and the destination, as well as with other services invoked as part of the event (extensions of the pipe to other services to facilitate performing one or more of: filtering, modifying a data format, batching, buffering, archiving, or defining an event structure of an event). In various examples, the pipe component122can receive configuration data212from the client device(s)104usable to configure the pipe106. Configuring the pipe106can include, for instance, generating a new pipe or updating a previously generated pipe. The pipe106can include characteristics based at least in part on information associated with the configuration data212(e.g., information associated with a filter, a data format, a buffer, a batch, and so on). In some examples, the user116can interact with the local agent120to indicate a source and a destination (target) of an event in the serverless computing environment114. The client device(s)104can determine the configuration data212based at least in part on input from the user116identifying the source of the event for sending via the pipe106and a destination for receiving the event, and optionally, one or more of: filter information, data format information, buffer information, batch information, and event characteristics. In various examples, the first entity110ofFIG.1can represent an event source and the second entity112can represent an event destination. In some examples, the configuration data212can include a key for encrypting events and/or a resource name or API of an encryption service usable to encrypt the events. A customer managed key can be provided by the user116, for example, as part of the configuration data to associate or otherwise cause a preferred encryption for events using the pipe106. In various examples, the pipe component122can define the pipe106to include one or more characteristics. For example, based at least in part on the configuration data212, the pipe component122can define the pipe106to include values that represent one or more of: a pipe name, a pipe description, a pipe state (e.g., initialized, created, updated, etc.), a source name, source characteristics, a destination name, destination characteristics, a role identifier, role parameters, one or more tags, filter information, transformation information, buffering information, batching information, and so on. The source characteristics or the destination characteristics can represent parameters associated with a preferred batch size, a maximum batch size, a data format, a buffer preference, or some other parameter. The filter information, the transformation information, the buffering information, and the batching information can identify a type of filtering, transforming, etc. to perform in relation to the pipe (if configured to perform such functionality). The role identifier can, for example, represent permissions associated with the pipe such as credentials usable by the pipe to access data from a source, a destination, and/or an intermediary service (e.g., a transformation service, a compute service, a filter service, etc.). The role parameters can represent a credential, a security key, or other data usable to determine a level of permission for exchanging data over the pipe (e.g., across different entities of the pipe). Tags, or a tag list, can represent a text string that defines a label, a caption, or other information. 
Generally, defining the pipe106can include associating the one or more characteristics in a data structure that can be updated to change the definition of the pipe over time. Merely for illustrative purposes, the request to create a new pipe and the associated configuration information could be formatted in accordance with the following:

CreatePipeRequest {
    Name: PipeName,
    Description: PipeDescription,
    InitialState: PipeInitialState,
    SourceID: SourceName,
    SourceParameters: PipeSourceParameters,
    EnrichmentID: EnrichmentComponentName,
    EnrichmentParameters: PipeEnrichmentParameters,
    DestinationID: DestinationName,
    DestinationParameters: PipeDestinationParameters,
    RoleID: RoleName,
    Tags: TagList,
}

In some examples, an entity (e.g., a service, an application, a SaaS, etc.) that is included in the service provider network102can be associated with an identifier that uniquely identifies the entity. For example, the first entity110, the second entity112, and the pipe106(e.g., the source name, the destination name, the pipe name, etc.) may comprise different identifiers to be uniquely identified in the service provider network102. Different identifiers can include different formats and some formats can include entity specific information. A data format of an identifier can have, for example, different portions to represent information such as one or more of: a partition (e.g., a location of the entity), a service (e.g., a service nameplate), a region (e.g., a region code), an account id (e.g., an identifier of an account associated with the service provider network), a resource id (e.g., a name or path of an entity), and so on. In one example, the identifier can include a format “entity name: first portion: second portion: third portion,” though another number of portions can also be used. By assigning the various identifiers, the pipe106can compose and/or exchange data in predefined chunks associated with a customer account. Generally, the filter component202can represent functionality to apply a filter to event data associated with an event to determine filtered event data. The filter component202can determine the filtered event data based at least in part on one or more filters that refrain from sending an amount of data associated with the event over the pipe. For instance, the configuration data212can include one or more filters to apply to the event such as a payload filter that causes the event data to adhere to a specific payload size for transferring using the pipe. An event can include a variety of data as specified in an event structure determined by the event structure component210. The event structure can include different fields to define an event, and the filter component202can apply a filter to trim, determine, or otherwise select portions of the event data to include or exclude when transmitting the filtered event data. In this way, less than all the data associated with an event can be sent to the event destination via the pipe106thereby saving computational resources and/or network resources to send the filtered event data (relative to not filtering the event data). The data format component204can represent functionality to determine a data format (e.g., a programming language, an API, a kernel, and the like) usable by the event source or the event destination to process, send, or receive an event. 
For instance, the data format component204can, for example, determine the data format for the event based at least in part on the configuration data212indicating a compute service or program interface to process event data at the event destination. In various examples, the user116can indicate for the pipe106to send the event to a compute service which can transform the event from a first data format to a second data format determined by the compute service. By implementing the data format component204, the pipe106can be configured to cause an event generated at the source to transform from an initial data format to another data format usable by the event destination (e.g., to reduce an amount of data associated with the event). In some examples, the data format component204can receive an indication to transform the event data from a first data format to a second data format (e.g., an event source can generate an event for a pipe using a type of data format that is different from a type of data format available to the event destination). The indication to transform the event data can, for example, be received from a computing device such as the client device104or from a component of the service provider network102(e.g., the pipe component122or a component thereof). In various examples, a user (e.g., a developer or administrator) can provide data format information as part of the configuration data212that indicates a compute service or program interface (e.g., an API, a programming language, a kernel, or other computing resource) for the event destination to process the event (e.g., acknowledge receipt of the event sent from the source). Thus, the configuration data212can cause the data format component204to identify, detect, or otherwise determine a data format usable by the event source or the event destination to transmit (e.g., send or receive) event data associated with an event. The data format component204can, in some examples, determine a data format usable with the pipe106(e.g., a data format usable by the first entity110or the second entity112) based on identifying that the source and the destination operate using different types of data formats. For example, the data format component204can determine a difference between a first API associated with the event source and a second API associated with the event destination, and determine a representational state transfer (REST) API for the event destination to process the event data associated with the event. In some examples, the data format component204can determine that a first data format associated with event data or an event source is incompatible with a second data format associated with the event destination. In such examples, the data format component204can modify, transform, or convert at least a portion of the event data from the first data format to the second data format. For instance, the data format component204can identify the second data format based at least in part on the configuration data212or by using a function to access conversion data from a database or component that maps services to usable data formats (e.g., the database312ofFIG.3). The data format component204can also, or instead, be implemented to apply a template language to the event to modify event data associated with an event. For example, the data format component204can use one or more different types of templating languages to change a format of the event data to a desired template format. 
In various examples, the template language can be determined based at least in part on the configuration data212received from the client device(s)104. In some examples, the user116can determine a customer template for formatting an event, and provide an indication to use the custom template for events transmitted using the pipe. For example, the user116can cause the pipe component122to configure the pipe106to include templating functionality based at least in part on selecting a source, a destination, and a template language or template service. In some examples, the template service can include a resource name, and the pipe can use the resource name to route an event from an event source to the template service. By using templating languages, specific event data can be transformed and sent to the event destination using the pipe. Generally, the buffer component206represents functionality to store event data associated with the first entity110and/or the second entity112. For example, the pipe106can be associated with or otherwise include a buffer to capture events that the first entity110is unable to send to the second entity112due to a time period elapsing, multiple unsuccessful attempts to send the event, an incompatibility between the source and the destination (e.g., different processor interfaces, etc.), or some other reason. In some examples, the buffer can represent a first-in first-out queue such as a dead letter queue. In various examples, the buffer component206can store events for a time period until the events can be transmitted to the event destination. For example, the buffer component206can store events due to an error in a communication between the first entity110and the second entity112, an operational error of the first entity110or the second entity112, or a request from a user (e.g., the user116pauses an application or service to perform maintenance, etc.). The buffer component206can be configured to transmit the events to the event destination in various ways. For example, the events can be transmitted responsive to the event destination being able to receive data (e.g., the event destination sent a response to the buffer component206to send the event data), responsive to operation being restored to the first entity110or the second entity112, or responsive to another request from the user (e.g., the user resumes functionality of the service or application). Storing event data or sending stored event data to the event destination may also, or instead, be based at least in part on receiving an input via a control of a user interface to pause event transmissions associated with the pipe106and/or receiving an input to resume event transmissions based at least in part on receiving another input via the control of the user interface. Further, resuming sending of stored event data can begin from a current time or from a previous time at which the storage began. Thus, the buffer component206can be implemented to pause, resume, or replay events thereby providing a subscriber of the service provider network102flexibility to check events for errors in code or other issues affecting performance of the service from which the event generated. In some examples, the buffer component206can determine that the first entity110and/or the second entity112is unable to send and/or receive event data associated with an event and store event data associated with the event based at least in part on the determination. 
For instance, the buffer component206can store the event based at least in part on determining that a first queue associated with the first entity110is full or rejected the event data, a second queue associated with the second entity112is full or rejected the event data, and/or the first entity110and/or the second entity112is not associated with the queue. In some examples, the buffer component206can determine that a time period for sending event data from a source to a destination has expired without the event data being sent, and automatically store the event data in the buffer based at least in part on determining that the time period for sending the event data expired. In some examples, the user116can provide a retention period for storing the event data as well as whether to archive events before and/or after a particular service or application (e.g., before delivery to a compute service, after the compute service but before delivery to the destination, etc.). The user116can also indicate, for inclusion in the configuration data212, a maximum event age (without being sent) or a maximum number of delivery attempts, and buffering of the event data can be based at least in part on the maximum event age and/or the maximum number of delivery attempts indicated. Accordingly, event data can be archived and available for accessing at a later time. In some examples, the buffer component206represents a buffer service that is associated with the pipe106based at least in part on input from the user116indicating to include this functionality for the pipe106. In other words, if a user provides an input indicating that a buffer be added to a pipe, then configuring the pipe106includes associating a pipe name of the pipe106with an indication to perform buffering, archiving, etc. such as in the event structure that defines the event which is further described in relation to the event structure component210and elsewhere. To configure a buffer for the pipe106, the buffer component206can generate a communication channel between the buffer and the first entity110(e.g., a first event service) for exchanging the event data. In such examples, the communication channel can represent a path for transmitting the event data between the first entity110and the buffer. For instance, the communication channel can enable storage of the event data in the buffer, and in some instances, enable the first entity110to receive the event data from the buffer for sending to the second entity112. However, the buffer component206can also generate an additional communication channel between the buffer and the second entity112(e.g., a second event service) for exchanging the event data. In this way, a first communication channel can direct the event data to the buffer from the first entity110and the second communication channel can direct the event data from the buffer to the second entity112. In various examples, the buffer component206can maintain a copy of the event data in the buffer for a time period after sending the event data from the buffer to the second entity112. For example, in addition to ensuring that event data is not lost due to not being able to send or receive the event data, the pipe106can be configured to implement a buffer or other database or storage device to store the event data after the second entity112receives the event data. 
Thus, the client device(s)104and/or the user116can evaluate the event data to improve how a service or application operates (e.g., fewer data errors, faster upload or download speeds, etc.) The batch component208can perform functionality to batch events for sending using the pipe106. For example, the batch component208can cause the first entity110to batch events based at least in part on the configuration data212including batch information such as a batch size (e.g., a number of events in a batch), a batch interval, an invocation payload size, or other batch characteristic. In some examples, the first entity110and/or the second entity112can batch events based at least in part on an event structure received from the event structure component210. The event structure can include information (e.g., batch rules) describing how and when a particular entity (source, destination) is to batch an event. Accordingly, the pipe106can perform batching for the event source and/or the event destination independently to account for differences in batch processing capabilities between different entities. In some examples, the batch component208can determine a batch size and/or a batch interval that maximizes throughput of events in the pipe106with a minimal number of transmission errors between the first entity110and the second entity112. The batch component208can, in some examples, implement a machine learned model to receive, as input, historical data describing a batch size, a batch interval, and transmission error data associated with the pipe. The machine learned model can determine an output indicating a batch size, a batch interval, or other batch rule for efficient processing of events by both the first entity110and the second entity112of the pipe106. Generally, the machine learned model can determine batching rules for a particular entity with consideration to differences in a batch size and/or a batch interval between the first entity110and/or the second entity112. In various examples, the batch component208can determine one or more batching rules for applying to events for transmitting using the pipe106. In some examples, the pipe106can expose a batch API to cause batching of events associated with the first entity110and/or the second entity112based at least in part on the one or more batch rules. For example, the batch API can cause the first entity110and/or the second entity112to batch in accordance with the one or more batch rules derived from the configuration data212and/or output data from the machine learned model. In some examples, the one or more batch rules comprise a first maximum payload of the first entity110, a second maximum payload of the second entity112, a first maximum batch size of the first entity110, or a second maximum batch size of the second entity112. In one nonlimiting example, exposing the batch API comprises the pipe106exposing a Fork-join algorithm to batch two or more events, though other batching techniques can be used. By way of example and not limitation, the pipe106can support batching the events with approximately 10,000 per API call or 2 MiB of data, though other sizes are also contemplated depending upon the capabilities of an entity to process a received batch. The batch component208can, in some examples, send a batch associated with a first batch size from the first entity110to the second entity112, and receive a runtime error (or other error) indicating that the second entity112is unable to receive the first batch associated with the first batch size. 
Based at least in part on the runtime error, the batch component208can determine a second batch size that is smaller than the first batch size and send a second batch from the first entity110to the second entity112based on the second batch size. For example, the batch component208can automatically bisect batches responsive to determining the runtime error, and send the bisected batches to the second entity112using the pipe106. In various examples, the batch component208can determine the second batch size based at least in part on a batch size associated with the second entity112and/or a batch size associated with a service performing a transformation to the event data. For instance, the second batch size can represent a minimum, an average, or a maximum batch size for a transformation service to transform events associated with the batch. In some examples, the batch component208can determine a response to the runtime error based at least in part on batch information associated with another entity or service (e.g., the second entity112or the transformation service can direct the response to the runtime error by the batch component208). In various examples, a user (e.g., the user116) may indicate for a pipe to be configured with batching by providing an input to a user interface associated with the client device(s)104. Thus, performing batching in association with the pipe106can be based at least in part on receiving the input from a control of the user interface to enable batching multiple events at the first entity110or the second entity112. In one or more examples, the batch component208can perform various batching techniques based on the input from the control of the user interface including but not limited to causing the pipe to expose the batch API to the multiple events. In some examples, batching implemented by the batch component208can include in-memory batching (e.g., writing the event data to one or more memories) or file-system batching (e.g., storing the event data in a file system). For instance, the batch component208can determine whether to perform in-memory batching, file-system batching, or some other batching technique, depending on the complexity, latency, or scaling associated with the events. In some examples, the pipe component122can determine a configuration value to indicate whether or not the pipe106includes batching functionality (or other functionality associated with another component). For example, input from the user116can determine whether or not a service or application associated with the client device(s)104batches events using the batch component208, and the configuration value can indicate to the batching component208that the pipe106is, or is not, configured to include batching. For example, a configuration value can be associated with each pipe to enable other components of the pipe component122to determine attributes of the pipe106. In some examples, the configuration value can be stored in the event structure output by the event structure component210. Responsive to determining that the configuration value indicates the pipe106includes batching, the batching component208can access or determine one or more rules for batching events and use the batch rule(s) in association with the batch API to perform batching. In some examples, the batch component208can determine a batch based at least in part on functionality to be performed on the batch by a downstream service. 
For instance, a first event service can represent a source of the multiple events and a second event service can represent a destination of the multiple events and a third event service can represent an intermediary service that provides functionality in association with the pipe106and before the destination receives the batch. For instance, the pipe component122can determine a third event service for receiving the batch based at least in part on another configuration value or another rule associated with another component (e.g., the filter component202, the data format component204, the buffer component206, and/or the event structure component210). For instance, the batch can be sent from the first event service to the third event service using the pipe, modified by the third service, and sent as a modified batch from the third event service to another intermediary service or to the second event service (e.g., the destination). In some examples, the batch is modified based on which components are implemented in association with the pipe106. Generally, the event structure component210represents functionality to define an event structure (also referred to herein as schema) for an event and using the event structure to generate, store, filter, batch, or modify events at various services or applications in the serverless computing environment114. For example, the event structure can define types of data to include as an event and/or computational resources, a filter, batching, and/or buffering to apply to or perform relative to the event. In some examples, the event structure can represent a data format that includes fields for storing values that represent characteristics of the event (e.g., an event name, a time, a version, an account, a source, a destination, metadata, a pipe name, a batch size, a batch interval, and/or a data format, just to name a few). The event structure can also identify how to manage events that differ from the event structure (e.g., forward to the buffer, modify a compute resource, and so on). In some examples, the pipe106can be configured to transmit events having a same event structure as the defined event structure and to refrain from transmitting events over the pipe106having a different event structure from the defined event structure. In various examples, the event structure component210can include or access a database, a container registry, a memory, or other storage device to store an event structure to define each event. Some events can include multiple versions as a user changes the desired structure of the event over time, and each of the multiple versions of the event structure can be saved in the database. By defining an event structure for an event as described herein, only relevant event data is exchanged between a source and a target of the pipe, and various components of the pipe component122can access the event structure from the database to identify a type of filtering, buffering, batching, and so on to perform for events that use the pipe106. In some examples, defining the event structure can include receiving input from a computing device associated with a developer of an application or service (e.g., the client device(s)104). For instance, the developer can interact with one or more controls of a user interface to indicate a preference for whether or not to archive, buffer, batch, or modify events associated with the first entity110and/or the second entity112of the pipe106. 
In some examples, the developer can indicate a particular data format or API to generate, transmit, or otherwise process an event. The event structure can include fields that store values to indicate the input, or preferences, provided by the developer prior to detecting one or more events. The event structure may also, or instead, indicate whether a version of the event structure is forward compatible with a future version or backward compatible with a previous version. In various examples, the event structure can represent a data file format (e.g., a JavaScript Object Notation (JSON) file, binary data, string data, etc.), a Java language, a data serialization language, a human-readable data serialization language, etc. As mentioned, the event structure can represent a schema defining characteristics of the event and can identify a schema name, a schema format, schema compatibility (e.g., control how schemas can or cannot evolve over time by designating a compatibility field as: none, disabled, backward, backward all, forward, forward all, full, full all), a status, a create time, an update time, etc. In some examples, the event structure can identify a schema version such as a version number, a schema format, a version create time, and so on. In some examples, the schema, or event structure, can define metadata to include in the event such as a security key, information about the event (e.g., a product, a price, a customer, a customer address, and so on). In some examples, the event structure can indicate a validation configuration comprising information to validate event data associated with an event. For example, the event structure component210can determine that the event data matches the event structure and output a first indication to send the event data to the event destination or determine that the event data does not match the event structure and output a second indication to refrain from sending the event data to the event destination. Thus, the validation configuration can be used to determine whether to transmit the event data or refrain from transmitting the event data using the pipe. The event structure can, for example, indicate authentication information and/or security information to validate, verify, or authenticate an event source as a source of an event and the event destination as a destination of the event. For example, an API can verify user credentials, use a key or other authentication technique to validate an identity of the event source and the event destination. The authentication information and/or security information can represent a field in the event structure that identifies a security protocol, a security key, or other details for performing authorization and security (e.g., establishing and maintaining secure transmissions with authorized users and services). In some examples, the event structure component210can update the event structure based at least in part on receiving an input to modify at least one of the fields of the event structure. For example, the client device(s)104can send a request to update how an event is defined based on input from a user via the local agent120. As a result, the event structure component210can generate a new association between the pipe and the updated event structure in the container registry (e.g., map the pipe to the updated event structure). In this way, an event received after the update can transmit in the pipe based at least in part on the new association. 
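Merely for illustrative purposes, the versioning and validation behavior described above might be sketched in Python as follows; the registry layout, field names, and the send/refrain indications are assumptions made for explanation and are not a required implementation.

# Illustrative sketch (assumed structure): a registry that keeps versioned
# event structures per pipe, plus a validation step that indicates whether
# event data should be sent or withheld.
registry = {}  # pipe name -> list of event structure versions

def register_structure(pipe_name, structure):
    """Store a new version of the event structure and associate it with the pipe."""
    versions = registry.setdefault(pipe_name, [])
    structure = {**structure, "version": len(versions) + 1}
    versions.append(structure)
    return structure

def validate(pipe_name, event):
    """Return 'send' when the event matches the latest structure, else 'refrain'."""
    structure = registry[pipe_name][-1]
    ok = all(field in event for field in structure["required_fields"])
    return "send" if ok else "refrain"

register_structure("orders-pipe", {"required_fields": ["detail-type", "order_id"]})
register_structure("orders-pipe", {"required_fields": ["detail-type", "order_id", "account"]})

print(validate("orders-pipe", {"detail-type": "order.shipped", "order_id": "1", "account": "42"}))  # send
print(validate("orders-pipe", {"detail-type": "order.shipped"}))                                    # refrain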
Different event structures can be associated with different pipes to transmit events. For example, upon detecting an occurrence of an event at a first entity, a first event structure can be used to send first event data to a second entity using a first pipe and a second event structure can be used to send second event data to a third entity using a second pipe. The first event data and/or the second event data can represent different modifications (e.g., filtering, buffering, batching, etc.) to the event based at least in part on the respective event structure. In various examples, sending the first event data from the first entity to the second entity using the first pipe and sending the second event data from the first entity to the third entity using the second pipe is performed substantially in parallel. As discussed, the event structures (or schemas) are associated with a file format and can therefore be stored in a database or other storage device with the file format. For instance, the file format of the event structure can comprise JavaScript Object Notation (JSON), Protocol Buffers, a Java language, a data serialization language, a human-readable data serialization language, or the like. In some examples, the event structure component210can store conversion data usable to convert between different data formats (e.g., between Protocol Buffers and JSON). The conversion data can be used by the event structure component210to identify and store event structures in different data formats which are then made available to connect more services having different native or default data formats. Thus, different data formats can be determined dynamically based at least in part on the conversion data (e.g., as events are sent using the pipe). In some examples, the event structure can include a field to represent a response to determining that the event data and the event structure have a different number of fields or a different order of field types. For example, the pipe component122can determine that an event has an additional or missing field from a number of fields in the event structure and/or determine that the fields match but an order of the fields in the event is different from an order of the fields in the event structure (e.g., compare the respective data to determine a field that does not match the default or expected event structure). Based on determining that the event data and the event structure have a different number and/or order of fields, the event structure component210can cause the appropriate response to further process or not process the event. For example, the response can indicate to send the event to a buffer (e.g., the first-in first-out queue), archive the event, etc. Thus, the event structure component210can compare fields or other data associated with an event to expected fields or data associated with an event structure and further process the event using a predetermined response rather than failing to transmit the event as in examples that do not implement the event structure component210. In various examples, functionality associated with the filter component202, the data format component204, the buffer component206, the batch component208, and/or the event structure component210can be implemented in an order that efficiently utilizes computational resources (e.g., processing resources, memory resource, and the like) across the components being implemented. For example, events can be buffered, batched, filtered, transformed, archived, etc. 
in a particular order to minimize an impact on the available computational resources. In one non-limiting example, the buffer component206can buffer events which can be available to batch using the batch component208. In another example, the batch component208can batch events prior to the events being transformed using a compute service or the data format component204. The pipe component122, the filter component202, the data format component204, the buffer component206, the batch component208, and/or the event structure component210can be associated with a resource name or other identifier that identifies the respective component. The event structure can include resource name information for components, services, applications, databases, etc. for a variety of reasons including to enable routing of an event to an intermediary service (if included for the event) and/or the destination. In some examples, a validation configuration can access the event structure (schema) usable to validate an event using a resource name assigned to the schema. In another example, the buffer can be associated with a resource name that can be included as part of the pipe characteristics so that events can be sent to the buffer. In some examples, the event structure can include fields that store values to indicate a resource name, an API, or other characteristics associated with components or services of the pipe. FIG.3illustrates a diagram300of an example pipe between a first entity and a second entity in an example service provider network. For example, the pipe106ofFIG.1can represent a pipe control plane302and a pipe data plane304usable for transmitting event data associated with an event. Generally, the pipe control plane302operates to authorize the client device(s) (e.g., determines a customer account, validates a user, validates security credentials, etc.) and establish and manage the pipe data plane304which transmits the event data using the pipe106. As shown inFIG.3, the client device(s)104can send an API306to a pipe API gateway308of the serverless computing environment114which forwards the API306to the pipe control plane API310. In various examples, the API306can request establishing the pipe data plane304. The pipe API gateway308can be configured to receive APIs that invoke functionality to configure the pipe data plane304of the pipe106. The pipe control plane302can include the pipe control plane API to process the API306and a database312for storing information about event structures, pipes, and services (e.g., an association between an event structure and a pipe, characteristics of the pipe, characteristics of the services). For example, the database312can store event structure information defining characteristics of the event as well as an association between the event structure and a particular pipe. In some examples, information from the database312can be sent to the pipe data plane304based on, for example, the first entity110sending an event314to the pipe data plane304causing the pipe data plane304to send a request for the event structure information. More generally,FIG.3depicts a dashed line between the pipe control plane302and the pipe data plane304to represent one or more data transmissions. Based on receiving the event314, the pipe data plane304can implement a pipe data plane API316to process an API associated with the event314. For example, the pipe data plane API316can authorize the API associated with the event314, and initiate processing of the event314. 
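The division of work between the pipe control plane and the pipe data plane described above can be sketched as follows, under the simplifying assumption that the control plane stores pipe configurations (including the association between a pipe and its event structure) in an in-memory dictionary standing in for the database312. The class and method names below are hypothetical.

# Hedged sketch of the control plane / data plane split; names are illustrative.
from typing import Any, Dict

class PipeControlPlane:
    """Authorizes configuration requests and stores pipe metadata."""
    def __init__(self) -> None:
        self._db: Dict[str, Dict[str, Any]] = {}   # stand-in for the database

    def create_pipe(self, pipe_name: str, source: str, target: str,
                    event_structure: Dict[str, Any]) -> None:
        # Store the association between the pipe, its endpoints, and its event structure.
        self._db[pipe_name] = {"source": source, "target": target,
                               "event_structure": event_structure}

    def describe_pipe(self, pipe_name: str) -> Dict[str, Any]:
        # The data plane requests this information when an event arrives.
        return self._db[pipe_name]

class PipeDataPlane:
    """Receives events, looks up the pipe configuration, and dispatches them."""
    def __init__(self, control_plane: PipeControlPlane) -> None:
        self._control_plane = control_plane

    def handle_event(self, pipe_name: str, event: Dict[str, Any]) -> str:
        config = self._control_plane.describe_pipe(pipe_name)
        # A fuller data plane would filter, buffer, batch, or transform the
        # event here (the event modifier) before handing it to a dispatcher.
        return f"dispatched {event.get('id', 'unknown')} to {config['target']}"

control = PipeControlPlane()
control.create_pipe("orders-pipe", source="order-service",
                    target="billing-service", event_structure={})
data_plane = PipeDataPlane(control)
print(data_plane.handle_event("orders-pipe", {"id": "evt-1"}))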
In some examples, the pipe data plane304can request event structure information (or other information such as data format information) from the database312, and based at least in part on the request, receive information indicating whether to filter, buffer, batch, archive, or transform a data format of the event314. In various examples, an event modifier component318can perform functionality associated with one or more of: the pipe component122or one or more of the filter component202, the data format component204, the buffer component206, the batch component208, and the event structure component210(e.g., depending upon information associated with the event). The pipe data plane304can include a pipe dispatcher320that is configured to identify a destination for the event314and transmit a modified event322to the second entity112. The modified event322can represent the event314after filtering, buffering, batching, archiving, and/or data format transforming. The event modifier component318can determine the modified event322based at least in part on the fields in the event structure indicating whether to perform one or more of: filtering, buffering, batching, archiving, and/or data format transforming. In a non-limiting example, the event modifier component318can generate the modified event322to include values for the fields in the event structure. For instance, the event modifier component318can transform the event314from a first data format to a second data format based on a field in the event structure having a value indicating to transform the event314. The pipe dispatcher320can also, or instead, be configured to determine an order for implementing the filter component202, the data format component204, the buffer component206, the batch component208, and/or the event structure component210to process the event314. In various examples, the pipe dispatcher320can batch the event314with other events and transform a data format of a portion of the event314after batching to make efficient use of computational resources available in the serverless computing environment114. As mentioned, the event structure associated with the event structure component210can be stored in the database312for sending; however, in another example the event structure can be received and/or stored at the first entity110, the second entity112, the pipe component, or a component thereof. In some examples, the event structure can be stored as mentioned prior to receiving the event314so that a respective component can process the event based at least in part on information in the event structure. In various examples, the database312can include characteristics of the event314such as one or more of: a first batch size of the event source, a first batch interval of the event source, a second batch size of the event destination, a second batch interval of the event destination, a queue to queue the event data, a name, a time, a version, an account, or metadata, just to name a few. The characteristics of the event can be based at least in part on input from a user (e.g., the user116) at a time prior to an occurrence of the event314(e.g., receiving the configuration data212ofFIG.2). In some examples, the pipe dispatcher320can represent a microkernel architecture configured to integrate upstream and downstream services. 
For example, the pipe dispatcher320can represent an SPI (Service Provider Interface), or other interface, to route events to an intermediary service, application, and/or database between a source of an event and a destination of the event of a pipe. In other words, the pipe dispatcher320can manage sending and receiving data between the components of the pipe component122, a third-party service, and/or a computing device including services invoked after the event is detected and before the event (or modified event) is received at the destination. FIG.4illustrates a diagram400of an example pipe for implementing the techniques described herein. For example, the pipe data plane304ofFIG.3can exchange event data between an event source402and an event destination404. As shown inFIG.3, the pipe component122ofFIG.1can provide functionality to filter, format, buffer, batch, and/or archive event(s) at the event source402, the pipe data plane304, and/or the event destination404. The event source402can represent the first entity110, another pipe, or the client device(s)104, and the event destination404can represent the second entity112or another pipe. The event source402can include one or more of a first service412A, a second service412B, up to an Nth service412N, and the event destination404can include one or more of a first service414A, a second service414B, up to an Nth service414N, where N can be any integer greater than 1. Each service may be associated with a respective pipe data plane, and each pipe data plane is capable of performing synchronous and/or asynchronous invocations. The pipe data plane304can receive a synchronous event406, a batch408, or an asynchronous event410from the event source402and use the event modifier component318to modify the synchronous event406, the batch408, or the asynchronous event410based at least in part on which functionality the event invokes (e.g., filtering, buffering, batching, etc.) and output a modified event416. In examples in which the event source402includes a single service (e.g., the first service412A), each of the synchronous event406, the batch408, or the asynchronous event410can be processed by the pipe component122and sent to the event destination404as respective modified events416. Thus, the pipe data plane304integrates services of the event source402and services of the event destination404. FIG.5illustrates a diagram of an example service provider network implementing an example pipe for filtering data between an event source and an event destination. For example, the service provider network102can include the pipe106and the filter component202to apply filtering techniques for an event at the event source402. In some examples, an event502is sent from the event source402to a connection service504using the pipe106for filtering by the filter component202or a filter service of the service provider network102. However, the filter component202can, in some examples, cause the event to be filtered at the event source402and a filtered event506is sent to the connection service504using the pipe106. Thus, filtering can occur at different portions of the pipe106including at the event source402and/or an intermediary service(s). FIG.5depicts the service provider network102comprising the connection service504to connect the event source402to the pipe106, a delivery service508to route event data using the pipe106, and other service(s)510to perform other functionality associated with the pipe106. 
The connection service504can represent an API gateway (e.g., the pipe API gateway308, etc.) usable to invoke the pipe control plane302and/or the pipe data plane304, for example. The delivery service508can be configured to send a processed event512to the event destination404and/or to route the event502and/or the filtered event506to the other service(s)510. In some examples, the other service(s)510can represent a filter service (e.g., when the event502is sent from the event source402) or other services such as a service to buffer, batch, transform, archive, etc. the filtered event506. Thus, the pipe106of the service provider network102can be implemented to refrain from sending, remove, or rearrange event data associated with the event502for sending to the event destination404. FIG.6illustrates a diagram of an example service provider network implementing an example pipe for determining a data format for transmitting an event between an event source and an event destination. For example, the service provider network102can include the pipe106and the data format component204to apply data formatting techniques for an event at the event source402. In some examples, an event602is sent from the event source402to the connection service504using the pipe106for formatting by the data format component204or a compute service of the service provider network102. However, the data format component204can, in some examples, cause the event602to be formatted at the event source402and a formatted event604is sent to the connection service504using the pipe106. Thus, formatting can occur at different portions of the pipe106including at the event source402and/or an intermediary service(s) (e.g., the other service(s)510). FIG.6depicts the service provider network102comprising the connection service504to connect the event source402to the pipe106and the delivery service508to route event data using the pipe106. The delivery service can also or instead route event data to one or more of the other service(s)510to perform other functionality associated with the pipe106or event. In various examples, the delivery service508can be configured to send a processed event606to the event destination404and/or to route the event602and/or the formatted event604to the other service(s)510. In some examples, the other service(s)510can represent a service to determine a data format (e.g., when the event602is sent from the event source402) or other services such as a service to buffer, batch, filter, archive, etc. the formatted event604. The processed event606can therefore represent an event having a modified or transformed data format and/or an event that is transformed as well as batched, filtered, buffered, etc. depending upon the functionality associated with the pipe. Thus, the pipe106of the service provider network102can be implemented to change a compute service that processes the event, transform an event to a new data format, and/or identify a service for formatting the event for sending to the event destination404. FIG.7illustrates a diagram of an example service provider network implementing an example pipe for buffering data between an event source and an event destination. For example, the service provider network102can include the pipe106and the buffer component206to apply buffering techniques for an event at the event source402. 
In some examples, an event702is sent from the event source402to the connection service504using the pipe106for buffering by the buffer component206or a buffer service of the service provider network102. However, the buffer component206can, in some examples, cause the event702to be buffered at the event source402and a buffered event704is sent to the connection service504using the pipe106. Thus, buffering can occur at different portions of the pipe106including at the event source402and/or an intermediary service(s) (e.g., the other service(s)510). In various examples, the delivery service508can be configured to send a processed event706to the event destination404and/or to route the event702and/or the buffered event704to the other service(s)510. In some examples, the other service(s)510can represent a buffer service to buffer the event702and/or other services such as a service to filter, batch, transform, archive, etc. the buffered event (e.g., buffered at the event source402as the buffered event704or buffered at a buffer service). The processed event706can therefore represent a buffered event and/or an event that is buffered as well as batched, filtered, transformed, etc. depending upon the functionality associated with the pipe106. Thus, the pipe106of the service provider network102can be implemented to buffer event data at an event source, buffer event data at a service, buffer an output of a component (e.g., a component of the pipe component122), or identify a service for buffering events prior to being sent to the event destination404using the pipe106. FIG.8illustrates a diagram of an example service provider network implementing an example pipe for batching data between an event source and an event destination. For example, the service provider network102can include the pipe106and the batch component208to apply batching techniques for events at the event source402or other portion of the pipe106. In some examples, an event802is sent from the event source402to the connection service504using the pipe106for batching by the batch component208or a batch service of the service provider network102. However, the batch component208can, in some examples, cause the event802to be batched at the event source402and batched events804are sent to the connection service504using the pipe106. Thus, batching can occur at different portions of the pipe106including at the event source402and/or an intermediary service(s) (e.g., the other service(s)510). In various examples, the delivery service508can be configured to send a processed event806to the event destination404and/or to route the event802and/or the batched events804to the other service(s)510. In some examples, the other service(s)510can represent a batch service to batch the event802and/or other services such as a service to filter, buffer, transform, archive, etc. the batched events804. The processed event806can therefore represent a batched event and/or an event that is batched as well as buffered, filtered, transformed, etc. depending upon the functionality associated with the pipe106. Thus, the pipe106of the service provider network102can be implemented to batch event data at an event source, batch event data at a service, batch an output of a component (e.g., a component of the pipe component122), or identify a service for batching events prior to being sent to the event destination404using the pipe106. 
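One way the buffering and batching behavior described above forFIGS.7and8might fit together is sketched below. The rule of taking the smaller of the source and destination batch sizes, the interval-based flush, and all names are illustrative assumptions rather than a prescribed implementation.

# Sketch of buffering events in a first-in first-out queue and releasing them in batches.
import time
from collections import deque
from typing import Any, Deque, Dict, List

class BufferedBatcher:
    def __init__(self, source_max_batch: int, destination_max_batch: int,
                 batch_interval_s: float) -> None:
        # Use the smaller of the two limits so neither end is overrun.
        self.batch_size = min(source_max_batch, destination_max_batch)
        self.batch_interval_s = batch_interval_s
        self._queue: Deque[Dict[str, Any]] = deque()   # FIFO buffer
        self._last_flush = time.monotonic()

    def enqueue(self, event: Dict[str, Any]) -> None:
        # Buffer an event, for example while the destination is unavailable.
        self._queue.append(event)

    def maybe_flush(self) -> List[Dict[str, Any]]:
        # Return a batch when the size or interval threshold is reached.
        interval_due = (time.monotonic() - self._last_flush) >= self.batch_interval_s
        if len(self._queue) >= self.batch_size or (interval_due and self._queue):
            count = min(self.batch_size, len(self._queue))
            batch = [self._queue.popleft() for _ in range(count)]
            self._last_flush = time.monotonic()
            return batch
        return []

batcher = BufferedBatcher(source_max_batch=10, destination_max_batch=3,
                          batch_interval_s=5.0)
for i in range(4):
    batcher.enqueue({"id": f"evt-{i}"})
print(batcher.maybe_flush())   # releases three events (the destination limit)
print(batcher.maybe_flush())   # the remaining event waits for the size or interval threshold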
FIG.9illustrates a diagram of an example service provider network implementing an example pipe for transmitting an event based at least in part on an event structure that defines the event. For example, the service provider network102can include the pipe106and the event structure component210to define an event902for sending from the event source402to the event destination404. The event structure can represent a schema that identifies different event characteristics for consideration when generating, sending, or receiving an event using a particular pipe. The event structure may, in various examples, have portions or fields for storing values that represent characteristics of the event (e.g., an event name, a time, a version, an account, a source, a destination, metadata, a pipe name, a batch size, a batch interval, and/or a data format, just to name a few). Other components of the service provider network102including components of the pipe component122can access or receive the event structure from a database or a registry associated with the event structure component210. In some examples, the event902can be generated at the event source402based at least in part on the event structure. For instance, the event structure can define which changes in a dataset or a state trigger an event to cause the event source402to detect an occurrence of a future event that matches the event structure. The event structure can also store characteristics associated with one or more of: a source of the pipe, a destination of the pipe, or a service or application accessible by the pipe that enables the pipe to transmit the event. The characteristics can include, for example, an identifier or resource name, an API, a data format, a batch size, a batch interval, and so on to support transmitting the events to the destination along with any optional services. The event structure component210can, for example, compare a structure of the event902to the event structure and, based on the comparing, transmit event data when the structure of the event902is the same as the event structure or refrain from transmitting the event902when the structure of the event902is not the same as the event structure. The event structure can, in some examples, identify how to manage events that differ from the event structure by enabling a user to indicate in a user interface an action to take in response to being unable to send an event (e.g., forward to the buffer, modify a compute resource, and so on). In some examples, the event902can be modified after being generated at the event source402and before being sent to the event destination404. For example, the event902can be filtered, transformed, batched, buffered, and/or archived, just to name a few. In such examples, modified event data can be sent to the event destination404instead of the event902as shown inFIG.9. In some examples, the other service(s)510can represent a batch service to batch the event902and/or other services such as a service to filter, buffer, transform, archive, etc. the event902. FIGS.10-14illustrate flow diagrams of example methods1000,1100,1200,1300, and1400that illustrate aspects of the functions performed at least partly by the service provider network102as described in relation toFIG.1and elsewhere. The logical operations described herein with respect toFIGS.10-14may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. 
The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special purpose digital logic, or any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown inFIGS.10-14and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. Although the techniques are described in this disclosure with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components. In some examples, the techniques of methods1000,1100,1200,1300, and1400may be performed by a system comprising one or more processors and one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the operations of the methods. FIG.10illustrates a flow diagram of an example method performed by a service provider network for filtering data associated with an example pipe. For example, the service provider network102can implement the pipe106to filter events transmitted between a source (e.g., a service, an application, another pipe) and a destination (also referred to herein as a target). At1002, a service provider network may receive first data indicating a first event service as a source for sending event data associated with an event, the event representing a change in a resource state, second data indicating a second event service as a target for receiving the event data associated with the event, and third data indicating a filter to apply to the event data. For instance, the pipe component122can receive configuration data212from the client device(s)104and/or an event structure indicating a source (e.g., the first entity110) of an event for sending event data associated with an event. In some examples, the first data can be received from a user of the client device(s)104at a previous time. The first data may also or instead be associated with an event structure and the first data can be received from the event structure component210, for example. The service provider network may receive second data indicating a second event service as a target for receiving the event data associated with the event. For instance, the pipe component122can receive configuration data212from the client device(s)104and/or the event structure indicating a target or destination (e.g., the second entity112) to receive the event data associated with the event. The second data can represent a previous input from a user, such as a developer of an application or service associated with the client device(s)104, such as when the user initiates setting up a pipe or modifying a pipe. The second data may also, or instead, be identified or received in association with an event structure. The service provider network may receive third data indicating a filter to apply to the event data. 
For instance, the client device(s)104can send configuration data212to the pipe component122and/or to the filter component202indicating a filter to apply to an upcoming event to be transmitted using the pipe. For instance, the configuration data212can include one or more filters to apply to the event such as a payload filter that causes the event data to adhere to a specific payload size for transferring using the pipe. In some examples, the filter component202can apply a filter to trim or otherwise select portions of the event data to include or exclude for transmitting as filtered event data to the target. In other examples, the third data can be received from the event structure and filter information can be stored as one or more fields of the event structure as described herein. At1004, a pipe component associated with the service provider network may configure a pipe between the first event service and the second event service based on the first data, the second data, and the third data. The pipe can represent, for example, a virtual communication channel for sending the event data associated with the event, the pipe configured between the first event service and the second event service independent of (free of, without requiring, etc.) a developer providing computer-readable instructions to the first event service (e.g., to configure the pipe). For instance, the pipe component122can configure the pipe to include filtering techniques based at least in part on receiving source, target, and filter information from previous input from the user conveying preferences for how to configure the pipe. Generally, the pipe can represent a data plane for exchanging event data associated with the event with various services. In some examples, the filtering techniques can be applied to the pipe based at least in part on identifying the source, the target, and the filter information from the event structure that defines the event. In various examples, the user, or developer, need not provide computer-readable instructions (e.g., integration code to connect the first event service with the target) and instead the filter component202can identify and apply the filter(s). At1006, the service provider network may receive an indication of the event at the first event service. For instance, the service provider network102can receive an API from the first event service at an API gateway and forward the event to the pipe component122for processing. At1008, the service provider network may determine, by the filter and as filtered event data, a modification to the event data based on the indication of the event. For instance, the filter component202may apply one or more filters to the event based at least in part on one or more filtering rules identified in the configuration data212and/or the event structure associated with the event. In some examples, the event can represent the event314and the filter component202(or functionality thereof) can be included in the event modifier component318for applying the appropriate filters. At1010, the service provider network may send, by the first event service and via the pipe, the filtered event data to the second event service based on the indication of the event. For instance, the pipe component122can cause the filtered event data to transmit from the first entity110to the second entity112using the pipe106. 
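A minimal Python sketch of the filtering just described is shown below; the event pattern, the excluded field, and the payload limit are hypothetical examples of filtering rules that might appear in the configuration data or the event structure.

# Sketch of filtering event data before it is sent over the pipe.
import json
from typing import Any, Dict, Optional

FILTER_CONFIG = {
    "match": {"detail_type": "order-created"},   # only forward matching events
    "exclude_fields": ["internal_notes"],        # trim fields before sending
    "max_payload_bytes": 1024,                   # payload-size rule
}

def apply_filter(event: Dict[str, Any], config: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Return filtered event data, or None to refrain from sending the event."""
    # Drop events that do not match the configured pattern.
    for key, expected in config["match"].items():
        if event.get(key) != expected:
            return None
    # Trim excluded fields so only the selected portions are transmitted.
    filtered = {k: v for k, v in event.items() if k not in config["exclude_fields"]}
    # Enforce the payload-size rule for transfer using the pipe.
    if len(json.dumps(filtered).encode("utf-8")) > config["max_payload_bytes"]:
        return None
    return filtered

event = {"detail_type": "order-created", "order_id": "o-7",
         "internal_notes": "hold until Friday"}
print(apply_filter(event, FILTER_CONFIG))
# prints {'detail_type': 'order-created', 'order_id': 'o-7'}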
In some examples, the pipe dispatcher320can forward the filtered event data from the filter component202to the second entity112, though in other examples the filtered event data can be transmitted directly from the filter component202. FIG.11illustrates a flow diagram of an example method performed by a service provider network for determining a data format for transmitting an event using an example pipe. For example, the service provider network102can implement the pipe106to determine a data format usable for an event destination to process an event sent from an event source. At1102, a service provider network may detect an occurrence of an event in a serverless computing environment, the event representing a change in a resource state. For instance, the pipe component122can receive an API indicating an event at a source (e.g., the first entity110). In various examples, the event may be generated at the first entity110based at least in part on determining a change in a resource state related to a field of an event structure that defines the event. At1104, the service provider network may determine, based on detecting the occurrence of the event, a first event service representing a first end of a pipe. For instance, the pipe component122can determine a source of an event (e.g., the first entity110) based at least in part on configuration data212from the client device(s)104and/or an event structure indicating a source of an event. At1106, the service provider network may determine a second event service representing a second end of the pipe, the pipe to send event data associated with the event to the second event service representing the second end of the pipe. For instance, the pipe component122can determine a destination of the event (e.g., the second entity112) based at least in part on configuration data212from the client device(s)104and/or an event structure indicating the destination of the event. At1108, a pipe component associated with the service provider network may determine an indication to transform the event data from a first data format to a second data format different from the first data format. For instance, the data format component204can determine that a current data format associated with the event is unable to be processed by the destination, and output an indication to transform the event data to another data format usable by the destination. In some examples, the indication to transform the event data can be based on an input received at a previous time such as input indicating a compute service or program interface to process the event data by the second event service (associated with a previous input from a user, e.g., the user116). In some examples, the data format techniques can be applied to the pipe106(or events associated therewith) based at least in part on identifying the data format (e.g., a programming language, an API, a kernel, or the like) from the event structure that defines the event. In various examples, the user, or developer, need not provide computer-readable instructions, such as an API, to transform a data format responsive to detecting the event. At1110, a data format component of the service provider network may determine the second data format for the second event service to process the event data based on the indication. For instance, the data format component204can determine the data format that the second entity112can process responsive to receiving the event (e.g., to send acknowledgement of receiving the event, for example). 
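The data-format determination and transformation of this example method can be sketched as follows; the mapping table and the two conversion functions are illustrative assumptions (a simple key=value string stands in for any second data format).

# Sketch of mapping destinations to data formats and transforming event data.
import json
from typing import Any, Callable, Dict

# Mapping data: which format each destination service natively processes.
FORMAT_MAP: Dict[str, str] = {
    "billing-service": "json",
    "reporting-service": "keyvalue",
}

# Conversion functions keyed by target format.
CONVERTERS: Dict[str, Callable[[Dict[str, Any]], str]] = {
    "json": lambda event: json.dumps(event),
    "keyvalue": lambda event: ",".join(f"{k}={v}" for k, v in event.items()),
}

def transform_for_destination(event: Dict[str, Any], destination: str) -> str:
    # Select the data format native to the target and convert the event data.
    target_format = FORMAT_MAP.get(destination, "json")   # default assumption
    return CONVERTERS[target_format](event)

event = {"id": "evt-9", "amount": 12.5}
print(transform_for_destination(event, "reporting-service"))   # prints id=evt-9,amount=12.5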
In some examples, the data format component204can access mapping data from a database (e.g., the database312) to map data formats to different services, applications, etc. and use the mapped data to select the data format native to the target or destination. In some examples, the data format component204may query the database312to identify a rule associated with, or mapped to, an API call. At1112, the data format component of the service provider network may transform, as transformed event data, at least a portion of the event data for processing in accordance with the second data format. For instance, the data format component204may convert portions of the event (e.g., a subset of data that requires transformation) from the first data format to the second data format. At1114, the service provider network may send the transformed event data from the first event service to the second event service using the pipe and independent of a server between the first end of the pipe and the second end of the pipe. For instance, the pipe component122can cause the transformed event data to transmit from the first entity110to the second entity112using the pipe106. In some examples, the pipe dispatcher320can forward the transformed event data from the data format component204to the second entity112. FIG.12illustrates a flow diagram of an example method performed by a service provider network for implementing a buffer to capture events associated with an example pipe. For example, the service provider network102can implement the pipe106to buffer an event generated at an event source. At1202, a service provider network may receive first data indicating a first event service as a source for sending event data associated with an event, the event representing a change in a resource state. For instance, the pipe component122can receive configuration data212from the client device(s)104and/or an event structure indicating a source (e.g., the first entity110) of an event for sending event data associated with an event. In some examples, the first data can be received from a user of the client device(s)104at a previous time. The first data may also, or instead, be associated with an event structure and the first data can be received from the event structure component210, for example. At1204, the service provider network may receive second data indicating a second event service as a target for receiving the event data associated with the event. For instance, the pipe component122can receive configuration data212from the client device(s)104and/or an event structure indicating a target or destination (e.g., the second entity112) to receive the event data associated with an event. The second data can represent a previous input from a user, such as a developer of an application or service associated with the client device(s)104, such as when the user initiates setting up a pipe or modifying a pipe. The second data may also, or instead, be identified or received from an event structure. At1206, the service provider network may receive third data indicating a queue for storing the event data. For instance, the client device(s)104can send configuration data212to the pipe component122and/or to the buffer component206indicating a queue to apply to an upcoming event to be transmitted using the pipe. For instance, the configuration data212can include one or more buffering rules to apply to the event such as a rule to determine when to initiate buffering, how long to store data in the buffer, a type of buffer to use, and the like. 
In other examples, the third data can be received from the event structure and buffer information can be stored as one or more fields of the event structure. At1208, a pipe component associated with the service provider network may configure a pipe between the first event service and the second event service based on the first data, the second data, and the third data, the pipe disposed between the first event service and the second event service independent of receiving computer-readable instructions from a developer implementing the first event service. For instance, the pipe component122can configure the pipe to include buffering techniques based at least in part on receiving source, target, and buffer information from previous input from the user conveying preferences for how to configure the pipe for buffering. Generally, the pipe can represent a virtual communication channel for exchanging event data associated with the event between a source and a target. The pipe may also enable communications with one or more services that may support functionality implemented by the pipe component122including the buffer component206. In some examples, the buffering techniques can be applied to the pipe106based at least in part on identifying the source, the target, and the buffer information from the event structure that defines the event. At1210, the service provider network may determine an occurrence of the event at the first event service. For instance, the service provider network102can receive an API from the first event service at an API gateway and forward the event to the pipe component122for processing. In various examples, the event occurs after configuring the pipe. At1212, a buffer component of the service provider network may determine, based on determining the occurrence of the event, that the first event service is unable to send the event data to the second event service. For instance, the buffer component206may capture events in the queue based at least in part on one or more buffering rules identified in the event structure or otherwise associated with the pipe106. In some examples, the event can represent the event314and the buffer component206(or functionality thereof) can be included in the event modifier component318which can buffer the events. At1214, the service provider network may store, based on determining that the first event service is unable to send the event data, the event data in the queue for sending to the second event service at a later time. For instance, the buffer component206can send the buffered event data from the first entity110to the second entity112using the pipe106. In some examples, the pipe dispatcher320can forward the buffered event data to the second entity112, though in other examples the event data can be transmitted directly from the buffer component206. FIG.13illustrates a flow diagram of an example method performed by a service provider network for batching events for transmission using an example pipe. For example, the service provider network102can implement the pipe106to batch events for transmission using the pipe106. At1302, a service provider network may identify multiple events at a first event service for sending to a second event service in a serverless computing environment, an event of the multiple events representing a change in a resource state. 
For instance, the pipe component122can determine that at least some of the multiple events are configured to be included in a batch based at least in part on configuration data212from the client device(s)104and/or an event structure indicating whether or not to batch an event associated with a particular pipe. In some examples, the first data can be received from a user of the client device(s)104at a previous time. The first data may also, or instead, be associated with an event structure and the first data can be received from the event structure component210, for example. At1304, the service provider network may determine, based on identifying the multiple events, a configuration value of a pipe representing a virtual communication channel for transferring event data associated with the multiple events between the first event service and the second event service. For example, the configuration value can identify that the pipe106is configured to provide batching functionality at the first event service (e.g., the first entity110) and/or the second event service (e.g., the second entity112). In various examples, the pipe can be disposed between the first event service and the second event service based at least in part on receiving input data at a previous time indicating a) the first event service as a source of the multiple events, b) the second event service as a target of the multiple events, and c) to perform batching at one or more of: the first event service or the second event service. At1306, a batching component of the service provider network may determine, based on the configuration value of the pipe, one or more batching rules for batching the multiple events. For instance, the client device(s)104can send configuration data212to the pipe component122and/or to the batch component208indicating how to batch upcoming events for transmission using the pipe. For instance, the configuration data212can include one or more batch rules to apply to the event such as a first maximum payload of the first event service, a second maximum payload of the second event service, a first maximum batch size of the first event service, or a second maximum batch size of the second event service. In some examples, the pipe component122can receive configuration data212from the client device(s)104and/or an event structure indicating one or more batch rules associated with the pipe106. The one or more batch rules can be based at least in part on a previous input from a user, such as a developer of an application or service associated with the client device(s)104(e.g., when the user initiates setting up a pipe or modifying a pipe). At1308, a pipe component associated with the service provider network may expose, by the pipe, a batch Application Program Interface (API). For instance, the pipe component122can expose a batch API to invoke batching functionality associated with the first entity110and/or the second entity112. At1310, the service provider network may determine a batch size for a portion of the multiple events based on the one or more batching rules and the batch API. For instance, the batch component208can determine a batch size, a batch interval, or other batch characteristics based on the one or more batch rules. 
In some examples, the batch size can be the smaller of a batch size associated with the first entity110or a batch size associated with the second entity112(as determined by the event structure, for example). At1312, the service provider network may send, from the first event service to the second event service, the portion of the multiple events over the pipe in a batch based on the batch size. For instance, the pipe component122can cause batched event data to transmit from the first entity110to the second entity112using the pipe106. FIG.14illustrates a flow diagram of an example method performed by a service provider network for defining events for transmission using an example pipe. For example, the service provider network102can implement the pipe106to define an event structure for events transmitted using the pipe106. At1402, a service provider network may determine a pipe between an event source and an event destination of a serverless computing environment, the pipe representing a virtual communication channel for transmitting event data associated with an event. The event can represent a change in a resource state. In some examples, the pipe component122can receive configuration data212from the client device(s)104indicating a source (e.g., the first entity110) of an event and a destination of the event (e.g., the second entity112). In some examples, the pipe106can be determined based at least in part on input from a user of the client device(s)104at a previous time. At1404, an event structure component of the service provider network may determine an event structure for the event in the serverless computing environment. For instance, the event structure component210can determine the event structure to include fields for storing values that represent characteristics of the event comprising: a source, a destination, a pipe name, and a data format (e.g., a programming language, an API, or a kernel for processing the event data). In various examples, the characteristics of the event can also comprise one or more of: a first batch size of the event source, a first batch interval of the event source, a second batch size of the event destination, a second batch interval of the event destination, a queue to queue the event data, a name, a time, a version, an account, metadata, and so on. For instance, the pipe component122can receive configuration data212from the client device(s)104and determine the event structure to define characteristics of the event based at least in part on the configuration data212. In some examples, the event structure can include a validation configuration comprising a setting, a threshold, or other information usable to validate event data before sending from the first entity110to the second entity112. The serverless computing environment can present an event-driven architecture that exchanges events between the source and the destination using the pipe106. At1406, the event structure component of the service provider network may associate, as an association, the pipe and the event structure for the event in a container registry. For instance, the event structure component210can store the event structure in the container registry that is accessible to other components of the service provider network (e.g., the pipe component122) that includes an association between a name of the pipe and the event structure. At1408, a pipe component associated with the service provider network may detect the event at the event source that includes the event structure. 
For instance, the pipe component122can receive an API indicating an event at a source (e.g., the first entity110). In various examples, the event may be generated at the first entity110based at least in part on determining a change in a resource state related to a field of an event structure that defines the event. At1410, the service provider network may validate the event data based at least in part on the event structure. For instance, the first entity110can generate the event, and the pipe component122can compare an event structure of the event to the event structure associated with the pipe106to determine that the event data matches the event structure and output a first indication to send the event data to the event destination, or determine that the event data does not match the event structure and output a second indication to refrain from sending the event data to the event destination. At1412, the service provider network may send, based at least in part on the validating, the event data from the event source to the event destination using the pipe. For instance, the pipe component122can cause the generated event data to transmit from the first entity110to the second entity112using the pipe106based at least in part on the first indication. FIG.15illustrates a system and network diagram1500of an example operating environment that includes a service provider network (that may be part of or associated with a cloud-based service network/platform) for implementing the techniques described herein. The service provider network102can include an API gateway1502that may receive an API call and route the API call to a component or service. In various examples, the service provider network102can include the pipe component122which comprises the filter component202, the data format component204, the buffer component206, the batch component208, and the event structure component210. The service provider network102can provide computing resources1506, like VM instances, containers, serverless functions, storage, etc., on a permanent or an as-needed basis. Among other types of functionality, the computing resources1506provided by the service provider network102may be utilized to implement the various cloud-based services. The computing resources provided by the service provider network102can include various types of computing resources, such as data processing resources like VM instances, data storage resources, networking resources, data communication resources, application-container/hosting services, network services, and the like. Each type of computing resource provided by the service provider network102can be general-purpose or can be available in a number of specific configurations. For example, data processing resources can be available as physical computers or VM instances in a number of different configurations. The VM instances can be configured to execute applications, including web servers, application servers, media servers, database servers, some or all of the network services described above, and/or other types of programs. Data storage resources can include file storage devices, block storage devices, and the like. As shown, the service provider network102can include the database312. The service provider network102can also be configured to provide other types of computing resources not mentioned specifically herein. 
The computing resources1506provided by the service provider network102may be enabled in one embodiment by one or more data centers1504A-1504N (which might be referred to herein singularly as “a data center1504” or in the plural as “the data centers1504”). The data centers1504are facilities utilized to house and operate computer systems and associated components. The data centers1504typically include redundant and backup power, communications, cooling, and security systems. The data centers1504can also be located in geographically disparate locations. One illustrative embodiment for a data center1504that can be utilized to implement the technologies disclosed herein will be described below with regard toFIG.16. The data centers1504may be configured in different arrangements depending on the service provider network102. For example, one or more data centers1504may be included in or otherwise make-up an availability zone. Further, one or more availability zones may make-up or be included in a region. Thus, the service provider network102may comprise one or more availability zones, one or more regions, and so forth. The regions may be based on geographic areas, such as being located within a predetermined geographic perimeter. The users and/or admins of the service provider network102may access the computing resources1506provided by the data centers1504of the service provider network102over any wired and/or wireless network(s)118(utilizing a client device104and/or another accessing-user device), which can be a wide area communication network (“WAN”), such as the Internet, an intranet or an Internet service provider (“ISP”) network or a combination of such networks. For example, and without limitation, a device operated by a user of the service provider network102may be utilized to access the service provider network102by way of the network(s)118. It should be appreciated that a local-area network (“LAN”), the Internet, or any other networking topology known in the art that connects the data centers1504to remote clients and other users can be utilized. It should also be appreciated that combinations of such networks can also be utilized. In a distributed computing environment, such as the one included in the service provider network102(e.g., computing-resource network), a fleet of VM instances and/or servers may have workflow or processes executed thereon to manage resources. For instance, a patch may need to be installed on each VM instance and/or resource at a particular time. In such distributed applications of workflows or processes, a load balancer may be at the front end in front of the fleet of servers where a request for a workflow comes in, and the load balancer distributes the request to execute the workflow amongst the servers. FIG.16illustrates a computing system diagram illustrating a configuration for a data center that can be utilized to implement the techniques disclosed herein. The example data center1504shown inFIG.16includes several server computers1602A-1602F (which might be referred to herein singularly as “a server computer1602” or in the plural as “the server computers1602”) for providing computing resources1604A-1604E. In some examples, the resources1604may include, or correspond to, resources associated with the pipe component122or a component thereof. 
The server computers1602can be standard tower, rack-mount, or blade server computers configured appropriately for providing the computing resources described herein (illustrated inFIG.16as the computing resources1604A-1604E). As mentioned above, the computing resources provided by the service provider network102can be data processing resources such as VM instances or hardware computing systems, database clusters, computing clusters, storage clusters, data storage resources, database resources, networking resources, and others. Some of the servers1602can also be configured to execute a resource manager1606capable of instantiating and/or managing the computing resources. In the case of VM instances, for example, the resource manager1606can be a hypervisor or another type of program configured to enable the execution of multiple VM instances on a single server computer1602. Server computers1602in the data center1504can also be configured to provide network services and other types of services, some of which are described in detail below with regard toFIG.17. The data center1504shown inFIG.16also includes a server computer1602F that can execute some or all of the software components described above. For example, and without limitation, the server computer1602F can be configured to execute components of the service provider network102, including the services108. In the example data center1504shown inFIG.16, an appropriate LAN1608is also utilized to interconnect the server computers1602A-1602F. It should be appreciated that the configuration and network topology described herein has been greatly simplified and that many more computing systems, software components, networks, and networking devices can be utilized to interconnect the various computing systems disclosed herein and to provide the functionality described above. Appropriate load balancing devices or other types of network infrastructure components can also be utilized for balancing a load between each of the data centers1504A-1504N, between each of the server computers1602A-1602F in each data center1504, and, potentially, between computing resources in each of the server computers1602. It should be appreciated that the configuration of the data center1504described with reference toFIG.16is merely illustrative and that other implementations can be utilized. FIG.17is a computer architecture diagram showing an illustrative computer hardware architecture for implementing one or more computing devices1700that can be utilized to implement the techniques disclosed herein. The computer architecture shown inFIG.17illustrates a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein. The computing device1700includes a baseboard1702, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)1704operate in conjunction with a chipset1706. The CPUs1704can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computing device1700. The CPUs1704perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. 
Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset1706provides an interface between the CPUs1704and the remainder of the components and devices on the baseboard1702. The chipset1706can provide an interface to a RAM1708, used as the main memory in the computing device1700. The chipset1706can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)1710or non-volatile RAM (“NVRAM”) for storing basic routines that help to startup the computing device1700and to transfer information between the various components and devices. The ROM1710or NVRAM can also store other software components necessary for the operation of the computing device1700in accordance with the configurations described herein. The computing device1700can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network118. The chipset1706can include functionality for providing network connectivity through a network interface controller (NIC1712), such as a gigabit Ethernet adapter. The NIC1712is capable of connecting the computing devices1700over the network118. It should be appreciated that multiple NICs1712can be present in the computing device1700, connecting the computer to other types of networks and remote computer systems. The computing device1700can be connected to one or more computer-readable media1718storing software components for the computer device1700, and one or more mass storage devices1720for storing data. The computer-readable storage media1718can store an operating system1722, programs1724, the API gateway1502, and the pipe component122, which have been described in greater detail herein. The mass storage device1720can be connected to the computing device1700through a storage controller1714connected to the chipset1706. The mass storage device1720can consist of one or more physical storage units. The storage controller1714can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. Generally, the computer-readable storage media1718may store the components described herein as executable, computer-readable instructions. For instance, the components may include the API gateway1502, the pipe component122, or components associated with the pipe component122. The components may be stored and/or executed on a single server, or on a system of two or more computing devices1700. The computing device1700can store data on the mass storage device1720by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. 
Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the mass storage device1720is characterized as primary or secondary storage, and the like. For example, the computing device1700can store information to the mass storage device1720by issuing instructions through the storage controller1714to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computing device1700can further read information from the mass storage device1720by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device1720described above, the computing device1700can have access to the computer-readable storage media1718to store and retrieve information, such as program modules, event structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computing device1700. In some examples, the operations performed by the service provider network102, and or any components included therein, may be supported by one or more devices similar to computing device1700. Stated otherwise, some or all of the operations performed by the service provider network102, and or any components included therein, may be performed by one or more computer devices1700operating in a cloud-based arrangement. As shown, the storage device1720may store the database312that includes information about event structures, pipes, and services (e.g., an association between an event structure and a pipe, characteristics of the pipe, characteristics of the services) as well as rules and access policies. By way of example, and not limitation, computer-readable storage media1718can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. As mentioned briefly above, the mass storage device1720can store an operating system1722utilized to control the operation of the computing device1700. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. 
It should be appreciated that other operating systems can also be utilized. The mass storage device1720can store other system or application programs and data utilized by the computing device1700. In one embodiment, the mass storage device1720or other computer-readable storage media1718is encoded with computer-executable instructions which, when loaded into the computing device1700, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computing device1700by specifying how the CPUs1704transition between states, as described above. According to one embodiment, the computing device1700has access to computer-readable storage media storing computer-executable instructions which, when executed by the computing device1700, perform the various processes described above with regard toFIGS.1-16. The computing device1700can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computing device1700can also include one or more input/output controllers1716for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller1716can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computing device1700might not include all of the components shown inFIG.17, can include other components that are not explicitly shown inFIG.17, or might utilize an architecture completely different than that shown inFIG.17. In various examples, the service provider network may be part of or associated with a cloud-based service network that can be configured to implement aspects of the functionality described herein. The service provider network102can provide computing resources, like physical servers, VM instances, containers, serverless functions, network functions, and storage, on a permanent or an as-needed basis. Among other types of functionality, the computing resources provided by the service provider network102may be utilized to implement the various services described above. The computing resources provided by the service provider network102can include various types of computing resources, such as data processing resources like VM instances, data storage resources, networking resources, data communication resources, application-container/hosting services, network services, and the like. Each type of computing resource provided by the service provider network102can be general-purpose or can be available in a number of specific configurations. For example, data processing resources can be available as physical computers or VM instances in a number of different configurations. The VM instances can be configured to execute applications, including web servers, application servers, media servers, database servers, some or all of the network services described above, and/or other types of programs. Data storage resources can include file storage devices, block storage devices, and the like. The service provider network102can also be configured to provide other types of computing resources not mentioned specifically herein. 
The computing resources provided by the service provider network102may be enabled in one embodiment by one or more data centers1504(which might be referred to herein singularly as “a data center1504” or in the plural as “the data centers1504”). The data centers1504are facilities utilized to house and operate computer systems and associated components. The data centers1504typically include redundant and backup power, communications, cooling, and security systems. The data centers1504can also be located in geographically disparate locations. While the foregoing invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative some embodiments that fall within the scope of the claims of the application. The methods described herein represent sequences of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. In some embodiments, one or more operations of the method may be omitted entirely. Moreover, the methods described herein can be combined in whole or in part with each other or with other methods. The various techniques described herein may be implemented in the context of computer-executable instructions or software, such as program modules, that are stored in computer-readable storage and executed by the processor(s) of one or more computing devices such as those illustrated in the figures. Generally, program modules include routines, programs, objects, components, data structures, etc., and define operating logic for performing particular tasks or implement particular abstract data types. Other architectures may be used to implement the described functionality and are intended to be within the scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, the various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. Similarly, software may be stored and distributed in various ways and using different means, and the particular software storage and execution configurations described above may be varied in many different ways. 
Thus, software implementing the techniques described above may be distributed on various types of computer-readable media, not limited to the forms of memory that are specifically described.
131,790
11861422
The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features.

DETAILED DESCRIPTION

The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art. Embodiments will now be described by way of example only. As discussed earlier, virtual platforms enable system developers to overcome significant challenges such as testing, developing, debugging, and maintaining various complex electronic systems. Virtual platforms are software devices that can fully mirror the functionality of a System-on-Chip (SoC). The virtual platforms may comprise high-speed processor simulators and device models that simulate the behaviour of an actual hardware device. There are different types of virtual platforms: those that are targeted at being close representations of the hardware and those that are targeted at major software development. A virtual platform supports software simulation of different hardware components, allowing software targeting particular hardware to be run and debugged unmodified in a purely software-simulated environment. Examples of when this is useful include when the hardware is only available in limited quantities, or is still under development. Hence the virtual platform can act as a substitute for the actual hardware device, enabling a system developer to control, monitor or analyse the simulated hardware device. However, in the case of testing and debugging of electronic systems such as communication systems, there is a need to enable communication between different virtual platforms to test the communication. Some electronic systems (such as computer devices) are capable of running/hosting multiple virtual platforms on the same hardware platform. Each virtual platform is one or more self-contained processes run by an operating system on a physical processor in such a system. That means a virtual platform is a piece of software operating in its own address space. Thus, two different virtual platforms operate on two separate address spaces, and one virtual platform is not capable of accessing the data in the address space of another virtual platform. There are various predefined mechanisms or ways for enabling communication between different virtual platforms. Some of such mechanisms include pipes, shared files, and shared memory. Piping is a mechanism for inter-process communication using message passing. The mechanism of pipes is a traditional way in which a set of processes are chained together such that text can be passed between them through a file-like interface. The mechanism of shared files is where two processes synchronize their access to a single file in the filesystem. Similarly, shared memory is a mechanism for inter-process communication (IPC), i.e. a way of exchanging data between programs (such as virtual platforms) running on a physical processor at the same time.
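At host operating system level, this kind of shared-memory IPC is conventionally set up with calls such as shm_open, ftruncate and mmap. The following minimal C sketch illustrates the general mechanism only; the object name, size and program structure are illustrative assumptions and are not part of the virtual platforms described herein.

```c
/* Minimal sketch of conventional POSIX shared-memory IPC between two host
 * processes (for instance, two virtual platforms). The object name and size
 * are illustrative assumptions. Run once with "create", then once without.
 * On Linux, link with -lrt if required by the toolchain. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/vp_shared_example"   /* hypothetical object name */
#define SHM_SIZE 4096

int main(int argc, char **argv)
{
    int creator = (argc > 1 && strcmp(argv[1], "create") == 0);

    /* The first process creates the shared object; later processes open it. */
    int fd = shm_open(SHM_NAME, creator ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (creator && ftruncate(fd, SHM_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Both processes map the same physical pages into their own address spaces. */
    char *mem = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    if (creator)
        strcpy(mem, "data written by the first process");
    else
        printf("read from shared memory: %s\n", mem);

    munmap(mem, SHM_SIZE);
    close(fd);
    return 0;
}
```

As the description notes, raw shared memory of this kind provides no synchronization by itself, which is the problem the register portion introduced later is intended to address.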
Shared memory enables sharing a portion or a part of physical memory (of the electronic system running the virtual platforms) across the virtual platforms or within a virtual platform. In an implementation, the process may create an area in the memory (such as RAM) which other processes can access. In another implementation shared memory is a method of conserving memory space by directing accesses to copies of a piece of data to a single instance instead, by using virtual memory mappings. This second implementation is most often used for shared libraries and for Execute in Place (XIP). Shared memory is a faster way of enabling communication between processes such as virtual machines compared to piping and shared files as the processes can access the shared memory area like regular working memory. There is almost no computational overhead while using shared memory. Further, the shared memory enables maximum efficient utilization of the resources (such as available physical memory) of the device. On the other hand, communication using shared memory is less scalable, as for example the communicating processes must be running on the same electronic system and care must be taken to avoid potential conflict if processes sharing memory are running on separate processors and the underlying architecture is not cache coherent. While using shared memory, one of the processes (a first virtual platform) tries to communicate with another process (a second virtual platform) by writing data into the shared memory. However, there are no mechanisms to synchronize the communication between the first and second virtual platforms using the shared memory. Therefore, yet another process (a third virtual platform) capable of accessing the shared memory may overwrite the data written by the first virtual platform. Thus, integrity or protection of data is affected while communicating via the shared memory. The inventors have devised a method of utilizing a shared memory-based mechanism efficiently to achieve faster communication between the virtual platforms without compromising the security and integrity of the data transferred. The inventors formulated that by using a dedicated address space of a memory allocated to a simulated device as a shared memory it is possible to perform secure transactions between multiple virtual platforms. The shared memory allocated to the simulated device is referred to as shared device memory. Further, separating the shared device memory in a particular manner into different portions and using one for transferring the data and the other for synchronizing the communication, the shared device memory can be efficiently used for communication between virtual platforms. The detailed explanation of how this is achieved is provided below with reference to the description of the figures. FIG.1illustrates a block diagram of a computer system100running a plurality of virtual platforms (processes). The computer system100comprises a physical processor102and a physical memory104. The computer system may in some examples include one or more physical processors or one or more physical memories. The physical processor102may be a microprocessor, controller, or any other suitable type of processor for processing computer executable instructions to control the operation of the computer system100. Examples of the physical processor102include, but are not limited to, an x86 processor, a RISC-V® processor or an ARM® processor. 
The physical processor102runs one or more host operating systems dictating tasks to the physical processor. The host operating system could be any operating system such as, but not limited to, Linux or Windows. The physical processor102is coupled to a physical memory104. In some examples, the physical processor may be coupled to one or more physical memories. The memory may be implemented using any suitable type of memory such as, but not limited to, Random Access Memory (RAM), Flash, or any other physical memory modules. The physical processor102runs or executes two or more virtual platforms106a,106b. . .106n. Each virtual platform simulates or virtualizes the behaviour of an actual electronic device. Though virtual platforms can be used for simulating, testing, and debugging different types of electronic devices, in the examples described herein, the virtual platforms are considered to simulate the behaviour of actual communication devices. This comprises simulating devices such as Wi-Fi® devices, wired ethernet or Bluetooth® devices, or other communication device hardware. The physical memory104may comprise a plurality of memory portions108. Each virtual platform (process) is allocated at least one separate, non-overlapping, memory portion (i.e. address spaces) from the plurality of memory portions108. Thus, each virtual platform runs in parallel without interrupting each other. Each virtual platform, as discussed above are one or more stand-alone self-contained processes, not interacting with each other. One virtual platform cannot access the memory portions allocated to the other virtual platforms. Thus, to make the virtual platforms to communicate with each other, a dedicated method of communication needs to be established. The two or more virtual platforms106a,106b. . .106nherein are made to communicate with each other using a shared memory-based mechanism. The two or more virtual platforms106a,106b. . .106nas shown inFIG.1are coupled to the physical memory104of the computer system100. Separate memory portions of the physical memory104are allocated to each virtual platform. A predefined address space of a memory portion allocated to one of the virtual platforms is assigned as the shared device memory110for enabling communication via a shared memory-based mechanism. In an example, the two or more virtual platforms running on computer system100may be implemented to test the communication between two or more communication devices. The computer system100may be configured to simulate one to one, or one to many communication between the two or more communication devices, thereby testing and debugging the transmission and reception of signals (data packets) between two or more communication devices and the functioning of various parts of the communication devices. The detailed explanation of virtual platforms, as well as interconnections between the virtual platforms and the memory104, are explained with reference toFIG.2.FIG.2is a block diagram of an example of a computer system100running two virtual platforms. FIG.2illustrates two virtual platforms (processes)106aand106brunning on the physical processor of the computer system. As explained with reference toFIG.1, there can be two or more virtual platforms running on the computer system100at the same time. Each virtual platform comprises a processor simulator202, a simulated communication device204, and an interface206. The processor simulator202is a piece of software simulating the behaviour of an actual physical processor. 
The processor simulator202, in each virtual platform, simulates a virtual processor thereby enabling the virtualization of an actual physical processor. The processor simulator may be any suitable simulator including, but not limited to, Quick Emulator (QEMU) or gem5. The processor simulators such as QEMU execute one or more virtual processors in parallel. The processor simulators can interface with many types of physical host hardware, including the hard disks or physical memory, CD-ROM drives, network cards, audio interfaces, and USB devices. The virtual processor simulated by the processor simulator202may be of the same type or different type of processor compared to the physical processor102. In an example, consider the physical processor102is an x86 processor. The virtual processor simulated could be an ARM® processor, RISC-V® processor, MIPS® processor, or even a different x86 processor. The processor simulator202simulates the virtual processors and enables them to run a variety of guest operating systems. The processor simulator may run none, one or more than one guest operating system on the virtual processor. The guest operating system may be any suitable operating system such as, but not limited to, Windows or Linux. The guest operating systems may be same as the host operating system running on the physical processor or may be different from the host operating system. The processor simulator202on different virtual platform could be a different type of processor simulator. Further, the processor simulator on different virtual platforms could simulate the same or different virtual processors executing same or different operating systems. Each virtual platform further comprises a simulated communication device204simulated by a device simulator. In some examples, there could be more than one simulated communication device on a virtual platform. The simulated communication device may be the simulation of any type of communication device hardware, as discussed earlier, including but not limited to, Wi-Fi®, wired ethernet, or Bluetooth®. To simulate a communication device, the device simulator may simulate dedicated hardware of the communication device and run a plurality of lines of code developed for the corresponding hardware on a further simulated processor. This simulated processor is different from the virtual processor simulated by the processor simulator202. In an example, device simulator may be a Radio Processing Unit (RPU) simulator simulating an RPU having a MIPS processor, and using a TCP/IP communication protocol for communication. The simulated communication device204is further configured to run firmware on the simulated processor associated with the simulated communication device. The firmware is a set of instructions running on a device that provides low-level control of that device. The firmware running on the simulated communication device uses the shared device memory110to enable communication between a virtual platform and at least one other virtual platform running on the computer system100. That is, the firmware interacts with the simulated communication device to transmit or receive a signal (or a data packet). In the example shown inFIG.2, the simulated communication device204is not running as stand-alone executable code. The simulated communication device is run as a shared library that interacts with the processor simulator202through an interface206. 
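Although the description does not specify the programming interface between the device library and the processor simulator, one plausible shape for such an interface is a small set of C callbacks resolved when the shared library is loaded. The sketch below is purely illustrative; none of these type or function names come from the description or from any particular simulator such as QEMU or gem5.

```c
/* Hypothetical interface between a processor simulator and a simulated
 * communication device built as a shared library. All names and signatures
 * are illustrative assumptions, not an actual simulator API. */
#include <stdint.h>

typedef struct sim_device sim_device;          /* opaque device model state */

/* Callbacks supplied by the processor simulator to the device library,
 * e.g. for raising an interrupt line towards the virtual processor. */
typedef struct sim_host_ops {
    void *host;                                /* simulator-private context */
    void (*raise_irq)(void *host, unsigned irq);
} sim_host_ops;

/* Operations implemented by the device library and called by the processor
 * simulator when the virtual processor accesses the device's MMIO window or
 * when simulated time advances by one clock cycle. */
typedef struct sim_device_ops {
    uint32_t (*mmio_read)(sim_device *dev, uint32_t offset);
    void     (*mmio_write)(sim_device *dev, uint32_t offset, uint32_t value);
    void     (*tick)(sim_device *dev);         /* e.g. poll shared registers */
} sim_device_ops;

/* Entry point the processor simulator could resolve with dlsym() after
 * loading the device model shared library with dlopen(). */
sim_device *sim_device_create(const sim_host_ops *host,
                              const sim_device_ops **ops_out);
```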
The interface may be any type of suitable interface, such as but not limited to, Peripheral Component Interconnect express (PCIe), Memory-Mapped Input/Output, (MMIO), Port Mapped Input/output (PMIO), or USB. The simulated communication device204when connected to the processor simulator202via the interface behaves like an actual communication device. The processor simulator202further executes a device driver on the virtual processor and the device driver drives the simulated communication device to communicate with the simulated communication devices on other virtual platforms. Thus, the virtual processor behaves as an actual processor with an operating system on which a final embedded system of a device driver of an actual communication device hardware is expected to run. The virtual platform thus simulates or creates the virtualization of a preferable actual hardware processor with simulations of one or more devices mapped on it that represents a final device on silicon. Further, as shown inFIG.2, the virtual processor simulated by the processor simulator202and the simulated communication device204in each virtual platform is coupled to separate memory portions in the physical memory104. The processor simulator202allocates a portion of the physical memory to the virtual processor. Thus, the virtual processor behaves like an actual processor coupled to an actual memory. Similarly, the device simulator executing the simulated communication device allocates another portion of the physical memory to the simulated communication device204(i.e. to the processor associated with the communication device204). The physical memory104may be partitioned into a plurality of memory portions. Each memory portion represents a subset of the total memory. The multiple memory portions may be of the same size or may be different sizes. The memory portions may form a contiguous block of memory; or the memory portions may be separated from each other by other memory modules or components. In the example shown inFIG.2, the memory104is divided into multiple memory portions that are separate from each other. However, it will be evident to a person of skill in the art that this is an example only and that the memory104may not be divided or may be divided into more or fewer memory blocks and may be contiguous. In the example shown inFIG.2, a first virtual processor run by the processor simulator202in a first virtual platform106ais coupled to a first memory portion208ain the memory104. The simulated communication device204in the first virtual platform106ais coupled to a second memory portion208b. Further, a second virtual processor run by the processor simulator202in a second virtual platform106bis coupled to a third portion208cand the simulated communication device204in the second virtual platform106bis coupled to a fourth memory portion208d. This is an example, and the computer system may have n number of virtual platforms, with the virtual processor and the simulated communication device in each virtual platform allocated with separate memory portions in the physical memory104. In other words, each virtual platform is configured to access separate memory portions of the physical memory104of the computer system100. A predefined range of addresses of a memory portion allocated to at least one of the virtual platforms is configured as a shared device memory110to enable communication between the two or more virtual platforms through shared memory-based mechanism. 
More specifically, shared device memory110is configured as a predefined range of addresses of a memory portion allocated to a simulated communication device in a virtual platform. There may be more than one shared device memory configured between two or more communication devices. The virtual platform whose allocated memory portion is configured as the shared device memory is assigned the responsibilities to manage, update, or delete the shared device memory. The shared device memory thus configured is visible to other virtual platforms among the two or more virtual platforms. These other virtual platforms are capable of reading from and writing into the shared device memory. The detailed explanation of the shared device memory110is explained in detail with respect toFIG.3. In the example inFIG.2, the predefined range of addresses of the second memory portion208ballocated to the simulated communication device204in the first virtual platform106ais configured as a shared device memory110. As shown inFIG.2, the shared device memory110is also accessible by the second virtual platform106b. More specifically, the shared device memory is accessible by the simulated communication device204in the second virtual platform106b. Similarly, when there are two or more virtual platforms executing on the computer system100, the shared device memory is configured to be accessible by the simulated communication devices on each of the virtual platforms. The computer system enabling communication between the two or more virtual platforms is explained in detail with reference toFIG.3. For simplicity,FIG.3illustrates only two virtual platforms302aand302binterconnected to each other through a shared device memory304. The two virtual platforms302aand302bcorresponds to the virtual platforms described inFIGS.1and2. The two virtual platforms302aand302bas discussed in the above paragraphs, run on the physical processor of the computer system300. As described above, each virtual platform comprises a virtual processor (308a,308b) (simulated by a processor simulator (306a,306b)) and a simulated communication device (310a,310b) having a processor running firmware (312a,312b). In each virtual platform, the processor simulator is interconnected to the simulated communication device through a standard interface (314a,314b). Each virtual processor runs a device driver (316a,316b) through an interface. The device driver is a computer program that drives the simulated communication device that is connected to the virtual processor. The device driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used. The device driver communicates with the simulated communication device via the interface (314a,314b). The device driver interacts with the firmware to drive the firmware to initiate the corresponding simulated communication device to communicate with simulated communication devices in the other virtual platforms. The device driver is software specifically configured to interact with and control a transmitter/receiver in a communication device. The device drivers usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface. Further, as described above, the two virtual platforms are configured to access separate portions of the physical memory of the computer system300. 
The physical memory comprises predefined addresses of a memory portion configured as shared device memory304(configured as the shared device memory110defined inFIG.1andFIG.2) accessible by both virtual platforms. In the example shown inFIG.3, consider that the predefined address of a part of the memory portion allocated to the simulated communication device310ain the first virtual platform302ais configured as a shared device memory304, such that the shared device memory304is visible to both the virtual platforms (302aand302b). The shared device memory304may be further arbitrarily partitioned into a data portion318and a register portion320. The data portion318in the shared device memory is configured such that it is accessible by the two or more virtual platforms running on the computer system. The data portion318is configured for transferring data between the virtual platforms. The virtual platforms are enabled to transfer the data into the data portion or access data from the data portion while communicating with each other. The register portion320comprises a plurality of registers. The simulated communication device in each virtual platform is allocated one or more registers among the plurality of registers. The register portion320is used for synchronizing the communication between the virtual platforms. The register portion320is a region or portion of the shared device memory304configured to enable the simulated communication devices in the virtual platforms to send signals, interrupts, and/or other information to other virtual platforms. The method of enabling communication between the virtual platforms302aand302bis explained in detail below. Consider an example in which the first virtual platform302ais transmitting a data packet to the second virtual platform302b. The communication can happen in either direction. In other examples, a virtual platform may transmit the data packet to multiple other virtual platforms at the same time. When the first virtual platform302acommunicates with the second virtual platform302b, the device driver316arunning on the virtual processor308ainitiates the first virtual platform302ato transfer a data packet to the second virtual platform302b. The device driver316ainteracts with the firmware312arunning on the simulated processor associated with the simulated communication device310ain the first virtual platform302aand drives the firmware312ato interact with the simulated communication device310ato initiate a transmission (transfer of a data packet). The transfer is performed by copying the data packet from a memory portion allocated to the first virtual platform302ato the shared device memory304. More specifically, the firmware312acopies the data packet from the memory portion allocated to the simulated communication device310ato the data portion318in the shared device memory visible to the second virtual platform302b. Further, the communication between the first virtual platform302aand the second virtual platform302bmay be synchronized using the register portion320of the shared device memory. As discussed, one or more registers in the register portion320are allocated to each virtual platform. The one or more registers allocated to a virtual platform are mapped to the simulated processor associated with the communication device. The one or more registers allocated to each virtual platform are monitored by the corresponding virtual platform in every simulated clock cycle to decide whether any action needs to be performed.
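One way to picture the split between the data portion318and the register portion320is as a C structure overlaid on the shared device memory. The layout below is only a sketch; the sizes, field names, and the number of supported virtual platforms are assumptions rather than values given in the description.

```c
/* Illustrative overlay of the shared device memory (304): a data portion for
 * packet payloads and a register portion with one register set per virtual
 * platform. MAX_VPS and DATA_PORTION_SIZE are assumed values. */
#include <stdint.h>

#define MAX_VPS            8
#define DATA_PORTION_SIZE  (64 * 1024)

struct vp_registers {
    volatile uint32_t vp_index;      /* identifies the reading virtual platform  */
    volatile uint32_t vp_int_status; /* non-zero: a data packet awaits this VP   */
    volatile uint32_t vp_int_ack;    /* non-zero: this VP has consumed the packet */
};

struct shared_device_memory {
    uint8_t data[DATA_PORTION_SIZE];        /* data portion (318)     */
    struct vp_registers regs[MAX_VPS];      /* register portion (320) */
};
```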
For example, the simulated communication device in each virtual platform polls for any interrupt. The one or more registers are configured to indicate/trigger a virtual platform to access a transferred data packet. The first virtual platform302atriggers the second virtual platform302bto access a data packet sent to it. This is performed by the firmware312acausing the simulated processor of the first virtual platform302ato write a value into one or more registers allocated to the simulated communication device310bto indicate that the data packet is stored in the data portion318and ready to be transferred to the second virtual platform302b. In an example, the firmware312awrites a non-zero value to one or more registers allocated to the simulated communication device310b. The communication between the virtual platforms can be synchronized using different methods based on the communication device or the simulated processor. An example method of synchronising the communication using the register portion by writing into one or more registers is explained with reference toFIG.6and based on Table 1 given below.

TABLE 1
Name of register          Address                    Details
VP_INDEX                  D1_1000                    Returns the index for the running virtual platform.
VP_INT_STATUS[1 . . . n]  D1_1004 + [(n − 1) × 8]    Writing any non-zero value to this register raises an external interrupt on virtual platform [1 . . . n].
VP_INT_ACK[1 . . . n]     D1_1008 + [(n − 1) × 8]    Writing any non-zero value to this register clears the associated external interrupt status.

In the example in Table 1, consider that the register portion is allocated the address space from D1_1000 to D1_1100. D1 indicates that the address space is a part of the memory portion allocated to the simulated communication device of the first virtual platform302a. The address D1_1000 indicates that the register portion starts from the memory address 1000 in the memory portion allocated to the simulated communication device. However, in other examples, the shared device memory, and therefore the register portion, can be a part of a memory portion allocated to a simulated communication device in any other virtual platform running on the computer system. Each virtual platform running on the computer system checks its allocated registers in every simulated clock cycle. The simulated clock cycle is the time taken by the simulated processor to complete the execution of an instruction. In the example provided in Table 1, three registers are allocated to each virtual platform. In some other examples, any number of registers in the register portion can be allocated to each virtual platform. A first register allocated to each virtual platform is an index register (VP_INDEX) providing the index value of each virtual platform running on the computer system. This is shown in the first row of Table 1. The VP_INDEX register allocated to each virtual platform may, in actual hardware, occur at different addresses. The value of VP_INDEX allocated to each virtual platform identifies it as the first virtual platform, the second virtual platform, and so on. In the simulated virtual platforms, the VP_INDEX may be a single register allocated to each virtual platform at an address D1_1000. In that case, the VP_INDEX would be programmed to return different values to each virtual platform when the virtual platform reads the VP_INDEX register. When each virtual platform checks the register VP_INDEX, the register returns an index value to the corresponding virtual platform.
When the first virtual platform checks the VP_INDEX, the register returns an index value for the first virtual platform, which identifies it as the first virtual platform. Similarly, the second running virtual platform is returned a VP_INDEX value identifying it as the second virtual platform. A second register allocated to each virtual platform is an interrupt status register (VP_INT_STATUS[1 . . . n]), which provides an interrupt status to each virtual platform. In Table 1, the registers VP_INT_STATUS[1 . . . n] for the two or more virtual platforms (n virtual platforms) occur at different addresses. The register VP_INT_STATUS1 for the first virtual platform is given as the address D1_1004 (i.e. D1_1004+(0×8)). Similarly, for the second virtual platform, the VP_INT_STATUS2 would occur at the address D1_1012 (i.e. D1_1004+(1×8)). Any value written into the VP_INT_STATUS raises an interrupt to the corresponding virtual platform. In an example, a non-zero value is written into the register VP_INT_STATUS to raise an interrupt to the corresponding virtual platform. A third register allocated to each virtual platform is an acknowledge register (VP_INT_ACK[1 . . . n]). When an interrupt is received by a virtual platform, the virtual platform acknowledges the interrupt by writing back into the register VP_INT_ACK[1 . . . n]. The first virtual platform is allocated a VP_INT_ACK1 register at the address D1_1008 (i.e. D1_1008+(0×8)). The first virtual platform acknowledges an interrupt raised to it by writing back into the VP_INT_ACK1 register at address D1_1008. The second virtual platform is allocated VP_INT_ACK2 at address D1_1016 and a third virtual platform is allocated VP_INT_ACK3 at address D1_1024, and so on.

FIG.6illustrates the above-mentioned registers allocated to the virtual platforms. InFIG.6, the first virtual platform302ais allocated a VP_INDEX1 register which identifies it as the first virtual platform. Similarly, the second virtual platform302bis allocated a VP_INDEX2 register which identifies it as the second virtual platform. As mentioned above, in a simulated virtual platform (unlike in the actual hardware), the VP_INDEX register could also be a single register in the register portion allocated to each virtual platform which is programmed such that the VP_INDEX returns a different value to each virtual platform.FIG.6shows the interrupt status registers (VP_INT_STATUS[1 . . . n]) and acknowledge registers (VP_INT_ACK[1 . . . n]) as a part of the register portion320. The same set of registers are also shown in the first virtual platform302aand the second virtual platform302bbecause these registers are visible to each virtual platform. The example shown inFIG.6illustrates an example method of synchronizing the communication between the first virtual platform and the second virtual platform on the computer system300inFIG.3. The simulated communication devices in the first and second virtual platforms check the allocated registers in every simulated clock cycle. As discussed in the above paragraphs, when the simulated communication device310apolls the allocated register in a simulated clock cycle, the register VP_INDEX returns an index value to the simulated communication device310aindicating it as the first virtual platform. Similarly, when the simulated communication device310bpolls the allocated register in a simulated clock cycle, the register VP_INDEX returns an index value to the simulated communication device310bindicating it as the second virtual platform.
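The register addressing of Table 1 can be restated as a pair of C macros; the macro names are illustrative, and the offsets simply reproduce the table's arithmetic of a stride of 8 per virtual platform relative to the start of the register portion at D1_1000.

```c
/* Register offsets within the memory portion of the first simulated
 * communication device (the D1_ prefix), as listed in Table 1. */
#define VP_INDEX_ADDR     1000u                      /* D1_1000               */
#define VP_INT_STATUS(n)  (1004u + ((n) - 1u) * 8u)  /* D1_1004 + (n - 1) * 8 */
#define VP_INT_ACK(n)     (1008u + ((n) - 1u) * 8u)  /* D1_1008 + (n - 1) * 8 */

/* For example, VP_INT_STATUS(2) evaluates to 1012 and VP_INT_ACK(2) to 1016,
 * matching the addresses D1_1012 and D1_1016 used in the description above. */
```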
As discussed above, on transferring a data packet from the simulated communication device310ain the first virtual platform302ato the simulated communication device310bin the second virtual platform302b, the firmware312awrites a value into the VP_INT_STATUS2 at address D1_1012 allocated to the simulated communication device310bin the second virtual platform302bto indicate the transfer. The simulated communication device310awrites the value to VP_INT_STATUS2 at address D1_1012. The simulated communication device310bchecks the VP_INT_STATUS2 during each simulated clock cycle. On identifying a value written in the VP_INT_STATUS2, the simulated communication device310braises an interrupt to the simulated processor of the simulated communication device310b. The interrupt is raised at the interrupt pin INT2604. The simulated processor in the simulated communication device310bidentifies the interrupt as an external interrupt and immediately takes the action based on the interrupt. On receiving an interrupt by the simulated processor, the firmware312binteracts with the simulated communication device310band accesses the data packet written the data portion318of the shared device memory304. The firmware312brunning on the simulated processor acknowledges the raised interrupt by writing back into the register VP_INT_ACK2 at the address D1_1016 to clear the interrupt raised (i.e. the simulated processor in second communication device310b). The simulated communication device310amay check the acknowledgement register allocated to the simulated communication device310bafter transferring the data to the second virtual platform. When the simulated communication device310aidentifies a value written to VP_INT_ACK2 at address D1_1016, the simulated processor in the simulated communication device310aidentifies that the transfer of the data is complete. The transmission is thus synchronized using the register portion320. The second virtual platform302bis also capable of transmitting back a data packet in a similar manner using the shared memory-based mechanism to the first virtual platform302a. In that case, the second virtual platform copies a data packet from the memory portion allocated to the simulated communication device310bto the shared device memory304and writes a value into the register VP_INT_STATUS1 allocated to the first virtual platform302a. Writing the value to the allocated register VP_INT_STATUS1 by the second virtual platform302bcauses an interrupt to be raised. The simulated communication device310achecks the VP_INT_STATUS1 during each simulated clock cycle. On identifying a value written in the VP_INT_STATUS1, the simulated communication device310araises an interrupt to the simulated processor of the simulated communication device310a. The interrupt is raised at the interrupt pin INT1602. The simulated processor in the simulated communication device310aidentifies the interrupt as an external interrupt and immediately takes the action based on the interrupt. On receiving an interrupt, the firmware312ainteracts with the simulated communication device310ato access the data packet written in the shared device memory304. The simulated processor acknowledges the interrupt by writing back to the register VP_INT_ACK1 at address D1_1008 to acknowledge and clears the interrupt raised. The simulated communication device310bchecks the register VP_INT_ACK1 after transferring the data to the first virtual platform. 
When the simulated communication device310bidentifies a value written to VP_INT_ACK1 at address D1_1008, the simulated processor in the simulated communication device310bwould consider that the transfer of the data is complete. The transmission is thus synchronized using the register portion320. The communication protocol may determine the virtual platforms to which a particular (say first) virtual platform needs to transfer the data. Further, if there are multiple virtual platforms running on the computer system300, a virtual platform may be enabled to communicate with multiple other virtual platforms at the same time. Suppose there are four virtual platforms running on the computer system300, and the first virtual platform is required to communicate with the second and fourth virtual platforms. The device driver316ainteracts with the firmware312arunning on the simulated communication device310ato transfer a data packet to the second and fourth virtual platforms. The firmware312acopies the data packet to the data portion318of the shared device memory304. The firmware312afurther writes into the registers VP_INT_STATUS2 and VP_INT_STATUS4 at addresses D1_1012 and D1_1028 respectively. The simulated communication devices in the second virtual platform and the fourth virtual platform poll the allocated registers in the register portion320on every simulated clock cycle. When the second virtual platform checks the register VP_INDEX2 on a simulated clock cycle, the register returns a value identifying it as the second virtual platform. When the fourth virtual platform checks the register VP_INDEX4 on a simulated clock cycle, the register returns a value identifying it as the fourth virtual platform. Further, when the first virtual platform302awrites into the corresponding allocated registers VP_INT_STATUS2 and VP_INT_STATUS4 (at addresses D1_1012 and D1_1028), it causes an interrupt to be raised to the corresponding simulated processors of the simulated communication devices of the second virtual platform302band the fourth virtual platform. Thus, when an interrupt is raised to the simulated processor of the simulated communication device310b, the simulated processor reads the transmitted data packet from the shared device memory304. Similarly, when an interrupt is raised to the simulated processor of the simulated communication device on the fourth virtual platform, that simulated processor reads the transmitted data packet from the shared device memory304. Further, the second virtual platform writes back to the register VP_INT_ACK2 at address D1_1016 to acknowledge the interrupt raised to the simulated processor of the simulated communication device310b. Similarly, the fourth virtual platform writes back to the register VP_INT_ACK4 at address D1_1032 to acknowledge the interrupt raised to the simulated processor of its corresponding simulated communication device. Thus, by allocating predefined addresses of device memory (i.e. the memory portion allocated to a simulated communication device in a virtual platform), it is possible to transfer the data packet efficiently and also synchronize the communication between virtual platforms, thereby making the communication much faster and more reliable. This synchronization eliminates the situation where another virtual platform overwrites the shared device memory before the data has been transferred to the intended virtual platform. The above described method inFIG.6is an example.
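The handshake described above might be summarised by the following C sketch, reusing the illustrative shared_device_memory layout introduced earlier. The function names are hypothetical, and a real firmware implementation would also have to handle packet framing, queuing of multiple outstanding packets and memory ordering.

```c
/* Sketch of the transmit / interrupt / acknowledge handshake over the shared
 * device memory. Types repeat the earlier illustrative layout; all names are
 * assumptions rather than identifiers from the description. */
#include <stdint.h>
#include <string.h>

#define MAX_VPS 8
#define DATA_PORTION_SIZE (64 * 1024)

struct vp_registers {
    volatile uint32_t vp_index, vp_int_status, vp_int_ack;
};

struct shared_device_memory {
    uint8_t data[DATA_PORTION_SIZE];        /* data portion (318)     */
    struct vp_registers regs[MAX_VPS];      /* register portion (320) */
};

/* Sender side: the firmware copies the packet into the data portion and writes
 * a non-zero value to the target platform's interrupt status register. */
static void vp_send(struct shared_device_memory *shm, unsigned target_vp,
                    const void *packet, size_t len)
{
    memcpy(shm->data, packet, len);
    shm->regs[target_vp].vp_int_ack = 0;
    shm->regs[target_vp].vp_int_status = 1;   /* raises the external interrupt */
}

/* Receiver side: polled once per simulated clock cycle by the simulated
 * communication device; returns non-zero when a packet was consumed. */
static int vp_poll(struct shared_device_memory *shm, unsigned my_vp,
                   void *out, size_t len)
{
    if (shm->regs[my_vp].vp_int_status == 0)
        return 0;                             /* nothing pending this cycle     */
    memcpy(out, shm->data, len);              /* access the transferred packet  */
    shm->regs[my_vp].vp_int_status = 0;
    shm->regs[my_vp].vp_int_ack = 1;          /* acknowledge back to the sender */
    return 1;
}

/* Sender side: checking the target's acknowledge register tells the sender
 * that the transfer is complete and the data portion may be reused. */
static int vp_transfer_complete(const struct shared_device_memory *shm,
                                unsigned target_vp)
{
    return shm->regs[target_vp].vp_int_ack != 0;
}
```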
However, it is evident that a person skilled in the art may use other methods for synchronising the transfer between two virtual platforms based on the type of the communication device and different modes of communication between them. Further, the output of the computer system300can be obtained for verification, correction and development of actual hardware electronic device in order to modify or design the actual hardware before manufacturing. For example, the simulated communication device (310a,310b) in a virtual platform may provide output (such as emitting log files) that can potentially capture every step during the execution, including memory transaction, PC fetch etc to a debugger (322a,322b). The debugger analyses the output received from the simulated communication device to verify if the communication between two or more virtual platforms was successful and output a corresponding indication. The contents of the shared device memory depicting the communication between the virtual platforms may be received by a debugger (322a,322b) connected to the simulated communication device (310a,310b). The contents of the shared device memory304may be copied to a log folder and can be used to identify the errors in the transmission and reception. The debugger may be used to debug the errors occurring while transmission or reception of the data and output an indication. An example of the debugger is a Codescape™ debugger. The debugger may be used for various applications such as IP evaluation, SoC design, driver development, application development, or code optimization. The debugger may be capable of performing step debugging, memory view and all the standard debugging capabilities. Thus, based on the indication from the debugger, the functioning of the simulated communication device is modified to correct any errors in communication. Further, the necessary modifications are made to the design of the simulated communication device to make the device work correctly and efficiently before implementing the actual communication device on a SoC. Further, the device driver and the firmware are identical to those deployed on the actual device hardware and hence the simulations are accurate for testing the real software of the device driver and the firmware. The transfer of data packet from the memory portion allocated to the simulated communication device to the shared device memory is explained in detail with reference toFIG.4. The simulated communication device, as discussed in the above paragraphs, is the simulation of any communication device hardware such as Wi-Fi®, wired ethernet, or Bluetooth. Specifically, simulation of communication device hardware is the simulation of the communication protocol used by the actual communication device. Different communication devices communicate using different communication protocols. Communication protocols such as IEEE, IETF, OSI or IP protocols define the rules and conventions for exchanging information between communication devices through a network or other media. Communication devices typically use a set of cooperating protocols, sometimes called a protocol suite for communication. Some examples of protocol suites include but are not limited to Internet protocol suites TCP/IP, IPX/SPX, X.25, AX.25, and AppleTalk. The protocol suites provide end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into different layers. 
In the explanation provided with reference toFIG.4, an example of a Wi-Fi® device communicating using the TCP/IP protocol is described in detail. Although the methods and system are described herein as being used in a Wi-Fi® device, it will be appreciated that the methods may also be applied in other communication devices which operate using any communication protocol. The TCP/IP protocol is organised into layers. These layers include the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled both in the device driver for the network hardware, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the Internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses.

FIG.4depicts two virtual platforms connected at a MAC layer level. This is an example, and it is known to a person skilled in the art that communication can be enabled at any layer in the communication protocol. It is also possible to establish the communication at the physical layer. The MAC layer is further split into the upper MAC (UMAC) layer and the lower MAC (LMAC) layer. InFIG.4, the virtual platforms are interconnected to each other at the LMAC layer. The shared device memory304bridges the two LMAC layers (404aand404b) to allow communication between the two simulated communication devices (310a,310b). Since the communication devices are connected at the LMAC layer, the physical layer is not simulated. This reduces the complexity of simulating the simulated communication device. As shown inFIG.4, when a first virtual platform is to transfer a data packet to the second virtual platform, the device driver running on the simulated virtual processor308ainteracts with the firmware312arunning on the simulated communication device310a. The firmware interacts with the UMAC layer402aregarding transmission of the data packet and the UMAC layer402ain turn communicates with the LMAC layer404aregarding the transmission. The communication between the LMAC layer404aand the UMAC layer402aremains unchanged. The UMAC layer402adoes not have any access to the shared device memory304. Typically, in a TCP/IP model, the data packet is generally sent from the LMAC layer to the physical layer before transmission. However, in this example, the transmission is enabled at the LMAC layer404aof the protocol. At the LMAC layer, the device can achieve zero errors while transmitting a data packet. This is because the Internet layer packets are prepared as frames and are transmitted without any conversion. The device driver in the first virtual platform interacts with the firmware on the simulated communication device310a, and the firmware interacts with the LMAC layer404ato copy the data packet (signal) from the memory portion allocated to the simulated communication device310ato the shared device memory304.
Further, the firmware writes into a shared register allocated to the simulated communication device310b. The simulated communication device310bupdates the shared registers in every simulated clock cycle. When a value is written into the shared register, an interrupt is raised to the simulated processor of the simulated communication device310b. The simulated processor on receiving the interrupt, causes the firmware to interact with the LMAC layer404baccess the data packet from the shared device memory304and acknowledges the interrupt raised. The transmission at the LMAC layer can be achieved because the shared device memory304is a part of the memory allocated to the device rather than a common physical memory. FIGS.5aand5bis a flowchart illustrating the method of enabling communication between two or more virtual platforms.FIG.5ais a flowchart illustrating the transmission of a data packet from a virtual platform to one or more virtual platforms. The two or more virtual platforms discussed here are executed on a physical processor on a computer system. Each virtual platform among the two or more virtual platforms comprises a virtual processor simulated by a processor simulator and a simulated communication device simulated by a device simulator. Further, each virtual processor runs a device driver that drives the simulated communication device and each simulated communication device runs firmware controlling the operations of the corresponding simulated communication device. The processor simulator and the device simulator are interconnected by an interface. Further, the virtual processor and simulated communication device in each virtual platform are allocated separate memory portions of a physical memory of the computer system. A part of a memory portion allocated to one of the simulated communication devices is configured as a shared device memory to enable communication via a shared memory-based mechanism. At step502, the method initiates a virtual platform to transfer a data packet to one or more other virtual platforms. The transfer is initiated by the device driver running on the virtual processor of the virtual platform transmitting the data packet. The device driver is software specifically configured to interact with and control a transmitter/receiver in a communication device. The device driver interacts with the firmware, and the firmware interacts with the simulated communication device in the virtual platform to transmit (transfer) a data packet to one or more virtual platforms. At step504, the method includes copying a data packet from a memory portion allocated to the virtual platform to the data portion of the shared device memory. The data packet is copied from the memory portion allocated to the simulated communication device in the virtual platform to the data portion of the shared device memory. The shared device memory is a part of a memory portion allocated to one of the simulated communication devices. The shared device memory is visible to the two or more virtual platforms running on the computer system. At step506, the method includes notifying the communication by indicating the transfer of data packet to those virtual platforms to which the data packet is transferred. The transfer is indicated by the virtual platform writing into one or more registers allocated to the one or more virtual platforms to which the data packet is transferred. 
The shared device memory includes a register portion which comprises one or more registers allocated to the simulated communication device in each virtual platform. Writing a value into the one or more registers triggers further steps that synchronise the communication between the virtual platforms.

FIG. 5b is a flowchart illustrating the method of accessing a data packet by the one or more virtual platforms to which a data packet is transmitted by a virtual platform. Each virtual platform among the two or more virtual platforms checks the allocated registers in every simulated clock cycle (at step 508). As mentioned in step 506, the communication is notified by the transmitting virtual platform writing a value into the allocated register. A virtual platform to which the data packet is transferred identifies that a data packet has been transferred to it on reading a value written into at least one of the registers allocated to it.

At step 510, for each virtual platform among the one or more virtual platforms to which the data packet is transferred, writing a value into the allocated registers raises an interrupt to the simulated processor of the corresponding simulated communication device. These interrupts, though raised internally, appear to the simulated processor as external interrupts. Further, at step 512, the firmware on the simulated communication device in each virtual platform among the one or more virtual platforms accesses the data packet from the data portion of the shared device memory on receiving the interrupt. At step 514, the firmware running on the processor of the simulated communication device further interacts with the simulated communication device to acknowledge the interrupt raised. The interrupt is acknowledged by each virtual platform writing a value into another register allocated to the corresponding simulated communication device. The virtual platform which transferred the data packet may routinely check the registers allocated to those virtual platforms to which the data is transferred. On reading a value in the registers allocated to each virtual platform to which the data is transferred, the virtual platform which transferred the data packet identifies the communication to those virtual platforms as complete. Thus, the method enables and synchronises communication between two or more virtual platforms.

FIG. 7 shows a system in which the computer system described herein may be implemented. The computer system comprises a CPU 702, a GPU 704, a memory 706 and other devices 714, such as a display 716, speakers 718 and a camera 708. A processing block 710 (corresponding to processing blocks in the computer system 100) is implemented on the CPU 702. In other examples, the processing block 710 may be implemented on the GPU 704. The components of the computer system can communicate with each other via a communications bus 720. A store 712 is implemented as part of the memory 706. While FIG. 7 illustrates one implementation of a graphics processing system, it will be understood that a similar block diagram could be drawn for an artificial intelligence accelerator system, for example by replacing either the CPU 702 or the GPU 704 with a Neural Network Accelerator (NNA), or by adding the NNA as an additional unit. In such cases, the processing block 710 can be implemented in the NNA.

The computer system described herein may be embodied in hardware on an integrated circuit. The computer system described herein may be configured to perform any of the methods described herein.
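Returning to the receive-side flow of FIG. 5b, the following sketch, again using the illustrative types introduced above, indicates how steps 512 and 514 and the sender's completion check might look. The helper copy_to_private_memory() is a hypothetical placeholder for moving the frame into the receiving device's own memory portion; it does not correspond to any element in the figures.

    /* Illustrative sketch only; continues the hypothetical shared_dev_mem_t above. */
    #include <stdint.h>

    /* Hypothetical helper: copy the frame into the receiving device's own memory portion. */
    extern void copy_to_private_memory(uint32_t device_id,
                                       const uint8_t *frame, uint32_t len);

    /* Step 512: on receiving the interrupt, the firmware reads the frame from the data
     * portion of the shared device memory.
     * Step 514: it acknowledges by writing a value into the other register allocated to
     * its device, and clears the notification it has consumed. */
    static void firmware_interrupt_handler(shared_dev_mem_t *shm, uint32_t device_id)
    {
        copy_to_private_memory(device_id, shm->data, shm->data_len);
        shm->regs[device_id].ack    = 1u;
        shm->regs[device_id].notify = 0u;
    }

    /* The sending platform routinely checks the acknowledge registers of the destination
     * platforms; once every destination has acknowledged, the communication is treated
     * as complete. */
    static int transfer_complete(const shared_dev_mem_t *shm,
                                 const uint32_t *dest_ids, uint32_t n_dest)
    {
        for (uint32_t i = 0; i < n_dest; i++)
            if (shm->regs[dest_ids[i]].ack == 0u)
                return 0;
        return 1;
    }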
Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms “module,” “functionality,” “component”, “element”, “unit”, “block” and “logic” may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine.

The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language such as C, Java or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, or executed at a virtual machine or other software environment, causes a processor of the computer system at which the executable code is supported to perform the tasks specified by the code.

A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be or comprise any kind of general purpose or dedicated processor, such as a CPU, GPU, NNA, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), or the like. A computer or computer system may comprise one or more processors.

It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a hardware implementation of the simulated communication device configured to perform any of the methods described herein, or to manufacture a hardware implementation of the simulated communication device comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description. Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a hardware implementation of the simulated communication device as described herein.
Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a hardware implementation of the simulated communication device to be performed. An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining hardware suitable for manufacture in an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS® and GDSII. Higher level representations which logically define hardware suitable for manufacture in an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit.

An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a hardware implementation of the simulated communication device will now be described with respect to FIG. 8.

FIG. 8 shows an example of an integrated circuit (IC) manufacturing system 802 which is configured to manufacture a hardware implementation of the simulated communication device as described in any of the examples herein. In particular, the IC manufacturing system 802 comprises a layout processing system 804 and an integrated circuit generation system 806. The IC manufacturing system 802 is configured to receive an IC definition dataset (e.g. defining a hardware implementation of the simulated communication device as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset (e.g. which embodies a hardware implementation of the simulated communication device as described in any of the examples herein). The processing of the IC definition dataset configures the IC manufacturing system 802 to manufacture an integrated circuit embodying a hardware implementation of the simulated communication device as described in any of the examples herein.

The layout processing system 804 is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components.
This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system 804 has determined the circuit layout, it may output a circuit layout definition to the IC generation system 806. A circuit layout definition may be, for example, a circuit layout description.

The IC generation system 806 generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system 806 may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photolithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system 806 may be in the form of computer-readable code which the IC generation system 806 can use to form a suitable mask for use in generating an IC.

The different processes performed by the IC manufacturing system 802 may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system 802 may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties.

In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a hardware implementation of the simulated communication device without the IC definition dataset being processed so as to determine a circuit layout. For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g. by loading configuration data to the FPGA).

In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect to FIG. 8 by an integrated circuit manufacturing definition dataset may cause a device as described herein to be manufactured. In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset.
In the example shown in FIG. 8, the IC generation system may further be configured by an integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the integrated circuit definition dataset or otherwise provide program code with the integrated circuit for use with the integrated circuit.

The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. in integrated circuits), performance improvements can be traded off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget.

The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.